Discussion: Performance issues with Compaq server
I tried to load data using INSERT statements into our new database servers (Compaq DL580, DL590, DL380 using SCSI drives), but it was very, very slow. The database runs on Red Hat Linux 7.2 and PostgreSQL 7.1.3
Does anybody have any idea what causes this problem, or things that I need to look at or check (in the config file)?
I appreciate any help.
Regards,
Samuel
On Tue, May 07, 2002 at 06:40:32PM -0400, Samuel J. Sutjiono wrote: > I tried to load data using INSERT statements into our new database servers > (Compaq DL580, DL590, DL380 using SCSI drives), but it was very, very slow. > The database runs on Red Hat Linux 7.2 and PostgreSQL 7.1.3 > > Does anybody have any idea what causes this problem, or things that I > need to look at or check (in the config file)? Put them in a transaction (BEGIN/COMMIT). Without that, each insert becomes its own transaction, which is rather expensive. PostgreSQL 7.2 improves this a bit, but the transaction will help anyway. HTH, -- Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/ > Canada, Mexico, and Australia form the Axis of Nations That > Are Actually Quite Nice But Secretly Have Nasty Thoughts About America
On Wed, 8 May 2002, Martijn van Oosterhout wrote: > On Tue, May 07, 2002 at 06:40:32PM -0400, Samuel J. Sutjiono wrote: > > > I tried to load data using INSERT statements into our new database servers > > (Compaq DL580, DL590, DL380 using SCSI drives), but it was very, very slow. > > The database runs on Red Hat Linux 7.2 and PostgreSQL 7.1.3 > > > > Does anybody have any idea what causes this problem, or things that I > > need to look at or check (in the config file)? > > Put them in a transaction (BEGIN/COMMIT). Without that, each insert becomes > its own transaction, which is rather expensive. PostgreSQL 7.2 improves this > a bit, but the transaction will help anyway. ACK. On given hardware I get about 150 inserts per second. Using a begin/end transaction around a group of 100 inserts speeds it up to about 450 inserts per second. But beware: if one insert fails (duplicate key, faulty data), then you have to re-insert the remaining rows as single transactions, or else all rows of the previous transaction are discarded. -- PGP/GPG Key-ID: http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1
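The batching Holger describes can be sketched as follows, reusing the cam table from later in this thread (the sample values are made up for illustration):

```sql
-- Sketch: wrap a batch of INSERTs in one transaction so the commit
-- cost is paid once per batch instead of once per row.
-- Sample values are hypothetical.
BEGIN;
INSERT INTO cam (host, slot, port, mac) VALUES ('sw01', 1, 1, '00:02:b3:11:22:33');
INSERT INTO cam (host, slot, port, mac) VALUES ('sw01', 1, 2, '00:02:b3:44:55:66');
-- ... up to ~100 rows per batch ...
COMMIT;
```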
Holger Marzen <holger@marzen.de> writes: > ACK. On given hardware I get about 150 inserts per second. Using a > begin/end transaction around a group of 100 inserts speeds it up to about > 450 inserts per second. COPY is even faster, as there is less query parsing to be done, plus you get a transaction per COPY statement even without BEGIN/END. > But beware: if one insert fails (duplicate key, faulty data), then you > have to re-insert the remaining rows as single transactions, or else all > rows of the previous transaction are discarded. Hmm, don't you have to ROLLBACK and redo the whole transaction without the offending row(s), since you can't commit while in ABORT state? Or am I misunderstanding? -Doug
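As a sketch, the COPY form Doug mentions looks like this for the cam table (column order is assumed; in 7.x the inline data is tab-delimited and terminated by a backslash-period line):

```sql
-- Sketch: load rows with one COPY statement instead of many INSERTs.
-- Fields below are tab-separated; \N would mark a NULL.
COPY cam FROM stdin;
sw01	1	1	00:02:b3:11:22:33
sw01	1	2	00:02:b3:44:55:66
\.
```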
Hi all, This is a simple question, but for the life of me, I can't see what is wrong with this query: select host,slot,port,count(mac) as d from cam group by host,slot,port having d>1; I'm running 7.2. Thanx in advance, -- Mike Diehl Network Tools Devl. Sandia Labs (505) 284-3137
On Wednesday 08 May 2002 22:58, Mike Diehl wrote: > Hi all, > > This is a simple question, but for the life of me, I can't see what is > wrong with this query: > > select host,slot,port,count(mac) as d from cam group by host,slot,port > having d>1; Try: select host,slot,port,count(mac) as d from cam group by host,slot,port having count(mac) > 1 See: http://www.postgresql.org/idocs/index.php?sql-select.html "output_name Specifies another name for an output column using the AS clause. This name is primarily used to label the column for display. It can also be used to refer to the column's value in ORDER BY and GROUP BY clauses. But the output_name cannot be used in the WHERE or HAVING clauses; write out the expression instead." HTH Ian Barwick
On 8 May 2002, Doug McNaught wrote: > Holger Marzen <holger@marzen.de> writes: > > > ACK. On given hardware I get about 150 inserts per second. Using a > > begin/end transaction around a group of 100 inserts speeds it up to about > > 450 inserts per second. > > COPY is even faster, as there is less query parsing to be done, plus > you get a transaction per COPY statement even without BEGIN/END. Yes, but I wanted to change something in some rows, so I used Perl and INSERT. > > But beware: if one insert fails (duplicate key, faulty data), then you > > have to re-insert the remaining rows as single transactions, or else all > > rows of the previous transaction are discarded. > > Hmm, don't you have to ROLLBACK and redo the whole transaction without > the offending row(s), since you can't commit while in ABORT state? Or > am I misunderstanding? Postgres complains and doesn't accept any further inserts after a failed one until the end of the transaction. I haven't yet had the time to figure out whether it rolls back the preceding inserts. Is there a rule in the SQL standard that describes what should happen if some statements in a transaction fail and the program issues a commit? -- PGP/GPG Key-ID: http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1
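A minimal sketch of the behaviour under discussion, assuming (hypothetically) a unique constraint on (host, slot, port, mac) so that the second row fails:

```sql
BEGIN;
INSERT INTO cam (host, slot, port, mac) VALUES ('sw01', 1, 1, '00:02:b3:11:22:33');
-- The next INSERT violates the (assumed) unique constraint and fails:
INSERT INTO cam (host, slot, port, mac) VALUES ('sw01', 1, 1, '00:02:b3:11:22:33');
-- Every later statement in this transaction is now rejected until it ends:
INSERT INTO cam (host, slot, port, mac) VALUES ('sw01', 1, 2, '00:02:b3:44:55:66');
COMMIT;  -- takes effect as a rollback: none of the rows above survive
```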
On Wed, 8 May 2002, Holger Marzen wrote: > > COPY is even faster as there is less query parsing to be done, plus > > you get a transaction per COPY statement even without BEGIN/END. > > Yes, but I wanted to change something in some rows, so I used perl and > insert. Why not use Perl and COPY? cjs -- Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org Don't you know, in this new Dark Age, we're all light. --XTC
On Thu, 9 May 2002, Curt Sampson wrote: > On Wed, 8 May 2002, Holger Marzen wrote: > > > > COPY is even faster as there is less query parsing to be done, plus > > > you get a transaction per COPY statement even without BEGIN/END. > > > > Yes, but I wanted to change something in some rows, so I used perl and > > insert. > > Why not use Perl and COPY? A good idea. I'll try it if the customer complains about the speed. -- PGP/GPG Key-ID: http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1
Yep, this fixed it. Thanx, On Wednesday 08 May 2002 05:13 pm, Ian Barwick wrote: > On Wednesday 08 May 2002 22:58, Mike Diehl wrote: > > Hi all, > > > > This is a simple question, but for the life of me, I can't see what > > is wrong with this query: > > > > select host,slot,port,count(mac) as d from cam group by > > host,slot,port having d>1; > > Try: > > select host,slot,port,count(mac) as d from cam group by host,slot,port > having count(mac) > 1 > > See: > > http://www.postgresql.org/idocs/index.php?sql-select.html > > "output_name > > Specifies another name for an output column using the AS clause. This > name is primarily used to label the column for display. It can also be > used to refer to the column's value in ORDER BY and GROUP BY clauses. > But the output_name cannot be used in the WHERE or HAVING clauses; > write out the expression instead." > > HTH > > Ian Barwick -- Mike Diehl Network Tools Devl. Sandia Labs (505) 284-3137
The Compaq servers you mention use the Smart Array controller, which is very slow at writing data to disk. Are you using RAID? I suggest you buy a controller with a larger write cache (the Smart Array controller does not have a write cache)... You should see a large performance improvement... Also, do not forget to set fsync to false and to use the COPY command whenever possible... Bye! -- Doct. Eng. Denis Gasparin: denis@edistar.com --------------------------- Programmer & System Administrator - Edistar srl On Wed, 2002-05-08 at 00:40, Samuel J. Sutjiono wrote: > I tried to load data using INSERT statements into our new database servers (Compaq DL580, DL590, DL380 using SCSI drives), but it was very, very slow. The database runs on Red Hat Linux 7.2 and PostgreSQL 7.1.3 > > Does anybody have any idea what causes this problem, or things that I need to look at or check (in the config file)? > > I appreciate any help. > > Regards, > Samuel
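The fsync suggestion corresponds to a one-line postgresql.conf change. Note the trade-off: with fsync off, an OS crash or power loss can corrupt the database, so it is mainly appropriate for bulk loads you could repeat from scratch.

```
# postgresql.conf fragment (PostgreSQL 7.1.x)
# Skipping fsync() speeds up writes but sacrifices crash safety.
fsync = false
```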