Discussion: improving write performance for logging application


improving write performance for logging application

From:
Steve Eckmann
Date:
I have questions about how to improve the write performance of PostgreSQL for logging data from a real-time simulation. We found that MySQL 4.1.3 could log about 1480 objects/second using MyISAM tables or about 1225 objects/second using InnoDB tables, but PostgreSQL 8.0.3 could log only about 540 objects/second. (test system: quad-Itanium2, 8GB memory, SCSI RAID, GigE connection from simulation server, nothing running except system processes and database system under test)

We also found that we could improve MySQL performance significantly using MySQL's "INSERT" command extension allowing multiple value-list tuples in a single command; the rate for MyISAM tables improved to about 2600 objects/second. PostgreSQL doesn't support that language extension. Using the COPY command instead of INSERT might help, but since rows are being generated on the fly, I don't see how to use COPY without running a separate process that reads rows from the application and uses COPY to write to the database. The application currently has two processes: the simulation and a data collector that reads events from the sim (queued in shared memory) and writes them as rows to the database, buffering as needed to avoid lost data during periods of high activity. To use COPY I think we would have to split our data collector into two processes communicating via a pipe.

Query performance is not an issue: we found that when suitable indexes are added PostgreSQL is fast enough on the kinds of queries our users make. The crux is writing rows to the database fast enough to keep up with the simulation.

Are there general guidelines for tuning the PostgreSQL server for this kind of application? The suggestions I've found include disabling fsync (done), increasing the value of wal_buffers, and moving the WAL to a different disk, but these aren't likely to produce the 3x improvement that we need. On the client side I've found only two suggestions: disable autocommit and use COPY instead of INSERT. I think I've effectively disabled autocommit by batching up to several hundred INSERT commands in each PQexec() call, and it isn't clear that COPY is worth the effort in our application.

Thanks.


Re: improving write performance for logging application

From:
Tom Lane
Date:
Steve Eckmann <eckmann@computer.org> writes:
> We also found that we could improve MySQL performance significantly
> using MySQL's "INSERT" command extension allowing multiple value-list
> tuples in a single command; the rate for MyISAM tables improved to
> about 2600 objects/second. PostgreSQL doesn't support that language
> extension. Using the COPY command instead of INSERT might help, but
> since rows are being generated on the fly, I don't see how to use COPY
> without running a separate process that reads rows from the
> application and uses COPY to write to the database.

Can you conveniently alter your application to batch INSERT commands
into transactions?  Ie

    BEGIN;
    INSERT ...;
    ... maybe 100 or so inserts ...
    COMMIT;
    BEGIN;
    ... lather, rinse, repeat ...

This cuts down the transactional overhead quite a bit.  A downside is
that you lose multiple rows if any INSERT fails, but then the same would
be true of multiple VALUES lists per INSERT.

            regards, tom lane

Re: improving write performance for logging application

From:
"Steinar H. Gunderson"
Date:
On Tue, Jan 03, 2006 at 04:44:28PM -0700, Steve Eckmann wrote:
> Are there general guidelines for tuning the PostgreSQL server for this kind
> of application? The suggestions I've found include disabling fsync (done),

Are you sure you really want this? The results could be catastrophic in case
of a crash.

> On the client side I've found only two suggestions: disable autocommit and
> use COPY instead of INSERT. I think I've effectively disabled autocommit by
> batching up to several hundred INSERT commands in each PQexec() call, and
> it isn’t clear that COPY is worth the effort in our application.

I'm a bit confused here: How can you batch multiple INSERTs into large
statements for MySQL, but not batch multiple INSERTs into COPY statements for
PostgreSQL?

Anyhow, putting it all inside one transaction (or a few) is likely to help
quite a lot, but of course less when you have fsync=false. Bundling multiple
statements in each PQexec() call won't really give you that; you'll have to
tell the database so explicitly.

/* Steinar */
--
Homepage: http://www.sesse.net/
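
For illustration, here is a minimal libpq sketch of the explicit transaction batching that Tom and Steinar describe. The function, the open connection, and the array of pre-built INSERT strings are assumptions for the sketch, not code from the original application:

    #include <libpq-fe.h>
    #include <stdio.h>

    /* Sketch only: send a batch of pre-built INSERT statements in one
     * explicit transaction.  conn is an open PGconn; sql[] holds strings
     * like "INSERT INTO events VALUES (...)". */
    static void flush_batch(PGconn *conn, const char **sql, int n)
    {
        PGresult *res = PQexec(conn, "BEGIN");
        PQclear(res);

        for (int i = 0; i < n; i++) {
            res = PQexec(conn, sql[i]);
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
            PQclear(res);
        }

        /* If any INSERT failed, the server has aborted the transaction and
         * this COMMIT becomes a rollback, so the whole batch is lost
         * (the downside Tom mentions above). */
        res = PQexec(conn, "COMMIT");
        PQclear(res);
    }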

Re: improving write performance for logging application

From:
dlang
Date:

On Tue, 3 Jan 2006, Tom Lane wrote:

> Steve Eckmann <eckmann@computer.org> writes:
> > We also found that we could improve MySQL performance significantly
> > using MySQL's "INSERT" command extension allowing multiple value-list
> > tuples in a single command; the rate for MyISAM tables improved to
> > about 2600 objects/second. PostgreSQL doesn't support that language
> > extension. Using the COPY command instead of INSERT might help, but
> > since rows are being generated on the fly, I don't see how to use COPY
> > without running a separate process that reads rows from the
> > application and uses COPY to write to the database.
>
> Can you conveniently alter your application to batch INSERT commands
> into transactions?  Ie
>
>     BEGIN;
>     INSERT ...;
>     ... maybe 100 or so inserts ...
>     COMMIT;
>     BEGIN;
>     ... lather, rinse, repeat ...
>
> This cuts down the transactional overhead quite a bit.  A downside is
> that you lose multiple rows if any INSERT fails, but then the same would
> be true of multiple VALUES lists per INSERT.

Steve, you mentioned that your data collector buffers the data before
sending it to the database. Modify it so that each time it goes to send
things to the database, you send all the data that's in the buffer as a
single transaction.

I am working on using postgres to deal with log data and wrote a simple
perl script that read in the log files a line at a time, and then wrote
them 1000 at a time to the database. On a dual Opteron 240 box with 2G of
RAM and one 15krpm SCSI drive (and an untuned postgres install with the
compile-time defaults) I was getting 5000-8000 lines/sec (I think this was
with fsync disabled, but I don't remember for sure), and postgres was
complaining that it was overrunning its log sizes (which limits the speed,
as it then has to pause to flush the logs).

The key thing is to send multiple lines in one transaction, as Tom shows
above.

David Lang


Re: improving write performance for logging application

From:
Ian Westmacott
Date:
We have a similar application that's doing upwards of 2B inserts
per day.  We have spent a lot of time optimizing this, and found the
following to be most beneficial:

1)  use COPY (BINARY if possible)
2)  don't use triggers or foreign keys
3)  put WAL and tables on different spindles (channels if possible)
4)  put as much as you can in each COPY, and put as many COPYs as
    you can in a single transaction.
5)  watch out for XID wraparound
6)  tune checkpoint* and bgwriter* parameters for your I/O system
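
For illustration, the batching in points 1 and 4 might look roughly like the following; the table and file names are placeholders, and a server-side COPY FROM a file requires the file to be readable by the server process:

    BEGIN;
    COPY flying_object_state FROM '/staging/fos.bin' BINARY;
    COPY sensor_event FROM '/staging/sensor.bin' BINARY;
    -- ... one COPY per event table that has pending rows ...
    COMMIT;

A client that builds rows in memory can get the same effect with COPY ... FROM STDIN through libpq, as Kelly Burkhart shows later in the thread.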

On Tue, 2006-01-03 at 16:44 -0700, Steve Eckmann wrote:
> I have questions about how to improve the write performance of PostgreSQL for logging data from a real-time simulation. We found that MySQL 4.1.3 could log about 1480 objects/second using MyISAM tables or about 1225 objects/second using InnoDB tables, but PostgreSQL 8.0.3 could log only about 540 objects/second. (test system: quad-Itanium2, 8GB memory, SCSI RAID, GigE connection from simulation server, nothing running except system processes and database system under test)
>
> We also found that we could improve MySQL performance significantly using MySQL's "INSERT" command extension allowing multiple value-list tuples in a single command; the rate for MyISAM tables improved to about 2600 objects/second. PostgreSQL doesn't support that language extension. Using the COPY command instead of INSERT might help, but since rows are being generated on the fly, I don't see how to use COPY without running a separate process that reads rows from the application and uses COPY to write to the database. The application currently has two processes: the simulation and a data collector that reads events from the sim (queued in shared memory) and writes them as rows to the database, buffering as needed to avoid lost data during periods of high activity. To use COPY I think we would have to split our data collector into two processes communicating via a pipe.
>
> Query performance is not an issue: we found that when suitable indexes are added PostgreSQL is fast enough on the kinds of queries our users make. The crux is writing rows to the database fast enough to keep up with the simulation.
>
> Are there general guidelines for tuning the PostgreSQL server for this kind of application? The suggestions I've found include disabling fsync (done), increasing the value of wal_buffers, and moving the WAL to a different disk, but these aren't likely to produce the 3x improvement that we need. On the client side I've found only two suggestions: disable autocommit and use COPY instead of INSERT. I think I've effectively disabled autocommit by batching up to several hundred INSERT commands in each PQexec() call, and it isn't clear that COPY is worth the effort in our application.
>
> Thanks.
>
>
--
Ian Westmacott <ianw@intellivid.com>
Intellivid Corp.


Re: improving write performance for logging application

From:
Steve Eckmann
Date:
Tom Lane wrote:
Steve Eckmann <eckmann@computer.org> writes: 
We also found that we could improve MySQL performance significantly
using MySQL's "INSERT" command extension allowing multiple value-list
tuples in a single command; the rate for MyISAM tables improved to
about 2600 objects/second. PostgreSQL doesn't support that language
extension. Using the COPY command instead of INSERT might help, but
since rows are being generated on the fly, I don't see how to use COPY
without running a separate process that reads rows from the
application and uses COPY to write to the database.   
Can you conveniently alter your application to batch INSERT commands
into transactions?  Ie
    BEGIN;
    INSERT ...;
    ... maybe 100 or so inserts ...
    COMMIT;
    BEGIN;
    ... lather, rinse, repeat ...

This cuts down the transactional overhead quite a bit.  A downside is
that you lose multiple rows if any INSERT fails, but then the same would
be true of multiple VALUES lists per INSERT.
		regards, tom lane 
Thanks for the suggestion, Tom. Yes, I think I could do that. But I thought what I was doing now was effectively the same, because the PostgreSQL 8.0.0 Documentation says (section 27.3.1): "It is allowed to include multiple SQL commands (separated by semicolons) in the command string. Multiple queries sent in a single PQexec call are processed in a single transaction...." Our simulation application has nearly 400 event types, each of which is a C++ class for which we have a corresponding database table. So every thousand events or so I issue one PQexec() call for each event type that has unlogged instances, sending INSERT commands for all instances. For example,

    PQexec(dbConn, "INSERT INTO FlyingObjectState VALUES (...); INSERT INTO FlyingObjectState VALUES (...); ...");

My thought was that this would be a good compromise between minimizing transactions (one per event class per buffering interval instead of one per event) and minimizing disk seeking (since all queries in a single transaction insert rows into the same table). Am I overlooking something here? One thing I haven't tried is increasing the buffering interval from 1000 events to, say, 10,000. It turns out that 1000 is a good number for Versant, the object database system we're replacing, and for MySQL, so I assumed it would be a good number for PostgreSQL, too.

Regards,  Steve

Re: improving write performance for logging application

From:
Steve Eckmann
Date:
Steinar H. Gunderson wrote:
On Tue, Jan 03, 2006 at 04:44:28PM -0700, Steve Eckmann wrote: 
Are there general guidelines for tuning the PostgreSQL server for this kind 
of application? The suggestions I've found include disabling fsync (done),   
Are you sure you really want this? The results could be catastrophic in case
of a crash.
 
On the client side I've found only two suggestions: disable autocommit and 
use COPY instead of INSERT. I think I've effectively disabled autocommit by 
batching up to several hundred INSERT commands in each PQexec() call, and 
it isn’t clear that COPY is worth the effort in our application.   
I'm a bit confused here: How can you batch multiple INSERTs into large
statements for MySQL, but not batch multiple INSERTs into COPY statements for
PostgreSQL?

Anyhow, putting it all inside one transaction (or a few) is likely to help
quite a lot, but of course less when you have fsync=false. Bundling multiple
statements in each PQexec() call won't really give you that; you'll have to
tell the database so explicitly.

/* Steinar */ 
Thanks, Steinar. I don't think we would really run with fsync off, but I need to document the performance tradeoffs. You're right that my explanation was confusing; probably because I'm confused about how to use COPY! I could batch multiple INSERTS using COPY statements, I just don't see how to do it without adding another process to read from STDIN, since the application that is currently the database client is constructing rows on the fly. I would need to get those rows into some process's STDIN stream or into a server-side file before COPY could be used, right?

Your comment about bundling multiple statements in each PQexec() call seems to disagree with a statement in 27.3.1 that I interpret as saying each PQexec() call corresponds to a single transaction. Are you sure my interpretation is wrong?

Regards, Steve

Re: improving write performance for logging application

From:
Steve Eckmann
Date:
dlang wrote:
On Tue, 3 Jan 2006, Tom Lane wrote:
 
Steve Eckmann <eckmann@computer.org> writes:   
We also found that we could improve MySQL performance significantly
using MySQL's "INSERT" command extension allowing multiple value-list
tuples in a single command; the rate for MyISAM tables improved to
about 2600 objects/second. PostgreSQL doesn't support that language
extension. Using the COPY command instead of INSERT might help, but
since rows are being generated on the fly, I don't see how to use COPY
without running a separate process that reads rows from the
application and uses COPY to write to the database.     
Can you conveniently alter your application to batch INSERT commands
into transactions?  Ie
    BEGIN;
    INSERT ...;
    ... maybe 100 or so inserts ...
    COMMIT;
    BEGIN;
    ... lather, rinse, repeat ...

This cuts down the transactional overhead quite a bit.  A downside is
that you lose multiple rows if any INSERT fails, but then the same would
be true of multiple VALUES lists per INSERT.   
Steve, you mentioned that your data collector buffers the data before
sending it to the database. Modify it so that each time it goes to send
things to the database, you send all the data that's in the buffer as a
single transaction.

I am working on using postgres to deal with log data and wrote a simple
perl script that read in the log files a line at a time, and then wrote
them 1000 at a time to the database. On a dual Opteron 240 box with 2G of
RAM and one 15krpm SCSI drive (and an untuned postgres install with the
compile-time defaults) I was getting 5000-8000 lines/sec (I think this was
with fsync disabled, but I don't remember for sure), and postgres was
complaining that it was overrunning its log sizes (which limits the speed,
as it then has to pause to flush the logs).

The key thing is to send multiple lines in one transaction, as Tom shows
above.

David Lang
Thanks, David. I will look more carefully at how to batch multiple rows per PQexec() call.  Regards, Steve.

Re: improving write performance for logging application

From:
Steve Eckmann
Date:
Ian Westmacott wrote:

>We have a similar application that's doing upwards of 2B inserts
>per day.  We have spent a lot of time optimizing this, and found the
>following to be most beneficial:
>
>1)  use COPY (BINARY if possible)
>2)  don't use triggers or foreign keys
>3)  put WAL and tables on different spindles (channels if possible)
>4)  put as much as you can in each COPY, and put as many COPYs as
>    you can in a single transaction.
>5)  watch out for XID wraparound
>6)  tune checkpoint* and bgwriter* parameters for your I/O system
>
Thanks, Ian. I will look at how to implement your suggestions.

Regards, Steve

Re: improving write performance for logging

From:
Ron
Date:
2B is a lot of inserts.  If you had to guess,
what do you think is the maximum number of inserts you could do in a day?

How large is each record being inserted?

How much can you put in a COPY and how many COPYs
can you put into a transaction?

What values are you using for bgwriter* and checkpoint*?

What HW are you running on, and what kind of performance do you typically get?

Inquiring minds definitely want to know ;-)
Ron


At 08:54 AM 1/4/2006, Ian Westmacott wrote:
>We have a similar application that's doing upwards of 2B inserts
>per day.  We have spent a lot of time optimizing this, and found the
>following to be most beneficial:
>
>1)  use COPY (BINARY if possible)
>2)  don't use triggers or foreign keys
>3)  put WAL and tables on different spindles (channels if possible)
>4)  put as much as you can in each COPY, and put as many COPYs as
>     you can in a single transaction.
>5)  watch out for XID wraparound
>6)  tune checkpoint* and bgwriter* parameters for your I/O system
>
>On Tue, 2006-01-03 at 16:44 -0700, Steve Eckmann wrote:
> > I have questions about how to improve the
> write performance of PostgreSQL for logging
> data from a real-time simulation. We found that
> MySQL 4.1.3 could log about 1480 objects/second
> using MyISAM tables or about 1225
> objects/second using InnoDB tables, but
> PostgreSQL 8.0.3 could log only about 540
> objects/second. (test system: quad-Itanium2,
> 8GB memory, SCSI RAID, GigE connection from
> simulation server, nothing running except
> system processes and database system under test)
> >
> > We also found that we could improve MySQL
> performance significantly using MySQL's
> "INSERT" command extension allowing multiple
> value-list tuples in a single command; the rate
> for MyISAM tables improved to about 2600
> objects/second. PostgreSQL doesn't support that
> language extension. Using the COPY command
> instead of INSERT might help, but since rows
> are being generated on the fly, I don't see how
> to use COPY without running a separate process
> that reads rows from the application and uses
> COPY to write to the database. The application
> currently has two processes: the simulation and
> a data collector that reads events from the sim
> (queued in shared memory) and writes them as
> rows to the database, buffering as needed to
> avoid lost data during periods of high
> activity. To use COPY I think we would have to
> split our data collector into two processes communicating via a pipe.
> >
> > Query performance is not an issue: we found
> that when suitable indexes are added PostgreSQL
> is fast enough on the kinds of queries our
> users make. The crux is writing rows to the
> database fast enough to keep up with the simulation.
> >
> > Are there general guidelines for tuning the
> PostgreSQL server for this kind of application?
> The suggestions I've found include disabling
> fsync (done), increasing the value of
> wal_buffers, and moving the WAL to a different
> disk, but these aren't likely to produce the 3x
> improvement that we need. On the client side
> I've found only two suggestions: disable
> autocommit and use COPY instead of INSERT. I
> think I've effectively disabled autocommit by
> batching up to several hundred INSERT commands
> in each PQexec() call, and it isn’t clear
> that COPY is worth the effort in our application.
> >
> > Thanks.
> >
> >
>--
>Ian Westmacott <ianw@intellivid.com>
>Intellivid Corp.
>
>




Re: improving write performance for logging application

From:
Tom Lane
Date:
Steve Eckmann <eckmann@computer.org> writes:
> Thanks for the suggestion, Tom. Yes, I think I could do that. But I
> thought what I was doing now was effectively the same, because the
> PostgreSQL 8.0.0 Documentation says (section 27.3.1): "It is allowed to
> include multiple SQL commands (separated by semicolons) in the command
> string. Multiple queries sent in a single PQexec call are processed in a
> single transaction...." Our simulation application has nearly 400 event
> types, each of which is a C++ class for which we have a corresponding
> database table. So every thousand events or so I issue one PQexec() call
> for each event type that has unlogged instances, sending INSERT commands
> for all instances. For example,

>     PQexec(dbConn, "INSERT INTO FlyingObjectState VALUES (...); INSERT
> INTO FlyingObjectState VALUES (...); ...");

Hmm.  I'm not sure if that's a good idea or not.  You're causing the
server to take 1000 times the normal amount of memory to hold the
command parsetrees, and if there are any O(N^2) behaviors in parsing
you could be getting hurt badly by that.  (I'd like to think there are
not, but would definitely not swear to it.)  OTOH you're reducing the
number of network round trips which is a good thing.  Have you actually
measured to see what effect this approach has?  It might be worth
building a test server with profiling enabled to see if the use of such
long command strings creates any hot spots in the profile.

            regards, tom lane
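
For illustration, one rough way to measure that on the client side; the helper below and its arguments (the batched INSERT string and its row count) are hypothetical, not part of the original application:

    #include <libpq-fe.h>
    #include <stdio.h>
    #include <sys/time.h>

    /* Time one batched PQexec() call and report the insert rate. */
    static void timed_exec(PGconn *conn, const char *batched_sql, int n_rows)
    {
        struct timeval t0, t1;
        gettimeofday(&t0, NULL);

        PGresult *res = PQexec(conn, batched_sql);   /* "INSERT ...; INSERT ...;" */
        PQclear(res);

        gettimeofday(&t1, NULL);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        fprintf(stderr, "%d rows in %.3f s (%.0f rows/sec)\n",
                n_rows, secs, n_rows / secs);
    }

Comparing the reported rate for different batch sizes (and against single-INSERT calls) would show whether the long command strings help or hurt.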

Re: improving write performance for logging application

From:
Kelly Burkhart
Date:
On 1/4/06, Steve Eckmann <eckmann@computer.org> wrote:
Thanks, Steinar. I don't think we would really run with fsync off, but I need to document the performance tradeoffs. You're right that my explanation was confusing; probably because I'm confused about how to use COPY! I could batch multiple INSERTS using COPY statements, I just don't see how to do it without adding another process to read from STDIN, since the application that is currently the database client is constructing rows on the fly. I would need to get those rows into some process's STDIN stream or into a server-side file before COPY could be used, right?

Steve,

You can use COPY without resorting to another process.  See the libpq documentation for "Functions Associated with the COPY Command".  We do something like this:

char *mbuf;

// allocate space and fill mbuf with appropriately formatted data somehow

PQexec( conn, "begin" );
PQexec( conn, "copy mytable from stdin" );
PQputCopyData( conn, mbuf, strlen(mbuf) );
PQputCopyEnd( conn, NULL );
PQexec( conn, "commit" );

-K
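
For completeness, a slightly fuller sketch of the same libpq COPY pattern with result checking; the function, table name, and row format are hypothetical, not from Kelly's application:

    #include <libpq-fe.h>
    #include <string.h>

    /* rows must already be in COPY text format: tab-separated columns,
     * one row per newline.  Returns 0 on success, -1 on any failure. */
    static int copy_batch(PGconn *conn, const char *rows)
    {
        PGresult *res = PQexec(conn, "BEGIN");
        if (PQresultStatus(res) != PGRES_COMMAND_OK) { PQclear(res); return -1; }
        PQclear(res);

        res = PQexec(conn, "COPY mytable FROM STDIN");
        if (PQresultStatus(res) != PGRES_COPY_IN) { PQclear(res); return -1; }
        PQclear(res);

        if (PQputCopyData(conn, rows, (int) strlen(rows)) != 1) return -1;
        if (PQputCopyEnd(conn, NULL) != 1) return -1;

        res = PQgetResult(conn);           /* final status of the COPY itself */
        if (PQresultStatus(res) != PGRES_COMMAND_OK) { PQclear(res); return -1; }
        PQclear(res);

        res = PQexec(conn, "COMMIT");
        if (PQresultStatus(res) != PGRES_COMMAND_OK) { PQclear(res); return -1; }
        PQclear(res);
        return 0;
    }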

Re: improving write performance for logging

From:
Ian Westmacott
Date:
On Wed, 2006-01-04 at 09:29 -0500, Ron wrote:
> 2B is a lot of inserts.  If you had to guess,
> what do you think is the maximum number of inserts you could do in a day?

It seems we are pushing it there.  Our intentions are to scale much
further, but our plans are to distribute at this point.

> How large is each record being inserted?

They are small: 32 (data) bytes.

> How much can you put in a COPY and how many COPYs
> can you put into a transactions?

These are driven by the application; we do about 60 COPYs and a couple
dozen INSERT/UPDATE/DELETEs in a single transaction.  Each COPY is
doing a variable number of rows, up to several hundred.  We do 15 of
these transactions per second.

> What values are you using for bgwriter* and checkpoint*?

bgwriter is 100%/500 pages, and checkpoint is 50 segments/300 seconds.
wal_buffers doesn't do much for us, and fsync is enabled.
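
For reference, those settings correspond roughly to the following postgresql.conf entries, assuming the 8.0-era parameter names (the bgwriter settings were renamed in later releases):

    checkpoint_segments = 50      # WAL segments between forced checkpoints
    checkpoint_timeout = 300      # seconds between forced checkpoints
    bgwriter_percent = 100        # share of dirty buffers scanned per round
    bgwriter_maxpages = 500       # max pages written per bgwriter round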

> What HW on you running on and what kind of performance do you typically get?

The WAL is a 2-spindle (SATA) RAID0 with its own controller (ext3).
The tables are on a 10-spindle (SCSI) RAID50 with dual U320
controllers (XFS).  This is overkill for writing and querying the data,
but we need to constantly ANALYZE and VACUUM in the
background without interrupting the inserts (the app is 24x7).  The
databases are 4TB, so these operations can be lengthy.

--
Ian Westmacott <ianw@intellivid.com>
Intellivid Corp.


Re: improving write performance for logging application

From:
Steve Eckmann
Date:
Tom Lane wrote:
Steve Eckmann <eckmann@computer.org> writes: 
Thanks for the suggestion, Tom. Yes, I think I could do that. But I
thought what I was doing now was effectively the same, because the
PostgreSQL 8.0.0 Documentation says (section 27.3.1): "It is allowed to
include multiple SQL commands (separated by semicolons) in the command
string. Multiple queries sent in a single PQexec call are processed in a
single transaction...." Our simulation application has nearly 400 event
types, each of which is a C++ class for which we have a corresponding
database table. So every thousand events or so I issue one PQexec() call
for each event type that has unlogged instances, sending INSERT commands
for all instances. For example,
    PQexec(dbConn, "INSERT INTO FlyingObjectState VALUES (...); INSERT 
INTO FlyingObjectState VALUES (...); ...");   
Hmm.  I'm not sure if that's a good idea or not.  You're causing the
server to take 1000 times the normal amount of memory to hold the
command parsetrees, and if there are any O(N^2) behaviors in parsing
you could be getting hurt badly by that.  (I'd like to think there are
not, but would definitely not swear to it.)  OTOH you're reducing the
number of network round trips which is a good thing.  Have you actually
measured to see what effect this approach has?  It might be worth
building a test server with profiling enabled to see if the use of such
long command strings creates any hot spots in the profile.
		regards, tom lane 
No, I haven't measured it. I will compare this approach with others that have been suggested. Thanks.  -steve

Re: improving write performance for logging application

From:
Steve Eckmann
Date:
Kelly Burkhart wrote:
On 1/4/06, Steve Eckmann <eckmann@computer.org> wrote:
Thanks, Steinar. I don't think we would really run with fsync off, but I need to document the performance tradeoffs. You're right that my explanation was confusing; probably because I'm confused about how to use COPY! I could batch multiple INSERTS using COPY statements, I just don't see how to do it without adding another process to read from STDIN, since the application that is currently the database client is constructing rows on the fly. I would need to get those rows into some process's STDIN stream or into a server-side file before COPY could be used, right?

Steve,

You can use COPY without resorting to another process.  See the libpq documentation for "Functions Associated with the COPY Command".  We do something like this:

char *mbuf;

// allocate space and fill mbuf with appropriately formatted data somehow

PQexec( conn, "begin" );
PQexec( conn, "copy mytable from stdin" );
PQputCopyData( conn, mbuf, strlen(mbuf) );
PQputCopyEnd( conn, NULL );
PQexec( conn, "commit" );

-K
Thanks for the concrete example, Kelly. I had read the relevant libpq doc but didn't put the pieces together.

Regards,  Steve

Re: improving write performance for logging

From:
"Jim C. Nasby"
Date:
On Wed, Jan 04, 2006 at 11:00:38AM -0500, Ian Westmacott wrote:
> The WAL is a 2-spindle (SATA) RAID0 with its own controller (ext3).
> The tables are on a 10-spindle (SCSI) RAID50 with dual U320
> controllers (XFS).  This is overkill for writing and querying the data,
> but we need to constantly ANALYZE and VACUUM in the
> background without interrupting the inserts (the app is 24x7).  The
> databases are 4TB, so these operations can be lengthy.

How come you're using RAID50 instead of just RAID0? Or was WAL being on
RAID0 a typo?
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: improving write performance for logging

From:
Ian Westmacott
Date:
On Thu, 2006-01-05 at 19:08 -0600, Jim C. Nasby wrote:
> On Wed, Jan 04, 2006 at 11:00:38AM -0500, Ian Westmacott wrote:
> > The WAL is a 2-spindle (SATA) RAID0 with its own controller (ext3).
> > The tables are on a 10-spindle (SCSI) RAID50 with dual U320
> > controllers (XFS).  This is overkill for writing and querying the data,
> > but we need to constantly ANALYZE and VACUUM in the
> > background without interrupting the inserts (the app is 24x7).  The
> > databases are 4TB, so these operations can be lengthy.
>
> How come you're using RAID50 instead of just RAID0? Or was WAL being on
> RAID0 a typo?

We use RAID50 instead of RAID0 for the tables for some fault-tolerance.
We use RAID0 for the WAL for performance.

I'm missing the implication of the question...

--
Ian Westmacott <ianw@intellivid.com>
Intellivid Corp.


Re: improving write performance for logging

From:
"Jim C. Nasby"
Date:
On Fri, Jan 06, 2006 at 09:00:06AM -0500, Ian Westmacott wrote:
> On Thu, 2006-01-05 at 19:08 -0600, Jim C. Nasby wrote:
> > On Wed, Jan 04, 2006 at 11:00:38AM -0500, Ian Westmacott wrote:
> > > The WAL is a 2-spindle (SATA) RAID0 with its own controller (ext3).
> > > The tables are on a 10-spindle (SCSI) RAID50 with dual U320
> > > controllers (XFS).  This is overkill for writing and querying the data,
> > > but we need to constantly ANALYZE and VACUUM in the
> > > background without interrupting the inserts (the app is 24x7).  The
> > > databases are 4TB, so these operations can be lengthy.
> >
> > How come you're using RAID50 instead of just RAID0? Or was WAL being on
> > RAID0 a typo?
>
> We use RAID50 instead of RAID0 for the tables for some fault-tolerance.
> We use RAID0 for the WAL for performance.
>
> I'm missing the implication of the question...

The problem is that if you lose WAL or the data, you've lost everything.
So you might as well use raid0 for the data if you're using it for WAL.
Or switch WAL to raid1. Actually, a really good controller *might* be
able to do a good job of raid5 for WAL. Or just use raid10.
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: improving write performance for logging

From:
Ian Westmacott
Date:
On Fri, 2006-01-06 at 10:37 -0600, Jim C. Nasby wrote:
> The problem is that if you lose WAL or the data, you've lost everything.
> So you might as well use raid0 for the data if you're using it for WAL.
> Or switch WAL to raid1. Actually, a really good controller *might* be
> able to do a good job of raid5 for WAL. Or just use raid10.

If the WAL is lost, can you lose more than the data since the last
checkpoint?

--
Ian Westmacott <ianw@intellivid.com>
Intellivid Corp.


Re: improving write performance for logging

From:
Tom Lane
Date:
Ian Westmacott <ianw@intellivid.com> writes:
> If the WAL is lost, can you lose more than the data since the last
> checkpoint?

The problem is that you might have partially-applied actions since the
last checkpoint, rendering your database inconsistent; then *all* the
data is suspect if not actually corrupt.

            regards, tom lane

Re: improving write performance for logging

From:
Michael Stone
Date:
On Fri, Jan 06, 2006 at 09:00:06AM -0500, Ian Westmacott wrote:
>We use RAID50 instead of RAID0 for the tables for some fault-tolerance.
>We use RAID0 for the WAL for performance.
>
>I'm missing the implication of the question...

If you have the WAL on RAID 0 you have no fault tolerance, regardless of
what level you use for the tables.

Mike Stone