Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

From: Stephane Bailliez
Date:
I'm trying to run a few basic tests to see what a current machine can
deliver (typical workload: ETL-like, long-running aggregate queries,
medium-size db, ~100 to 200GB).

I'm currently checking the system (dd, bonnie++) to see if performance is
within the normal range, but I'm having trouble relating it to anything
known. Scouting the archives, more than a few people here seem familiar
with this hardware, so if someone could have a look at those numbers and
raise a flag where something looks very out of range for such a system,
that would be appreciated. I also added some raw pgbench numbers at the end.

(Many thanks to Greg Smith; his pages were extremely helpful to get
started. Any mistakes are mine.)

Hardware:

Sun Fire X4150 x64

2x Quad-Core Intel(R) Xeon(R) X5460 processors (2x6MB L2, 3.16 GHz, 1333
MHz FSB)
16GB of memory (4x2GB PC2-5300 667 MHz ECC fully buffered DDR2 DIMMs)

6x 146GB 10K RPM SAS  in RAID10 - for os + data
2x 146GB 10K RPM SAS  in RAID1 - for xlog
Sun StorageTek SAS HBA Internal (Adaptec AAC-RAID)


OS is Ubuntu 7.10 x86_64 running  2.6.22-14
OS is on ext3
data is on xfs noatime
xlog is on ext2 noatime


data:
$ time sh -c "dd if=/dev/zero of=bigfile bs=8k count=4000000 && sync"
4000000+0 records in
4000000+0 records out
32768000000 bytes (33 GB) copied, 152.359 seconds, 215 MB/s

real    2m36.895s
user    0m0.570s
sys     0m36.520s

$ time dd if=bigfile of=/dev/null bs=8k
4000000+0 records in
4000000+0 records out
32768000000 bytes (33 GB) copied, 114.723 seconds, 286 MB/s

real    1m54.725s
user    0m0.450s
sys     0m22.060s


xlog:
$ time sh -c "dd if=/dev/zero of=bigfile bs=8k count=4000000 && sync"
4000000+0 records in
4000000+0 records out
32768000000 bytes (33 GB) copied, 389.216 seconds, 84.2 MB/s

real    6m50.155s
user    0m0.420s
sys     0m26.490s

$ time dd if=bigfile of=/dev/null bs=8k
4000000+0 records in
4000000+0 records out
32768000000 bytes (33 GB) copied, 294.556 seconds, 111 MB/s

real    4m54.558s
user    0m0.430s
sys     0m23.480s



bonnie++ -s 32g -n 256

data:
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
lid-statsdb-1   32G 101188  98 202523  20 107642  13 88931  88 271576  19 980.7   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                256 11429  93 +++++ +++ 17492  71 11097  91 +++++ +++  2473  11



xlog:
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
lid-statsdb-1   32G 62973  59 69981   5 35433   4 87977  85 119749   9 496.2   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                256   551  99 +++++ +++ 300935  99   573  99 +++++ +++  1384  99

pgbench

postgresql 8.2.9 with data and xlog as mentioned above

postgresql.conf:
shared_buffers = 4GB
checkpoint_segments = 8
effective_cache_size = 8GB

A script runs over scaling factors 1 to 1000, running pgbench 3 times at
each step with "pgbench -t 2000 -c 8 -S pgbench" (a sketch of that kind of
driver loop is below).

It's a bit limited; I will try to do a much longer run, increase the
number of tests, and calculate the mean and stddev, as I sometimes have a
pretty large variation across the 3 runs (typically at scaling factor
1000, the runs are respectively 1952, 940, 3162), so the graph is pretty
ugly.

I get (scaling factor, size of db in MB, median tps of the 3 runs):

1 20 22150
5 82 22998
10 160 22301
20 316 22857
30 472 23012
40 629 17434
50 785 22179
100 1565 20193
200 3127 23788
300 4688 15494
400 6249 23513
500 7810 18868
600 9372 22146
700 11000 14555
800 12000 10742
900 14000 13696
1000 15000 940

cheers,

-- stephane

Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

From: "Luke Lonergan"
Date:

pgbench is unrelated to the workload you are concerned with if ETL/ELT and decision support / data warehousing queries are your target.

Also - placing the xlog on dedicated disks is mostly irrelevant to data warehouse / decision support work or ELT.  If you need to maximize loading speed while concurrent queries are running, it may be necessary, but I think you'll be limited in load speed by CPU related to data formatting anyway.

The primary performance driver for ELT / DW is sequential transfer rate, thus the dd test at 2X memory.  With six data disks of this type, you should expect a maximum of around 6 x 80 = 480 MB/s.  With RAID10, depending on the raid adapter, you may need to have two or more IO streams to use all platters, otherwise your max speed for one query would be 1/2 that, or 240 MB/s.
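
(For illustration, not from the original message: one rough way to check
the multi-stream point is to run two large sequential reads at once and
compare the combined rate to the single-stream 286 MB/s above; the two
file names are assumptions.)

# Hypothetical: two concurrent sequential readers against the data array.
dd if=bigfile1 of=/dev/null bs=8k &
dd if=bigfile2 of=/dev/null bs=8k &
wait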

I'd suggest RAID5, or even better, configure all eight disks as a JBOD in the RAID adapter and run ZFS RAIDZ.  You would then expect to get about 7 x 80 = 560 MB/s on your single query.

That said, your single cpu on one query will only be able to scan that data at about 300 MB/s (try running a SELECT COUNT(*) against a table that is 2X memory size).

- Luke


Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

From: Greg Smith
Date:
On Sat, 19 Jul 2008, Stephane Bailliez wrote:

> OS is Ubuntu 7.10 x86_64 running  2.6.22-14

Note that I've had some issues with the desktop Ubuntu giving slower
results in tests like this than the same kernel release using the stock
kernel parameters.  Haven't had a chance yet to see how the server Ubuntu
kernel fits into that or exactly what the desktop one is doing wrong yet.
Could be worse--if you were running any 8.04 I expect your pgbench results
would be downright awful.

> data is on xfs noatime

While XFS has some interesting characteristics, make sure you're
comfortable with the potential issues the journal approach used by that
filesystem has.  With ext3, you can choose the somewhat risky writeback
behavior or not, you're stuck with it in XFS as far as I know.  A somewhat
one-sided intro here is at
http://zork.net/~nick/mail/why-reiserfs-is-teh-sukc
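
(For reference, a hedged sketch of the two ext3 journal modes being
contrasted; the device and mount point are made up:)

# ext3 default: metadata journaled, file data flushed before the commit (safer)
mount -t ext3 -o noatime,data=ordered /dev/sdX1 /mnt/data
# metadata-only journaling, closer to XFS's behavior; faster but riskier after a crash
mount -t ext3 -o noatime,data=writeback /dev/sdX1 /mnt/data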

> postgresql 8.2.9 with data and xlog as mentioned above

There are so many known performance issues in 8.2 that are improved in 8.3
that I'd suggest you really should be considering it for a new install at
this point.

> Script running over scaling factor 1 to 1000 and running 3 times pgbench with
> "pgbench -t 2000 -c 8 -S pgbench"

In general, you'll want to use a couple of clients per CPU core for
pgbench tests to get a true look at the scalability.  Unfortunately, the
way the pgbench client runs means that it tends to top out at 20 or 30
thousand TPS on read-only tests no matter how many cores you have around.
But you may find operations where peak throughput comes at closer to 32
clients here rather than just 8.
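
(For example, a sketch of such a run against the same database as before;
32 clients is just one point worth trying, not a recommendation:)

# Roughly 4 clients per core on this 8-core box; same read-only test as above.
pgbench -t 2000 -c 32 -S pgbench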

> It's a bit limited and will try to do a much much longer run and increase the
> # of tests and calculate mean and stddev as I have a pretty large variation
> for the 3 runs sometimes (typically for the scaling factor at 1000, the runs
> are respectively 1952, 940, 3162)  so the graph is pretty ugly.

This is kind of a futile exercise and I wouldn't go crazy trying to
analyze here.  Having been through that many times, I predict you'll
discover no real value to a more statistically intense analysis.  It's not
like sampling at more points makes the variation go away, or that the
variation itself has some meaning worth analyzing.  Really the goal of
pgbench tests should be to look at a general trend.  Looking at your data for
example, I'd say the main useful observation to draw from your tests is
that performance is steady then drops off sharply once the database itself
exceeds 10GB, which is a fairly positive statement that you're getting
something out of most of the 16GB of RAM in the server during this
test.

As far as the rest of your results go, Luke's comment that you may need
more than one process to truly see the upper limit of your disk
performance is right on target.  More useful commentary on that issue that
I'd recommend is near the end of

http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/

(man does that need to be a smaller URL)

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

From: Stephane Bailliez
Date:
Luke Lonergan wrote:
>
> pgbench is unrelated to the workload you are concerned with if ETL/ELT
> and decision support / data warehousing queries are your target.
>
> Also - placing the xlog on dedicated disks is mostly irrelevant to
> data warehouse / decision support work or ELT.  If you need to
> maximize loading speed while concurrent queries are running, it may be
> necessary, but I think you'll be limited in load speed by CPU related
> to data formatting anyway.
>
Indeed. pgbench was mostly done as 'informative' and not really relevant
to the future workload of this db (given the queries it runs, I'm not sure
it's relevant for anything but connection speed, though it's interesting
for me as a reference for transaction-like workloads). I was more
interested in the raw disk performance.

>
> The primary performance driver for ELT / DW is sequential transfer
> rate, thus the dd test at 2X memory.  With six data disks of this
> type, you should expect a maximum of around 6 x 80 = 480 MB/s.  With
> RAID10, depending on the raid adapter, you may need to have two or
> more IO streams to use all platters, otherwise your max speed for one
> query would be 1/2 that, or 240 MB/s.
>
ok, which seems to be on par with what I'm getting (the 240, that is).

>
> I'd suggest RAID5, or even better, configure all eight disks as a JBOD
> in the RAID adapter and run ZFS RAIDZ.  You would then expect to get
> about 7 x 80 = 560 MB/s on your single query.
>
Do you have a particular controller and disk hardware configuration in
mind when you're suggesting RAID5 ?
My understanding was it was more difficult to find the right hardware to
get performance on RAID5 compared to RAID10.

>
> That said, your single cpu on one query will only be able to scan that
> data at about 300 MB/s (try running a SELECT COUNT(*) against a table
> that is 2X memory size).
>
Not quite 2x memory size, but ~26GB (the accounts table with scaling factor 2000):

$ time psql -c "select count(*) from accounts" pgbench
   count
-----------
 200000000
(1 row)

real    1m52.050s
user    0m0.020s
sys     0m0.020s


NB: For the sake of completeness, I reran pgbench taking the average of
10 runs for each scaling factor (same configuration as in the initial mail;
columns are scaling factor, db size in MB, average tps):

1 20 23451
100 1565 21898
200 3127 20474
300 4688 20003
400 6249 20637
500 7810 16434
600 9372 15114
700 11000 14595
800 12000 16090
900 14000 14894
1000 15000 3071
1200 18000 3382
1400 21000 1888
1600 24000 1515
1800 27000 1435
2000 30000 1354

-- stephane

Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

From: Luke Lonergan
Date:
Hi Stephane,

On 7/21/08 1:53 AM, "Stephane Bailliez" <sbailliez@gmail.com> wrote:

>> I'd suggest RAID5, or even better, configure all eight disks as a JBOD
>> in the RAID adapter and run ZFS RAIDZ.  You would then expect to get
>> about 7 x 80 = 560 MB/s on your single query.
>>
> Do you have a particular controller and disk hardware configuration in
> mind when you're suggesting RAID5 ?
> My understanding was it was more difficult to find the right hardware to
> get performance on RAID5 compared to RAID10.

If you're running RAIDZ on ZFS, the controller you have should be fine.
Just configure the HW RAID controller to treat the disks as JBOD (eight
individual disks), then make a single RAIDZ zpool of the eight disks.  This
will run them in a robust SW RAID within Solaris.  The fault management is
superior to what you would otherwise have in your HW RAID and the
performance should be much better.
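
(A hedged sketch of what that could look like on Solaris; the device, pool
and filesystem names are made up:)

# Pool across all eight disks exposed as JBOD by the controller.
zpool create dbpool raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
zpool status dbpool          # verify the raidz vdev holds all eight disks
zfs create dbpool/pgdata     # filesystem for PGDATA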

- Luke


Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

From: Stephane Bailliez
Date:
Greg Smith wrote:
>
> Note that I've had some issues with the desktop Ubuntu giving slower
> results in tests like this than the same kernel release using the
> stock kernel parameters.  Haven't had a chance yet to see how the
> server Ubuntu kernel fits into that or exactly what the desktop one is
> doing wrong yet. Could be worse--if you were running any 8.04 I expect
> your pgbench results would be downright awful.

Ah, interesting. Isn't it a scheduler problem? I thought CFQ was the
default for desktop.
I double-checked the 7.10 server kernel on this box and it's really the
deadline scheduler that is used:

cat /sys/block/sdb/queue/scheduler
noop anticipatory [deadline] cfq
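
(For reference, the I/O scheduler can also be switched at runtime for a
quick comparison, e.g.:)

# Not persistent across reboots; use one of the names listed by the cat above.
echo cfq > /sys/block/sdb/queue/scheduler
cat /sys/block/sdb/queue/scheduler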

Do you have some more pointers on the 8.04 issues you mentioned?
(That's planned as the next upgrade by ops.)

>> postgresql 8.2.9 with data and xlog as mentioned above
> There are so many known performance issues in 8.2 that are improved in
> 8.3 that I'd suggest you really should be considering it for a new
> install at this point.

Yes I'd definitely prefer to go 8.3 as well but there are a couple
reasons for now I have to suck it up:
- 8.2 is the one in the 7.10 repository.
- I need plr as well and 8.3-plr debian package does not exist yet.

(I know in both cases we could recompile and install it from there, but ...)

> In general, you'll want to use a couple of clients per CPU core for
> pgbench tests to get a true look at the scalability.  Unfortunately,
> the way the pgbench client runs means that it tends to top out at 20
> or 30 thousand TPS on read-only tests no matter how many cores you
> have around. But you may find operations where peak throughput comes
> at closer to 32 clients here rather than just 8.
ok. Make sense.

> As far as the rest of your results go, Luke's comment that you may
> need more than one process to truly see the upper limit of your disk
> performance is right on target.  More useful commentary on that issue
> I'd recomend is near the end of
>
http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/

>
Yeah I was looking at that url as well. Very useful.

Thanks for all the info Greg.

-- stephane


Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

From: Greg Smith
Date:
On Mon, 21 Jul 2008, Stephane Bailliez wrote:

> Isn't it a scheduler problem, I thought CFQ was the default for desktop
> ?

CFQ/Deadline/AS are I/O scheduler choices.  What changed completely in
2.6.23 is the kernel process scheduler.
http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt gives
some info about the new one.

While the switch to CFS has shown great improvements in terms of desktop
and many server workloads, what I discovered is that the pgbench test
program itself is really incompatible with it.  There's a kernel patch
that seems to fix the problem at http://lkml.org/lkml/2008/5/27/58 but I
don't think it's made it into a release yet.

This is not to say the kernel itself is unsuitable for running PostgreSQL,
but if you're using pgbench as the program to confirm that, I expect
you'll be disappointed with the results under the Ubuntu 8.04 kernel.
It tops out at around 10,000 TPS running the select-only test for me,
while older kernels did 3X that much.

> Yes I'd definitely prefer to go 8.3 as well but there are a couple reasons
> for now I have to suck it up:
> - 8.2 is the one in the 7.10 repository.
> - I need plr as well and 8.3-plr debian package does not exist yet.
> (I know in both cases we could recompile and install it from there, but ...)

Stop and think about this for a minute.  You're going into production with
an older version having a set of known, impossible to work around issues
that if you hit them the response will be "upgrade to 8.3 to fix that",
which will require the major disruption to your application of a database
dump and reload at that point if that fix becomes critical.  And you can't
just do that now because of some packaging issues?  I hope you can impress
upon the other people involved how incredibly short-sighted that is.

Unfortunately, it's harder than everyone would like to upgrade an existing
PostgreSQL installation.  That really argues for going out of your way if
necessary to deploy the latest stable release when you're building
something new, unless some legacy bits are seriously holding you back.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

From: Emil Pedersen
Date:
[...]

> Yes I'd definitely prefer to go 8.3 as well but there are a couple
> reasons for now I have to suck it up:
> - 8.2 is the one in the 7.10 repository.
> - I need plr as well and 8.3-plr debian package does not exist yet.
>
> (I know in both cases we could recompile and install it from there,
> but ...)

At least on debian it was quite easy to "backport" 8.3.3 from sid
to etch using apt-get's source and build-dep functions.  That way
you get a normal installable package.

I'm not sure, but given the similarity I would guess it won't be
much harder on ubuntu.
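
(A hedged sketch of that approach; the package name and the need for a
newer deb-src line in sources.list are assumptions:)

# Assumes a deb-src entry for a release that carries postgresql-8.3.
apt-get update
apt-get build-dep postgresql-8.3     # pull in the build dependencies
apt-get -b source postgresql-8.3     # fetch the source package and build .debs
dpkg -i postgresql-8.3_*.deb         # install the rebuilt server package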

// Emil


Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

From: Emil Pedersen
Date:

--On Tuesday, July 22, 2008 01.20.52 +0200 Emil Pedersen
<emil.pedersen@its.uu.se> wrote:

>
> [...]
>
>> Yes I'd definitely prefer to go 8.3 as well but there are a couple
>> reasons for now I have to suck it up:
>> - 8.2 is the one in the 7.10 repository.
>> - I need plr as well and 8.3-plr debian package does not exist yet.
>>
>> (I know in both cases we could recompile and install it from there,
>> but ...)
>
> At least on debian it was quite easy to "backport" 8.3.3 from sid
> to etch using apt-get's source and build-dep functions.  That way
> you get a normal installable package.
>
> I'm not sure, but given the similarity I would guess it won't be
> much harder on ubuntu.

I should have said that I was talking about PostgreSQL itself; I
missed the plr part.  I apologize for the noise.

// Emil


Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

From: Tom Lane
Date:
Emil Pedersen <emil.pedersen@its.uu.se> writes:
>> At least on debian it was quite easy to "backport" 8.3.3 from sid
>> to etch using apt-get's source and build-dep functions.  That way
>> you get a normal installable package.

> I should have said that I was talking about PostgreSQL itself; I
> missed the plr part.  I apologize for the noise.

Still, there's not normally that much difference between the packaging
for one version and for the next.  I can't imagine that it would take
much time to throw together a package for 8.3 plr based on what you're
using for 8.2.  All modern package-based distros make this pretty easy.
The only reason not to do it would be if you're buying support from
a vendor who will only support specific package versions...

            regards, tom lane

Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

From: Stephane Bailliez
Date:
Greg Smith wrote:
> CFQ/Deadline/AS are I/O scheduler choices.  What changed completely in
> 2.6.23 is the kernel process scheduler.
> http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt
> gives some info about the new one.
>
> While the switch to CFS has shown great improvements in terms of
> desktop and many server workloads, what I discovered is that the
> pgbench test program itself is really incompatible with it.  There's a
> kernel patch that seems to fix the problem at
> http://lkml.org/lkml/2008/5/27/58 but I don't think it's made it into
> a release yet.
>
> This is not to say the kernel itself is unsuitable for running
> PostgreSQL, but if you're using pgbench as the program to confirm that,
> I expect you'll be disappointed with the results under the Ubuntu 8.04
> kernel. It tops out at around 10,000 TPS running the select-only test
> for me, while older kernels did 3X that much.

ok, thanks for all the details. good to know.

> Stop and think about this for a minute.  You're going into production
> with an older version having a set of known, impossible to work around
> issues that if you hit them the response will be "upgrade to 8.3 to
> fix that", which will require the major disruption to your application
> of a database dump and reload at that point if that fix becomes
> critical.  And you can't just do that now because of some packaging
> issues?  I hope you can impress upon the other people involved how
> incredibly short-sighted that is.

I understand what you're saying. However, if I were to play devil's
advocate: the existing system that I'm 'migrating' (read: entirely changing
schemas, 'migrating' data) is coming from an 8.1.11 install. It is not a
critical system. The source data is always available from another system
and the PostgreSQL system would be a 'client'. So if 8.2.x is so abysmal
that it should not even be considered for install compared to 8.1.x, and
only 8.3.x is viable, then OK, that makes sense and I have to go the
extra mile.

But message received loud and clear. Conveniently, 8.3.3 is also
available on backports, so it does not cost much, and it is pinned right
now. (I don't think there will be any problem with plr, even though the
original seems to be patched a bit, but that will be for later once
everything else is ready.)

For the sake of completeness (even though irrelevant), here's the run
with 32 clients on 8.3, same config as before (except max_fsm_pages at
204800):

1 19 36292
100 1499 32127
200 2994 30679
300 4489 29673
400 5985 18627
500 7480 19714
600 8975 19437
700 10000 20271
800 12000 18038
900 13000 9842
1000 15000 5996
1200 18000 5404
1400 20000 3701
1600 23000 2877
1800 26000 2657
2000 29000 2612

cheers,

-- stephane

Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)

From: Greg Smith
Date:
On Tue, 22 Jul 2008, Stephane Bailliez wrote:

> The existing system that I'm 'migrating' (read: entirely changing
> schemas, 'migrating' data) is coming from an 8.1.11 install. It is not a
> critical system. The source data is always available from another system
> and the PostgreSQL system would be a 'client'. So if 8.2.x is so abysmal
> that it should not even be considered for install compared to 8.1.x, and
> only 8.3.x is viable, then OK, that makes sense and I have to go the
> extra mile.

8.2 is a big improvement over the 8.1 you're on now, and 8.3 is a further
improvement.  If the system isn't critical, it doesn't sound like doing a
later 8.2->8.3 upgrade (rather than going right to 8.3 now) will be a big
deal for you.  Just wanted you to be aware that upgrading larger installs
gets tricky sometimes, so where practical it's best to avoid that by doing
more up-front work now to start on a later version.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD