Discussion: Specifications for a new server

Specifications for a new server

From:
Johann Spies
Date:
I am busy reading Gregory Smith's PostgreSQL 9.0 High Performance, and at the time the book was written he seemed to me a bit sceptical about SSDs.  I suspect the reliability of SSDs has improved significantly since then.

Our present server (128 GB RAM, 2.5 TB of disk space on RAID 10 and 12 CPU cores) will become a development server, and we are going to buy a new one.

At the moment the 'base' directory uses 1.5 TB of disk space, and there is still more data to come.

The database contains bibliometric data that receives updates on a weekly basis; apart from that, not much changes except for data cleaning by a few people.

Some of the queries can take many hours to finish.

On our present system there are sometimes more than 300 GB of temporary files, which I suspect will not be the case on the new system with its much larger RAM.
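
For what it's worth, one way to quantify this kind of temporary-file traffic is via the per-database counters in pg_stat_database (a sketch only; the columns exist from 9.2 onward and are cumulative since the last stats reset):

  -- Cumulative temporary-file activity per database since the last stats reset.
  SELECT datname,
         temp_files,
         pg_size_pretty(temp_bytes) AS temp_bytes_written
  FROM   pg_stat_database
  ORDER  BY temp_bytes DESC;

  -- To see which queries create large temporary files, log them via
  -- postgresql.conf (value in kB), e.g.:
  --   log_temp_files = 1048576    -- log every temp file over 1 GB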

Analysis of the SAR logs showed that there was too much iowait on the CPUs of the old system, which has lower-spec CPUs than the ones considered for the new system.

We are possibly looking at the following hardware:

CPU: 2 x Ivy Bridge 8C E5-2667V2 3.3G 25M 8GT/s QPI - 16 cores
RAM: 24 x 32GB DDR3-1866 2Rx4 LP ECC REG RoHS - 768 GB

with enough disk space - about 4.8 TB on RAID 10.

My question is about the possible advantage and usage of SSDs in the new server.  At the moment I am considering using 2 x 200 GB SSDs for a separate partition for temporary files and 2 x 100 GB for the operating system.
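
If we do go this route, my understanding is that the SSD partition would be used via a tablespace listed in temp_tablespaces, roughly like this (a sketch only; /ssd/pg_temp is a made-up mount point):

  -- The directory must exist, be empty and be owned by the postgres OS user.
  CREATE TABLESPACE temp_ssd LOCATION '/ssd/pg_temp';

  -- Use it for sorts, hashes and temporary tables in the current session ...
  SET temp_tablespaces = 'temp_ssd';

  -- ... or cluster-wide by putting the same setting into postgresql.conf:
  --   temp_tablespaces = 'temp_ssd'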

So my questions:

1. Will the SSDs in this case be worth the cost?
2. What will be the best way to utilize them in the system?

Regards
Johann
--
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)

Re: Specifications for a new server

From:
DFE
Date:
I can suggest a disk layout using at least two RAID sets:

1) RAID 10 of SAS SSDs (or 15k RPM HDDs) for the OS and the "pg_xlog" directory, where PostgreSQL writes the WAL files ahead of checkpoints.
2) RAID 10 using as many spindles as possible for the default database directory.


Regards,



Re: Specifications for a new server

From:
Michael Stone
Date:
On Tue, May 06, 2014 at 11:13:42AM +0200, Johann Spies wrote:
>Analysis of the SAR logs showed that there was too much iowait on the CPUs of
>the old system, which has lower-spec CPUs than the ones considered for the new
>system.

iowait means the cpu is doing nothing but waiting for data from the
disk. buying faster cpus means that they will be able to spend more time
waiting for data from the disk. you'd probably get much better bang for
the buck upgrading the storage subsystem than throwing more money at
cpus.

>We are possibly looking at the following hardware:
>
>CPU: 2 x Ivy Bridge 8C E5-2667V2 3.3G 25M 8GT/s QPI - 16 cores
>RAM: 24 x 32GB DDR3-1866 2Rx4 LP ECC REG RoHS - 768 GB
>
>with enough disk space - about 4.8 TB on RAID 10.
>My question is about the possible advantage and usage of SSDs in the new
>server.

>At the moment I am considering using 2 x 200 GB SSDs for a separate
>partition for temporary files and 2 x 100 GB for the operating system.

If you're talking about SSDs for the OS, that's a complete waste; there
is essentially no I/O relating to the OS once you've booted.

>So my questions:
>
>1. Will the SSDs in this case be worth the cost?
>2. What will be the best way to utilize them in the system?

The best way to utilize them would probably be to spend less on the CPU
and RAM and more on the storage, and use SSD either for all of the
storage or for specific items that have a high level of I/O (such as the
indexes). Can't be more specific than that without a lot more
information about the database, how it is utilized, and what's actually
slow.
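
As a rough sketch, the per-index I/O statistics give a first idea of which indexes generate the most block reads (the view only distinguishes shared_buffers hits from reads handed to the OS, so it is an approximation of real disk I/O):

  -- Indexes ordered by how many of their blocks were not found in shared_buffers.
  SELECT schemaname,
         relname       AS table_name,
         indexrelname  AS index_name,
         idx_blks_read,
         idx_blks_hit
  FROM   pg_statio_user_indexes
  ORDER  BY idx_blks_read DESC
  LIMIT  20;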

Mike Stone


Re: Specifications for a new server

From:
Dorian Hoxha
Date:
Since the commit log/WAL is written sequentially, does it matter that much to put it on SSD? (I understand that it matters to put it on a separate disk subsystem so the write/read patterns don't interfere.)



Re: Specifications for a new server

From:
Michael Stone
Date:
On Tue, May 06, 2014 at 01:15:10PM +0200, Dorian Hoxha wrote:
>Since the commit log/WAL is written sequentially, does it matter that much to
>put it on SSD?

No, assuming a good storage system with nvram write buffer.

Mike Stone


Re: Specifications for a new server

From:
Dorian Hoxha
Date:
And NVRAM is RAM on a hardware RAID controller that is not erased on reboot (battery-backed)?
Can this NVRAM also be used when the configuration is JBOD (just a bunch of disks), or is some kind of RAID (0/1/5/6/10 etc.) required to use the NVRAM?



Re: Specifications for a new server

From:
Tomas Vondra
Date:
On 6.5.2014 16:43, Dorian Hoxha wrote:
> And NVRAM is RAM on a hardware RAID controller that is not erased on
> reboot (battery-backed)?

Yes.

> Can this NVRAM also be used when the configuration is
> JBOD (just a bunch of disks), or is some kind of RAID (0/1/5/6/10 etc.)
> required to use the NVRAM?

That probably depends on the controller used. Some controllers (e.g.
Dell PERC H700 and such) don't even support JBOD - an easy way around
this is to create single-drive RAID0 arrays (never tried it, though).

Why do you even want to use JBOD?

regards
Tomas


Re: [PERFORM] Specifications for a new server

From:
"r.etzenhammer@t-online.de"
Date:
Hello,

here I am - the lonely user of JBOD :-) I use exactly what you describe below: we have a couple of Dell servers with battery-backed H700 controllers. On one we let the controller do a RAID 10; on others I use the single disks (as the "useless" single-drive RAID0s you describe).

What we do then is use those disks in a Solaris ZFS pool as the filesystem for our databases. The reason is that with ZFS (maybe there are other filesystems out there which can do this, but far back in good old Sun times it was the only one) I can build up a pool and mix nearline SAS with SSDs in a neat way. ZFS has the concept of a ZFS Intent Log (ZIL), where writes go first before being migrated to the pool - and you can put it on separate disks. And you have the adaptive replacement cache (ARC), which you can also put on separate disks, where hot data is cached. Now guess where we use the expensive SSDs.

It boils down to this:

- slow mechanical drives backed by an NVRAM controller
- SSDs especially for the ARC (and ZIL)
- RAM for further ZFS caching

On my very first try, when ZIL and ARC appeared in Solaris, my first idea was to just add one SSD, split into two partitions as ARC and ZIL, to our existing database. I didn't change anything else (to be fair, that was a development system without an NVRAM controller) - it gave me a performance boost of about 10x.

To be a bit more fair (I can be, as I do not work for Sun - ehm, Oracle): if this is to work for someone else, the usage pattern should be kept in mind. Also, setting up and tuning a Solaris server is somewhat more work, ZFS itself uses a good chunk of CPU and RAM (we leave 8 GB for ZFS), and one would need much testing and tuning.

A positive side effect we use is for backing up data - since ZFS offers atomic snapshots, we just snapshot the filesystem. The data files are guaranteed to be valid together with the WAL. A restore is to put the files back into place and restart Postgres - it will run a recovery, replay the WAL, and everything is done. We used that to provide point-in-time recovery and warm standby with old Postgres installations before replication was included in core.
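
If the snapshot should also double as a conventional base backup, it can optionally be bracketed with the backup functions; a sketch only, with a made-up dataset name:

  -- Optional belt and braces around the ZFS snapshot described above.
  -- 'tank/pgdata' is a hypothetical dataset holding the whole cluster
  -- (data files and WAL together, so the snapshot itself is atomic).
  SELECT pg_start_backup('zfs-snapshot', true);  -- true = immediate checkpoint

  -- meanwhile, from a shell:  zfs snapshot tank/pgdata@nightly

  SELECT pg_stop_backup();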

Cheers,
Roland










Re: Specifications for a new server

From:
Johann Spies
Date:
On 6 May 2014 13:07, Michael Stone <mstone+postgres@mathom.us> wrote:
On Tue, May 06, 2014 at 11:13:42AM +0200, Johann Spies wrote:
Analysis of the SAR logs showed that there was too much iowait on the CPUs of
the old system, which has lower-spec CPUs than the ones considered for the new
system.

iowait means the cpu is doing nothing but waiting for data from the disk. buying faster cpus means that they will be able to spend more time waiting for data from the disk. you'd probably get much better bang for the buck upgrading the storage subsystem than throwing more money at cpus.


In that case I apologise for making the wrong assumption.  People who are more experienced than me analyzed the logs and told me that, to their surprise, the CPUs were under pressure.  Having looked at the logs myself, I just assumed that the iowait was the problem.


If you're talking about SSDs for the OS, that's a complete waste; there is essentially no I/O relating to the OS once you've booted.


I also thought this might be overkill, but I was not sure.
 

So my questions:

1. Will the SSDs in this case be worth the cost?
2. What will be the best way to utilize them in the system?

The best way to utilize them would probably be to spend less on the CPU and RAM and more on the storage, and use SSD either for all of the storage or for specific items that have a high level of I/O (such as the indexes). Can't be more specific than that without a lot more information about the database, how it is utilized, and what's actually slow.



I understand your remark about the CPU in the light of my wrong assumption earlier, but I do not understand your remark about the RAM.  The fact that temporary files of up to 250 GB are created at times during complex queries is, to me, an indication of too little RAM.

Question: How do I dedicate a partition to indexes?  Where do I configure PostgreSQL to write them in a particular area?

Regards
Johann

Re: Specifications for a new server

From:
Johann Spies
Date:



On 8 May 2014 10:11, Johann Spies <johann.spies@gmail.com> wrote:

Question: How do I dedicate a partition to indexes?  Where do I configure PostgreSQL to write them in a particular area?



I just discovered TABLESPACE, which answered my question.
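
For the archives, a minimal sketch of what that looks like (the tablespace location and the index names below are just placeholders):

  -- One-off: a tablespace on the SSD partition.
  CREATE TABLESPACE idx_ssd LOCATION '/ssd/pg_indexes';

  -- New indexes can be placed there directly ...
  CREATE INDEX articles_year_idx ON articles (publication_year)
      TABLESPACE idx_ssd;

  -- ... and existing ones moved (this rewrites the index and takes an
  -- exclusive lock on it while it is being moved):
  ALTER INDEX articles_title_idx SET TABLESPACE idx_ssd;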

Regards
Johann

--
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)

Re: Specifications for a new server

From:
Dorian Hoxha
Date:
Why do you even want to use JBOD?

Not for PostgreSQL, but for distributed filesystems like HDFS/QFS (which are supposed to work on JBOD) with Hypertable on top, so the NVRAM would help with the commits, since they are the biggest bottleneck when writing (commits need to be saved to multiple servers before 'ok' is returned to the client).



Re: Specifications for a new server

From:
Michael Stone
Date:
On Thu, May 08, 2014 at 10:11:38AM +0200, Johann Spies wrote:
>I understand your remark about the CPU in the light of my wrong assumption
>earlier, but I do not understand your remark about the RAM.  The fact that
>temporary files of up to 250 GB are created at times during complex queries
>is, to me, an indication of too little RAM.

If you can afford infinite RAM, then infinite RAM is great. If your
working set size exceeds the memory size, then you will eventually need
to deal with disk IO. At that point, maybe a bit more memory will help
and maybe it will not--you'll be able to fit a little bit more working
data into memory, but that won't likely radically change the
performance. (If you can afford to fit *everything* you need into RAM
then that's ideal, but that's not the case for most people with
non-trivial data sets.) What is certain is that improving the disk IO
performance will improve your overall performance if you're IO bound.

(And the mere existence of temporary files isn't an indication of
insufficient RAM if the system can utilize the memory more efficiently
with the files than it can without them--they could contain data that
isn't needed in a particular phase of a query, freeing up resources that
are needed for other data in that phase.)

Mike Stone


Re: Specifications for a new server

From:
Jeff Janes
Date:
On Thu, May 8, 2014 at 1:11 AM, Johann Spies <johann.spies@gmail.com> wrote:
 

So my questions:

1. Will the SSDs in this case be worth the cost?
2. What will be the best way to utilize them in the system?

The best way to utilize them would probably be to spend less on the CPU and RAM and more on the storage, and use SSD either for all of the storage or for specific items that have a high level of I/O (such as the indexes). Can't be more specific than that without a lot more information about the database, how it is utilized, and what's actually slow.



I understand your remark about the CPU in the light of my wrong assumption earlier, but I do not understand your remark about the RAM.  The fact that temporary files of up to 250 GB are created at times during complex queries is, to me, an indication of too little RAM.

Are these PostgreSQL temp files or other temp files?  PostgreSQL doesn't suppress the use of temp files just because you have a lot of RAM.  You would also have to set work_mem to a very large setting, probably inappropriately large, and even that might not work because there are other limits on how much memory PostgreSQL can use for any given operation (for example, you can't sort more than 2**32 (or 2**31?) tuples in memory, no matter how much memory you have, and in older versions even less than that).  But that doesn't mean the RAM is not useful.  The OS can use the RAM to buffer the temp files so that they might not ever see the disk, or might not be read from disk because they are still in memory.
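
A quick way to check whether a given query still spills at a particular setting is to raise work_mem for one session and look at EXPLAIN ANALYZE (a sketch only; the table and column names are made up):

  -- Per-session experiment with a reporting query.
  SET work_mem = '2GB';

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT author_id, count(*)
  FROM   citations
  GROUP  BY author_id
  ORDER  BY count(*) DESC;
  -- "Sort Method: external merge  Disk: ..." in the output means it still
  -- used temporary files; "quicksort  Memory: ..." means it stayed in RAM.

  RESET work_mem;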

SSD is probably wasted on temp files, as they are designed to be accessed mostly sequentially.

Cheers,

Jeff