
annotated PostgreSQL.conf now up

From: Josh Berkus
Date:

Folks,

A lot of people have been pestering me for this stuff, so I've finally
finished it and put it up.
http://www.powerpostgresql.com/

Hopefully this will help people as much as the last one did.

--
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

Re: [sfpug] DATA directory on network attached storage

From: Josh Berkus
Date:

Jeff,

>  Specifically, is the performance of
> GigE good enough to allow postgres to perform under load with an
> NFS-mounted DATA dir?  Are there other problems I haven't thought about?
> Any input would be greatly appreciated.

The big problem with NFS-mounted data is that NFS is designed to be a lossy
protocol; that is, sometimes bits get dropped and you just re-request the
file.  This isn't a great idea with databases.
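
If someone goes the NFS route anyway, the usual advice is to use hard mounts
over TCP, so the client blocks and retries rather than handing I/O errors back
to the postmaster.  As a rough sketch only, on a Linux client that might look
something like this (the filer name, export, and mount point are placeholders):

    # hard,tcp: block and retry over TCP instead of returning errors to postgres
    # rsize/wsize: larger transfer sizes for bulk I/O (tune for your network)
    mount -t nfs -o hard,intr,tcp,nfsvers=3,rsize=32768,wsize=32768 \
        filer:/vol/pgdata /var/lib/pgsql/data

FreeBSD clients spell the options differently, but the hard-mount-over-TCP
idea is the same.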

If we were talking SAN, then I don't see any reason why your plan wouldn't
work.  However, what type of failure exactly are you guarding against?  How
likely is a machine failure if its hard drives are external?

--
Josh Berkus
Aglio Database Solutions
San Francisco

Re: [sfpug] DATA directory on network attached storage

From: Aditya
Date:

On Fri, Apr 08, 2005 at 10:01:55AM -0700, Jeff Frost wrote:
> We are currently considering the possibility of creating a warm standby
> machine utilizing heartbeat and a network attached storage device for the
> DATA directory.  The idea is that the warm standby machine has its
> postmaster stopped.  When heartbeat detects the death of the master server,
> the postmaster is started up on the warm standby using the shared DATA
> directory.  Other than the obvious problem of both postmasters
> inadvertently attempting access at the same time, I'm curious to know if
> anyone has tried any similar setups and what the experiences have been.
> Specifically, is the performance of GigE good enough to allow postgres to
> perform under load with an NFS-mounted DATA dir?  Are there other problems
> I haven't thought about?  Any input would be greatly appreciated.

We (Zapatec Inc) have been running lots of Pg dbs off of a Network Appliance
fileserver (NFS TCPv3) with FreeBSD client machines for several years now, with
no problems AFAICT other than insufficient bandwidth between the servers and
the fileserver.  (For one application, www.fastbuzz.com, 100baseTX over a
private switched network was insufficient, while local IDE-UDMA disks were
fine, so GigE would have worked too, but we couldn't justify purchasing a new
GigE adapter for our NetApp.)

We have the same setup as you would like, allowing for warm standby(s);
however, we haven't had to use them at all.

We have not, AFAICT, had any problems with the traffic over NFS as far as
reliability -- I'm sure there is a performance penalty, but the reliability
and scalability gains more than offset that.

FWIW, if I were to do this anew, I would probably opt for iSCSI over GigE with
a NetApp.

Adi

DATA directory on network attached storage

From: Jeff Frost
Date:

We are currently considering the possibility of creating a warm standby
machine utilizing heartbeat and a network attached storage device for the DATA
directory.  The idea is that the warm standby machine has its postmaster
stopped.  When heartbeat detects the death of the master server, the
postmaster is started up on the warm standby using the shared DATA directory.
Other than the obvious problem of both postmasters inadvertently attempting
access at the same time, I'm curious to know if anyone has tried any similar
setups and what the experiences have been.  Specifically, is the performance
of GigE good enough to allow postgres to perform under load with an
NFS-mounted DATA dir?  Are there other problems I haven't thought about?  Any
input would be greatly appreciated.
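
For concreteness, with heartbeat (v1-style) configuration the failover
resources might be described roughly like the following; the node name,
address, and paths are placeholders, not a tested configuration:

    # /etc/ha.d/haresources (sketch): on failover the surviving node takes over
    # the service IP, mounts the shared DATA volume, and starts postgres.
    # "postgresql" is whatever start/stop script you normally use.
    db1 IPaddr::192.168.1.50/24 Filesystem::filer:/vol/pgdata::/var/lib/pgsql/data::nfs postgresql

Presumably we would also need some form of fencing (STONITH) so the standby
can be certain the old master is really dead before it touches the shared
DATA directory.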

Thanks!

--
Jeff Frost, Owner     <jeff@frostconsultingllc.com>
Frost Consulting, LLC     http://www.frostconsultingllc.com/
Phone: 650-780-7908    FAX: 650-649-1954

Re: [sfpug] DATA directory on network attached storage

From: Joe Conway
Date:

Aditya wrote:
> We have not, AFAICT, had any problems with the traffic over NFS as far as
> reliability -- I'm sure there is a performance penalty, but the reliability
> and scalability gains more than offset that.

My experience agrees with yours. However we did find one gotcha -- see
the thread starting here for details:
http://archives.postgresql.org/pgsql-hackers/2004-12/msg00479.php

In a nutshell, be careful when using an nfs mounted data directory
combined with an init script that creates a new data dir when it doesn't
find one.
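
A simple safeguard is to make the script refuse to do anything unless the
directory already looks like a live cluster; an unmounted NFS volume just
looks like an empty directory, which is exactly when an automatic initdb does
the damage.  Something along these lines (paths are placeholders, the
PG_VERSION check is the point):

    # Illustrative guard only -- never initdb automatically.
    PGDATA=/var/lib/pgsql/data
    if [ ! -f "$PGDATA/PG_VERSION" ]; then
        echo "$PGDATA has no PG_VERSION -- is the NFS volume mounted?" >&2
        exit 1
    fi
    pg_ctl -D "$PGDATA" start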

> FWIW, if I were to do this anew, I would probably opt for iSCSI over GigE with
> a NetApp.

Any particular reason? Our NetApp technical rep advised nfs over iSCSI,
IIRC because of performance.

Joe

Re: [sfpug] DATA directory on network attached storage

From: Joe Conway
Date:

Aditya wrote:
> On Mon, Apr 11, 2005 at 10:59:51AM -0700, Joe Conway wrote:
>>Any particular reason? Our NetApp technical rep advised nfs over iSCSI,
>>IIRC because of performance.
>
> I would mount the NetApp volume(s) as a block-level device on my server using
> iSCSI (vs. a file-level protocol like NFS) so that filesystem parameters could
> be more finely tuned and one could really make use of jumbo frames over GigE.

Actually, we're using jumbo frames over GigE with nfs too.

> I'm not sure I understand why NFS would perform better than iSCSI -- in any
> case, some large Oracle dbs at my current job are moving to iSCSI on Netapp
> and in that environment both Oracle and Netapp advise iSCSI (probably because
> Oracle uses the block-level device directly), so I suspect the difference in
> performance is minimal.

We also have Oracle DBs on NFS-mounted NetApp volumes, again per the local
guru's advice. It might be one of those things that is still being
debated even within NetApp's ranks (or maybe our info is dated; worth a
check).

Thanks,

Joe

Re: [sfpug] DATA directory on network attached storage

From: Aditya
Date:

On Mon, Apr 11, 2005 at 10:59:51AM -0700, Joe Conway wrote:
> >FWIW, if I were to do this anew, I would probably opt for iSCSI over GigE
> >with
> >a NetApp.
>
> Any particular reason? Our NetApp technical rep advised nfs over iSCSI,
> IIRC because of performance.

I would mount the NetApp volume(s) as a block-level device on my server using
iSCSI (vs. a file-level protocol like NFS) so that filesystem parameters could
be more finely tuned and one could really make use of jumbo frames over GigE.
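
Very roughly, the kind of tuning I have in mind (interface and device names
are placeholders only):

    # Jumbo frames on the GigE interface carrying the storage traffic
    ifconfig eth1 mtu 9000

    # Local filesystem on the iSCSI LUN, with mount options tuned as you see fit
    mkfs -t ext3 /dev/sdb1
    mount -o noatime /dev/sdb1 /var/lib/pgsql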

But that level of tuning depends on load, after all, and with a NetApp you can
have both, so maybe start with your databases on an NFS volume on the
NetApp, and when you have a better idea of the tuning requirements, move them
over to an iSCSI LUN.

I'm not sure I understand why NFS would perform better than iSCSI -- in any
case, some large Oracle dbs at my current job are moving to iSCSI on Netapp
and in that environment both Oracle and Netapp advise iSCSI (probably because
Oracle uses the block-level device directly), so I suspect the difference in
performance is minimal.

Adi