Discussion: Re: Upgrade to dual processor machine?

Re: Upgrade to dual processor machine?

From:
"Josh Berkus"
Date:
Henrik,

First off, I'm moving this discussion to the PGSQL-PERFORMANCE list,
where it belongs.   To subscribe, send the message "subscribe
pgsql-perform your@email.address" to "majordomo@postgresql.org".

> This is what I figured out now:
>
> 1) RAM available: 1024 MB, there's nothing else but postgres on this
>    machine, so if I calculate 128 MB for Linux, there are 896 MB left
>    for Postgres.
>
> 2) 70 % of 896 MB is 627 MB
>
> Now, if I follow your instructions:
>
> 250K +
> 8.2K * 128 (shared_buffers) = 1049.6K +
> 14.2K * 64 (max_connections) = 908.8K +
> 1024K * 5000 (average number of requests per minute) = 5120000K
> ===============================================================
> 5122208.4K ==> 5002.16 MB
>
> this is a little bit more than I have available, isn't it? :(((
>
> are you sure that this has got to be the "average number of requests
> per minute" and not "per second"?  It seems so much, doesn't it?
>
> what am I supposed to do now?

Well, now it gets more complicated.   You need to determine:
A) The median processing time of each of those requests.
B) The amount of Sort_mem actually required for each request.

I recommend "per minute" because that's an easy over-estimate ... few
requests last a full minute, and as a result
average-requests-per-minute gives you a safe gauge of maximum
concurrent requests (in transactional database environments), which is
really what we are trying to determine.

Um, you do know that I'm talking about *database* requests -- that is,
queries -- and not web page requests, yes?  If you're using server-side
caching, there can be a *huge* difference.

If you have 5000 requests per minute, and only 64 connections, then I
can hypothesize that:
1) you are doing some kind of connection pooling;
2) those are exclusively *read-only* requests;
3) those are very simple requests, or at least processed very quickly.

If all of the above is true, then you can probably base your calculation
on requests-per-second, rather than requests-per-minute.
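
As a rough illustration (a sketch of this editor's, using the figures
quoted above), redoing the arithmetic at about 84 requests per second
(5000/60) instead of 5000 per minute:

  # 250K base + 8.2K per shared buffer + 14.2K per connection
  # + 1024K sort_mem per concurrent request (~84 req/sec)
  $ echo "250 + 8.2 * 128 + 14.2 * 64 + 1024 * 84" | bc
  88224.4

That comes to roughly 86 MB, which fits comfortably inside a 627 MB
budget.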

Then, of course, it becomes an iterative process.  You change the
settings, restart the database server, and watch the memory used by
the PostgreSQL processes.   Your goal is to have that memory usage
hover around 700mb during heavy usage periods (any less, and you are
throttling the database through scarcity of RAM), but never, ever,
force usage of swap memory, which will slow down the server 10-fold.
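
One rough way to do that watching (a sketch, not part of the original
mail; process names and ps output vary by platform) is to total the
resident set sizes of the PostgreSQL processes and keep an eye on swap:

  # Total RSS of the postgres processes, in MB.  Note that summing RSS
  # double-counts the shared buffers, so treat this as an upper bound.
  $ ps aux | awk '/postgres/ && !/awk/ {sum += $6} END {print sum/1024 " MB"}'
  # The swap "used" column should stay at or very near zero:
  $ free -m | grep -i '^swap'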

If you see the RAM only at half that, but the processor at 90%+, then
you should consider upgrading your processor.  But you're more likely
to run out of RAM first.   I believe you haven't already only because,
with your low shared-buffer settings, most of the potential sort_mem
is going unused.

BTW, if you are *really* getting 5000 queries per minute, I would
strongly recommend doubling your RAM.

-Josh Berkus


>
> ----- Original Message -----
> From: "Josh Berkus" <josh@agliodbs.com>
> To: <pgsql-general@postgresql.org>
> Cc: <steffen@city-map.de>
> Sent: Tuesday, November 12, 2002 9:05 PM
> Subject: Re: Upgrade to dual processor machine?
>
>
> Henrik,
>
> "So, where do i find and change shmmax shmall settings ??
> What should I put there?
>
> What is a recommended value for shared buffers in postgresql.conf ?"
>
> There is no "recommended value."   You have to calculate this
> relatively:
>
> 1) Figure out how much RAM your server has available for PostgreSQL.
> For example, I have one server on which I allocate 256 MB for Apache,
> 128 MB for Linux, and thus have 512 MB available for Postgres.
>
> 2) Calculate out the memory settings to use 70% of that amount of RAM
> in regular usage.   Please beware that sort_mem is *not* shared,
> meaning that it will be multiplied by the number of concurrent
> requests requiring sorting.  Thus, your calculation (in K) should be:
>
> 250K  +
> 8.2K * shared_buffers +
> 14.2K * max_connections +
> sort_mem * average number of requests per minute
> =====================================
> memory available to postgresql in K * 0.7
>
> You will also have to set SHMMAX and SHMALL to accept this memory
> allocation.
> Since shmmax is set in bytes, I generally feel safe making it:
> 1024 * 0.5 * memory available to postgresql in K
>
> Setting them is done simply:
> $ echo 134217728 >/proc/sys/kernel/shmall
> $ echo 134217728 >/proc/sys/kernel/shmmax
>
> This is all taken from the postgresql documentation, with some
> experience:
>
> http://www.us.postgresql.org/users-lounge/docs/7.2/postgres/runtime.html
>
> --
> -Josh Berkus
>  Aglio Database Solutions
>  San Francisco
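
A side note on the kernel settings above (an addition for this
write-up, not from the original thread): echoing values into /proc
does not survive a reboot.  On a Linux 2.4-era system the same values
can be made persistent, for example:

  $ echo "kernel.shmmax = 134217728" >> /etc/sysctl.conf
  $ echo "kernel.shmall = 134217728" >> /etc/sysctl.conf
  $ sysctl -p    # apply without rebooting

Also note that on Linux SHMALL is counted in pages (typically 4 KB),
not bytes, so a byte-sized value like the one above simply
over-allocates and is harmless.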

Re: Upgrade to dual processor machine?

От
"Josh Berkus"
Дата:
Henrik,

Oops!   Two corrections, below.   Sorry about the typos.

> First off, I'm moving this discussion to the PGSQL-PERFORMANCE list,
> where it belongs.   To subscribe, send the message "subscribe
> pgsql-perform your@email.address" to "majordomo@postgresql.org".

Sorry ... that's "subscribe pgsql-performance your@email.address".

> Then, of course, it becomes an iterative process.  You change the
> settings, restart the database server, and watch the memory used by
> the PostgreSQL processes.   Your goal is to have that memory usage
> hover around 700mb during heavy usage periods (any less, and you are

That's "600mb", not "700mb".

I also just read Tom's response regarding reserving more RAM for kernel
buffering.   This hasn't been my experience, but then I work mostly
with transactional databases (many read-write requests) rather than
read-only databases.

As such, I'd be interested in a test:  Calculate out your PostgreSQL
RAM to total, say, 256mb and run a speed test on the database.   Then
calculate it out to the previous 600mb, and do the same.   I'd like to
know the results.
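
One concrete way to run that comparison (a sketch; it assumes
contrib/pgbench, which ships with 7.2, and a scratch database -- the
scale factor and client counts here are only illustrative):

  # Initialize roughly 750 MB of test data, so that the dataset
  # exceeds either memory setting:
  $ createdb bench
  $ pgbench -i -s 50 bench
  # With postgresql.conf sized for ~256 MB (restart the postmaster):
  $ pgbench -c 20 -t 1000 bench
  # Re-size postgresql.conf for ~600 MB, restart, and repeat:
  $ pgbench -c 20 -t 1000 bench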

In fact, e-mail me off list if you want further help -- I'm interested
in the outcome.

-Josh Berkus

Re: Upgrade to dual processor machine?

От
"Shridhar Daithankar"
Дата:
On 13 Nov 2002 at 9:21, Josh Berkus wrote:
> I also just read Tom's response regarding reserving more RAM for kernel
> buffering.   This hasn't been my experience, but then I work mostly
> with transactional databases (many read-write requests) rather than
> read-only databases.
>
> As such, I'd be interested in a test:  Calculate out your PostgreSQL
> RAM to total, say, 256mb and run a speed test on the database.   Then
> calculate it out to the previous 600mb, and do the same.   I'd like to
> know the results.

I would like to add here. Let's say you do a large amount of reads and
writes. Make sure that the size of the data exceeds the RAM allocated
to PostgreSQL. Testing 100 MB of data with 256 MB or 600 MB of buffers
isn't going to make any difference in performance. If this is the
practical scenario, then Tom's suggestion is a better solution.

IMO the PostgreSQL buffers should be enough to hold the data required
for transactions plus some good chunk of the read-only data. Read-only
data can, for the most part, be left to the OS buffer cache.
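
A quick way to check which situation applies (a sketch, assuming a
7.x-style $PGDATA layout -- pg_database_size() did not exist yet):

  # On-disk size of all databases vs. the configured shared buffers
  $ du -sh $PGDATA/base
  $ grep shared_buffers $PGDATA/postgresql.conf    # 8 KB per buffer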

HTH

Bye
 Shridhar

--
Sometimes a feeling is all we humans have to go on.
        -- Kirk, "A Taste of Armageddon", stardate 3193.9


Re: [GENERAL] Upgrade to dual processor machine?

From:
"Henrik Steffen"
Date:
OK, now I've subscribed to performance, too ;-)

> Well, now it gets more complicated.   You need to determine:
> A) The median processing time of each of those requests.
> B) The amount of Sort_mem actually required for each request.

As I am dealing with Postgres for a webserver, the median processing
time of each request has got to be <1 sec. How can I measure
the amount of sort_mem needed?

At the highest there are perhaps 20 concurrent database requests
at the same time. I have enabled 64 maximum connections, because
I have Apache configured to use persistent database connections
via mod_perl and Pg.pm. I set Apache to run MaxClients at 40
(there could additionally be some manual psql connections).

> Um, you do know that I'm talking about *database* requests -- that is,
> queries -- and not web page requests, yes?  If you're using server-side
> caching, there can be a *huge* difference.

Yes, I did understand that. When I measured 5,000 requests per minute,
I looked at pgsql.log (after enabling the debug options and counting
all the queries within a minute). So server-side caching does not
appear within these 5,000 requests... that's for sure.
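
For reference, the counting can be done along these lines (a sketch;
the exact pattern depends on which debug and timestamp options are
enabled in your log):

  # All logged queries, then only those falling in one sample minute:
  $ grep -c 'query:' pgsql.log
  $ grep '18:05:' pgsql.log | grep -c 'query:'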

> If you have 5000 requests per minute, and only 64 connections, then I
> can hypothesize that:
> 1) you are doing some kind of connection pooling;
> 2) those are exclusively *read-only* requests;
> 3) those are very simple requests, or at least processed very quickly

1) Correct.
2) No, not exclusively -- but as it's a webserver application
(www.city-map.de), most users just read from the database, though they
always trigger an update to raise some statistics (page-view counters,
etc.). Furthermore, there is an internal content-management system
where about 100 editors do inserts and updates. But there are of course
more visitors (>10,000 daily) than editors.
3) Yes, many requests are kept very simple for better performance in a
web application.

Swapping has never happened so far.


--

With kind regards,

Henrik Steffen
Managing Director

top concepts Internetmarketing GmbH
Am Steinkamp 7 - D-21684 Stade - Germany
--------------------------------------------------------
http://www.topconcepts.com          Tel. +49 4141 991230
mail: steffen@topconcepts.com       Fax. +49 4141 991233
--------------------------------------------------------
24h-Support Hotline:  +49 1908 34697 (EUR 1.86/Min,topc)
--------------------------------------------------------
Your SMS gateway: NOW NEW at: http://sms.city-map.de
System partners wanted: http://www.franchise.city-map.de
--------------------------------------------------------
Commercial register: AG Stade HRB 5811 - VAT ID: DE 213645563
--------------------------------------------------------

Re: [GENERAL] Upgrade to dual processor machine?

From:
Josh Berkus
Date:
Henrik,

> > Well, now it gets more complicated.   You need to determine:
> > A) The median processing time of each of those requests.
> > B) The amount of Sort_mem actually required for each request.
>
> As I am dealing with Postgres for a webserver, the median processing
> time of each request has got to be <1 sec. How can I measure
> the amount of sort_mem needed?

Through experimentation, mostly.  Since you are in a high-volume,
small-query environment, I would try *lowering* your sort_mem to see
if that has an adverse impact on queries.   A good quick test would be
to cut your sort_mem in half, and then run an EXPLAIN ANALYZE on the
query from which you expect the largest result set, and see whether
the sort time on the query has increased.
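
A sketch of that test (sort_mem can be changed per session, and
EXPLAIN ANALYZE reports actual run times in 7.2+; "big_table" and the
ORDER BY column are placeholders for your own largest query):

  $ psql mydb
  mydb=> EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_column;
  mydb=> SET sort_mem = 512;  -- half of a 1024K setting, session only
  mydb=> EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_column;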

--
-Josh Berkus
 Aglio Database Solutions
 San Francisco