Discussion: Configuring Shared Buffers
Hello there,
Folks, I am configuring a new Postgres server that is going to replace the company's critical server.
I have a good machine:
Quad-Core 2.5 GHz
4 GB RAM
1 dedicated 300 GB SATA HD for the PostgreSQL directory
My question is:
How much can I assign to the "shared_buffers" parameter?
The default currently assigned is 24 MB.
Thanks
Rafael Domiciano
On Mon, Jun 30, 2008 at 02:52:05PM -0300, Rafael Domiciano wrote:

> Folks, I am configuring a new Postgres Server, that's gonna substitute the
> critical server of the enterprise.
> I have a good machine:
> Quad-Core 2.5 Ghz
> 4 Gb RAM
> 1 Dedicated HD 300 Gb Sata for the PostgreSQL Directory

If it's the "critical server", you'd want to use a RAID1 for the disk so your server and data survive a hard disk crash.

> My question is:
> How much I could assign for the "Shared Buffers" parameter?
>
> The default assigned now is 24 Mb.

That's rather low. I suppose you may safely say 200 MB here (check that you don't run into OS limits; they might need raising). You may also want to increase work_mem, but that depends on what the DB does, how many concurrent connections you have, and so on. maintenance_work_mem may safely be set rather high; it speeds up vacuum and similar operations and is only used during them. Also look at effective_cache_size and max_fsm_pages.

BTW: What version are you going to use?

Tino.

--
"What we nourish flourishes." - "Was wir nähren erblüht."
www.craniosacralzentrum.de
www.forteego.de
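[The parameters Tino lists all live in postgresql.conf. A sketch of what they might look like on a 4 GB PostgreSQL 8.3 box; apart from the 200 MB shared_buffers figure from the thread, the values below are illustrative assumptions, not recommendations from the posters:

```
# postgresql.conf sketch (8.3-era parameters; values are illustrative)
shared_buffers = 200MB          # Tino's suggestion; may require raising the kernel's SHMMAX
work_mem = 4MB                  # per sort/hash operation; tune to workload and connection count
maintenance_work_mem = 256MB    # used only by VACUUM, CREATE INDEX, and similar maintenance
effective_cache_size = 2GB      # planner hint: roughly OS file cache plus shared_buffers
max_fsm_pages = 204800          # free-space map size; an 8.3 setting, removed in 8.4
```

Note that changing shared_buffers requires a server restart to take effect.]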
The Postgres version is 8.3.3 and I am using Fedora Core 8.
On the current server I have around 70 connections at the same time; I am setting this to 100.
On Mon, Jun 30, 2008 at 04:01:14PM -0300, Rafael Domiciano wrote:

> The Postgres version is 8.3.3 and I am using Fedora Core 8.
> I have in the actual server around 70 connections the same time. I am
> assigning for this 100.

And what does the server do? Mainly SELECT / UPDATE / INSERT / DELETE? How is the workload distributed? Are queries very complex or pretty simple? Are there huge joins across many tables? How many tables do you have, and what are the largest ones? Are we talking about thousands of rows, or millions, or lots of millions?

Tino.

--
"What we nourish flourishes." - "Was wir nähren erblüht."
www.craniosacralzentrum.de
www.forteego.de
Yes, the server just does SELECT, UPDATE, INSERT and DELETE.
The queries are pretty simple; there are no huge joins across tables.
On this server I have around 500 tables; the largest have 9 million and 5 million tuples.
This server is our "authenticator"; the response needs to be immediate.
Today, monitoring of the server is done using the unix command: loguptime
Over time we discovered that when loguptime goes above 1.0 the responses begin to lag and the applications start to time out.
Rafael Domiciano wrote:

> Yes, the server just does S, U, I and D.
> The queries is pretty simples, don't have huge joins across tables...
> In this server I have around 500 tables, the largest are:
> 9 millions tuples | 5 millions tuples
>
> This server is our "authenticator"; the response is need to be "imediatly".
> Today, the monitoring of the server is done using the unix command:
> loguptime
> With the time we discovered that when loguptime is up than 1.0 the response
> begin to late, and the applications begin to take "Time-out".

Have you tuned autovacuum? Maybe you need it to be more frequent on certain tables or something.

--
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
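[If autovacuum were re-enabled, the global knobs in 8.3 sit in postgresql.conf; a sketch with illustrative values. In 8.3, per-table overrides of the kind Alvaro suggests go through the pg_autovacuum system catalog rather than table options:

```
# postgresql.conf autovacuum sketch for 8.3 (values illustrative)
autovacuum = on
autovacuum_naptime = 1min                 # how often each database is checked
autovacuum_vacuum_scale_factor = 0.2      # vacuum a table once ~20% of its rows are dead
autovacuum_vacuum_cost_delay = 20ms       # throttle I/O so foreground queries aren't starved
```
]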
No, I don't have autovacuum.
We tried it some time ago, and the applications began to time out.
I think we don't need autovacuum; during the day the application doesn't get slow, and sometimes it is faster (maybe because the applications are used less).
The current machine is a Dual Core Xeon 1.5 GHz with 2 GB RAM.
Rafael Domiciano wrote:

> No, i don't have autovacum.
> We tried it sometime ago, and the applications began to take "time-out".
> I think we don't need autovacum, along the day the application don't get
> slow, sometimes is faster (maybe the use of the applications is less).
> The actual machine is Dual Core Xeon 1.5 and 2 Gb RAM.

So how are you vacuuming your tables?

--
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
I have a schedule to run vacuum at night; it starts at 01:00 AM.
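[Rafael doesn't show the job itself; a nightly schedule like the one he describes might look like this hypothetical crontab entry:

```
# Hypothetical crontab line: vacuum and analyze all databases at 01:00 every night
0 1 * * * /usr/bin/vacuumdb --all --analyze --quiet
```
]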
Rafael Domiciano wrote:

> I have schedelus to run vacuum at the night, it starts at 01:00 AM

Do you execute lots of updates or deletes on some tables?

--
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
No, but Inserts and Updates are more common than Deletes.
While I have around 50,000 Inserts and Updates, Deletes are around 5,000.
Updates = delete + insert; inserts only create dead tuples if they fail. So, assuming no failed inserts, you're creating 55k dead tuples a day.

You can run VACUUM VERBOSE to get a report on how many dead tuples your tables / database have, to get an idea of whether you're vacuuming often enough. If vacuuming makes the machine too slow, look at setting vacuum_cost_delay = 10 and see if that helps. Vacuums will take longer but won't be as intrusive.

On Mon, Jun 30, 2008 at 6:12 PM, Rafael Domiciano <rafael.domiciano@gmail.com> wrote:
> No, but is more common Insert and Update than delete.
> While I have around 50.000 Insert and Update, in delete is 5.000
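[A sketch of the commands Scott mentions; the table name is a placeholder, so substitute your own tables:

```sql
-- Report live/dead tuple counts for one table (omit the name to scan the whole database)
VACUUM VERBOSE big_auth_table;

-- Throttle vacuum's I/O for this session only; vacuum runs longer but is less intrusive
SET vacuum_cost_delay = 10;
VACUUM VERBOSE big_auth_table;
```
]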
Rafael Domiciano wrote:

> The Postgres version is 8.3.3 and I am using Fedora Core 8.
> I have in the actual server around 70 connections the same time. I am
> assigning for this 100.

100 MB? That's not very much either. You can probably get a hefty performance boost by upping it a lot more (depending on whether the machine is doing something else, or Postgres is the only service running on it).

--
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
On Mon, Jun 30, 2008 at 7:09 PM, Alvaro Herrera <alvherre@commandprompt.com> wrote:

> 100 MB? That's not very much either. You can probably get a hefty
> performance boost by upping it a lot more (depending on whether the
> machine is doing something else, or Postgres is the only service running
> on it.)

I think he was talking about max connections. As for shared_buffers, I'd start out at about 25% of main memory for 8.3.3 and test that. Also, he should be looking at his setting for work_mem.
Scott Marlowe wrote:

> I think he was talking about max connections.

I'm not sure. My suggestion to Rafael is to try subscribing to the pgsql-es-ayuda list and asking there (assuming he's really Spanish-speaking).

--
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
I agree. I have a similar system that I use for development purposes and have shared_buffers sitting comfortably around 1 GB. On production systems with 16 GB of RAM, I've seen this as high as 12 GB. There is talk nowadays, however, that this setting could drop back down to defaults on modern installations, letting the OS handle cached memory as it sees fit.

In any case, the biggest performance gain I see for you would be setting work_mem appropriately. This is the memory Postgres is permitted to use for sorts, merges, hash joins, etc. before being forced to disk. It defaults to 1 MB; in my opinion, this is far too low. Keep in mind that this memory is allocated per sort/hash operation, so a complicated query could use several allocations. Even so, I think you could raise it considerably. If your system is a dedicated Postgres box, I would take the total memory, subtract what's needed for the OS, subtract what you decided to use for shared_buffers, and divide the rest by your 100 connections. So, for you, I see this around 30 MB. On my development box with only a few connections, I have this around 500 MB and sometimes spike it to 1.2 GB on the fly before a long-running query.

Cheers,
Kevin

Alvaro Herrera wrote:

> 100 MB? That's not very much either. You can probably get a hefty
> performance boost by upping it a lot more (depending on whether the
> machine is doing something else, or Postgres is the only service running
> on it.)
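[Kevin's back-of-the-envelope division, applied to the 4 GB machine in this thread; the OS and shared_buffers figures are assumptions for illustration:

```
# 4096 MB total - ~512 MB for the OS - 1024 MB shared_buffers = 2560 MB left over
# 2560 MB / 100 max connections ≈ 25 MB per connection
work_mem = 25MB    # allocated per sort/hash operation, so err on the low side
```
]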
Yes, I was talking about 100 max connections.
I am doing some benchmark tests with different configurations.
I'll try what you said, Kevin.
Thank you all.