Discussion: appropriate sort_mem & shared buffers
Hello,

for a machine with 2 gig mem (1 GHz PIII), is

    sort_mem = 128           # 64 Mebibyte
    shared_buffers = 65536   # 512 Mebibyte

appropriate and reasonable?

Markus Bertheau
Markus Bertheau <twanger@bluetwanger.de> writes:
> for a machine with 2 gig mem (1ghz pIII) is
> sort_mem = 128 # 64 Mebibyte

sort_mem is expressed in Kbytes, so you are not getting the results you
think. sort_mem = 65536 would equate to 64MB.

> shared_buffers = 65536 # 512 Mebibyte
> appropriate and reasonable?

I'd say both of those are on the high side. I'd go with half or a
quarter of those numbers as a starting point, ie sort_mem maybe about
16meg (16000), shared_buffers also maybe 16000. But since you didn't
say how many concurrent backends you expect to run, this is all just
theorizing in the dark.

			regards, tom lane
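[Editor's note: Tom's suggested starting point could be written out in postgresql.conf roughly as below. The values and comments are illustrative only, restating the units discussed above, not an official recommendation.]

```ini
# postgresql.conf -- illustrative starting point for a 2 GB machine,
# per the suggestion above.
# sort_mem is in kilobytes and applies per sort, per backend.
sort_mem = 16000           # ~16 MB per sort operation
# shared_buffers counts 8 KB pages: 16000 * 8 KB = ~125 MB, shared once.
shared_buffers = 16000
```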
On Fri, 21 Dec 2001, Tom Lane wrote:
> But since you didn't say how many concurrent backends you expect to
> run, this is all just theorizing in the dark.

What is the relation between the number of connections and buffers?

For example, I have two needs. One is to have a few connections for
some data cleanup. There I figure I should have higher buffers/sort_mem
(buffers currently set to 8000 with 1GB of RAM).

The other is a database for a web site where I would expect many users.
I was thinking that the higher the buffer size, the longer it would
take for the database to allocate the memory before it can serve the
query. Given that on the second machine the tables are also smaller, I
was thinking something on the order of 500 buffers (the machine has
512MB and I expect peak load to be 5 to 10 simultaneous users with
mostly simple queries on relatively small tables or indexed access).
Francisco Reyes <lists@natserv.com> writes:
>> But since you didn't say how many concurrent backends you expect to
>> run, this is all just theorizing in the dark.

> What is the relation between number of connections and buffers?

You need to ensure you have enough RAM left over (after allocating the
buffers) for all the backends you want to run. Don't forget that
sort_mem is per-backend, too.

My gut feeling is that a few thousand buffers (corresponding to a few
tens of megabytes of RAM) is the right range for production situations
on most modern machines.

As an example, yesterday I was running some tests of pgbench with scale
factor 50 and 64 clients, on a Linux box with 256MB RAM. At -B 4096 I
got about 40 transactions/sec; at -B 8192 I got about 55; at -B 16384
it dropped off to about 32 t/s. vmstat showed considerable swapping
activity in the last case, so evidently I had run the machine out of
RAM by allocating too much to buffers.

> I was thinking that the higher the buffer size the longer it would
> take for the database to allocate the memory before it can serve the
> query.

No. The shared buffers are a static allocation that is made once when
the postmaster starts.

			regards, tom lane
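[Editor's note: the budgeting rule Tom describes (shared buffers allocated once, sort_mem potentially consumed by every backend at once) can be sketched as below. The function name, the per-backend overhead figure, and the sort_mem value are illustrative assumptions, not measurements from his pgbench run.]

```python
# Rough worst-case memory budget for a PostgreSQL server, following
# the sizing rule in the message above. All names and numbers here
# are illustrative assumptions.

PAGE_SIZE_KB = 8  # PostgreSQL shared buffers are counted in 8 KB pages


def postgres_memory_kb(shared_buffers, sort_mem_kb, max_backends,
                       per_backend_overhead_kb=1024):
    """Worst-case RAM use in KB: shared buffers are allocated once at
    postmaster start; sort_mem can be used by every backend at once."""
    shared = shared_buffers * PAGE_SIZE_KB
    backends = max_backends * (sort_mem_kb + per_backend_overhead_kb)
    return shared + backends


# 64 clients, as in the pgbench example: -B 16384 alone pins 128 MB of
# a 256 MB box in shared buffers, leaving little headroom for the
# backends themselves -- hence the swapping vmstat showed.
total = postgres_memory_kb(shared_buffers=16384, sort_mem_kb=512,
                           max_backends=64)
print(total // 1024, "MB")  # → 224 MB, close to the whole machine
```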
On Mon, 31 Dec 2001, Tom Lane wrote:
> > I was thinking that the higher the buffer size the longer it would
> > take for the database to allocate the memory before it can serve
> > the query.
>
> No. The shared buffers are a static allocation that is made once when
> the postmaster starts.

The web site is using PHP, so a connection is established every time.
The little I have read about permanent (persistent?) connections seems
promising, so I will look into that later. After this is in production
I will start looking at the archives of the pgsql-php list to see the
current state of PHP persistent connections.

So if PHP is establishing a new connection every time, isn't a new
instance of the backend, together with the overhead of allocating
memory space, being started? On the Apache web server one can have a
few instances running, waiting for requests. Can that be done with
pgsql, or does it even make sense to have such an arrangement with
pgsql?