Thread: databases limit
Hi,

What/where do I need to set or configure to limit the number of databases that a normal user can create?

-- 
Fabio Angeluci Martins
famartins at gashost com
www.gashost.com
UIN: 24397990
On Wed, Feb 05, 2003 at 18:48:44 -0200,
  Fabio Angeluci Martins <famartins@gashost.com> wrote:
> Hi,
>
> What/where do I need to set or configure to limit the number of databases
> that a normal user can create?

Either a user can create as many databases as they want, or they can't create any. If you use createuser to create the user, you will be asked which of the two cases applies to that user.

In recent versions of Postgres the superuser can create a database that is owned by another user. So if users don't need too many databases, or don't need them created instantly, have them send requests to a superuser to create the databases on their behalf.

In 7.3 schemas were added, and these might allow you to limit each user to one (or just a few) databases, depending on what you are trying to achieve.
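A sketch of the two mechanisms described above. The role and database names here are hypothetical; the all-or-nothing CREATEDB privilege and the OWNER clause are as described in the message:

```sql
-- The CREATEDB privilege is all-or-nothing: there is no per-user
-- quota on the number of databases. Deny it outright:
CREATE USER appuser NOCREATEDB;

-- Then, as a superuser, create databases on the user's behalf
-- on request, handing over ownership:
CREATE DATABASE appdb OWNER appuser;
```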
Bruno Wolff III <bruno@wolff.to> writes:
> On Wed, Feb 05, 2003 at 18:48:44 -0200,
>   Fabio Angeluci Martins <famartins@gashost.com> wrote:
>> What/where do I need to set or configure to limit the number of databases
>> that a normal user can create?

> Either a user can create as many databases as they want, or they can't
> create any.

I have a feeling that what the questioner really means is "how can I limit the resources consumed by any one database user?" In that case the problem is a lot bigger than limiting the number of times you can run CREATE DATABASE. What about creating a lot of tables in a single database? Or attaching a lot of indexes to a table? Or inserting a huge number of rows into one table? Or running queries that take forever and chew up all the CPU cycles?

Postgres doesn't really attempt to deal with any of these problems.

			regards, tom lane
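Of the problems listed above, runaway queries are the one for which a coarse knob does exist: if I recall correctly, 7.3 added a statement_timeout setting that aborts any statement running longer than the given number of milliseconds. This is only a partial mitigation, not the resource-limiting scheme being discussed, and it does nothing about table counts, indexes, or disk usage:

```sql
-- Abort any statement in this session that runs longer than 30 seconds
-- (statement_timeout is in milliseconds; 0 disables the limit).
SET statement_timeout = 30000;

-- It can also be attached to a role, applying to all of that
-- role's future sessions (role name is hypothetical):
ALTER USER appuser SET statement_timeout = 30000;
```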
On Thu, Feb 06, 2003 at 12:30:03AM -0500, Tom Lane wrote:
> I have a feeling that what the questioner really means is "how can I
> limit the resources consumed by any one database user?" In that case

(I'm moving this to -hackers because I think it likely belongs there.)

I note that this question has come up before, and several people have been sceptical of its utility. In particular, in this thread

<http://groups.google.ca/groups?hl=en&lr=&ie=UTF-8&threadm=Pine.LNX.4.21.0212221510560.15719-100000%40linuxworld.com.au&rnum=1&prev=/groups%3Fq%3Dlimit%2Bresources%2B%2Bgroup:comp.databases.postgresql.*%26hl%3Den%26lr%3D%26ie%3DUTF-8%26selm%3DPine.LNX.4.21.0212221510560.15719-100000%2540linuxworld.com.au%26rnum%3D1>

(sorry about the long line: I just get errors searching at the official archives) Tom Lane notes that you could just run another back end to make things more secure.

That much is true; but I'm wondering whether it might be worth it to limit how much a _database_ can use. For instance, suppose I have a number of databases which are likely to see sporadic heavy loads, and there are limits on how slow the responses are allowed to be. So I have to do some work to guarantee that, for instance, certain tables from each database don't get flushed from the buffers.

I can do this now by setting up separate postmasters. That way, each gets its own shared memory segment. Those "certain tables" will be the ones that are frequently accessed, and so they'll always remain in the buffer even if the other database is busy (because the two databases don't share a buffer). (I'm imagining the case -- not totally imaginary -- where one of the databases tends to be accessed heavily during one part of a 24-hour day, and another database gets hit more during another part of the same day.)

The problem with this scenario is that it makes administration somewhat awkward as soon as you have to do this 5 or 6 times.
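The separate-postmaster workaround described above can be sketched roughly as follows. The data directories, ports, and buffer counts are hypothetical, and the flags are the 7.3-era command-line options; the point is that each cluster gets its own data directory and therefore its own private shared-buffer pool:

```shell
# One cluster (and one shared memory segment) per database that
# needs guaranteed buffer space. Paths are hypothetical.
initdb -D /srv/pg/clusterA
initdb -D /srv/pg/clusterB

# Start each postmaster on its own port with its own buffer pool.
# -B sets the number of shared buffers; -p sets the listen port.
postmaster -D /srv/pg/clusterA -p 5432 -B 4096 &
postmaster -D /srv/pg/clusterB -p 5433 -B 4096 &
```

Because the buffer pools are disjoint, a heavy load on clusterB cannot evict clusterA's hot tables -- at the administrative cost of running and tuning N postmasters.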
I was thinking that it might be nice to be able to limit how much of the total resources a given database can consume. If one database were really busy, that would not mean that other databases would automatically be more sluggish, because they would still have some guaranteed minimum percentage of the total resources.

So, anyone care to speculate?

-- 
----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M2P 2A8
                                         +1 416 646 3304 x110
Thanks very much for the help, Andrew and Tom. I'm still waiting for a response on the hackers list too...

On Thursday 06 February 2003 14:15, you wrote:
> On Thu, Feb 06, 2003 at 12:30:03AM -0500, Tom Lane wrote:
> > I have a feeling that what the questioner really means is "how can I
> > limit the resources consumed by any one database user?" In which case
>
> (I'm moving this to -hackers 'cause I think it likely belongs there.)
>
> I note that this question has come up before, and several people have
> been sceptical of its utility. In particular, in this thread
>
> <http://groups.google.ca/groups?hl=en&lr=&ie=UTF-8&threadm=Pine.LNX.4.21.0212221510560.15719-100000%40linuxworld.com.au&rnum=1&prev=/groups%3Fq%3Dlimit%2Bresources%2B%2Bgroup:comp.databases.postgresql.*%26hl%3Den%26lr%3D%26ie%3DUTF-8%26selm%3DPine.LNX.4.21.0212221510560.15719-100000%2540linuxworld.com.au%26rnum%3D1>
>
> (sorry about the long line: I just get errors searching at the official
> archives) Tom Lane notes that you could just run another back end to
> make things more secure.
>
> That much is true; but I'm wondering whether it might be worth it to
> limit how much a _database_ can use. For instance, suppose I have a
> number of databases which are likely to see sporadic heavy loads.
> There are limitations on how slow the response can be. So I have to
> do some work to guarantee that, for instance, certain tables from
> each database don't get flushed from the buffers.
>
> I can do this now by setting up separate postmasters. That way, each
> gets its own shared memory segment. Those "certain tables" will be
> ones that are frequently accessed, and so they'll always remain in
> the buffer, even if the other database is busy (because the two
> databases don't share a buffer). (I'm imagining the case -- not
> totally imaginary -- where one of the databases tends to be accessed
> heavily during one part of a 24 hour day, and another database gets
> hit more on another part of the same day.)
>
> The problem with this scenario is that it makes administration
> somewhat awkward as soon as you have to do this 5 or 6 times. I was
> thinking that it might be nice to be able to limit how much of the
> total resources a given database can consume. If one database were
> really busy, that would not mean that other databases would
> automatically be more sluggish, because they would still have some
> guaranteed minimum percentage of the total resources.
>
> So, anyone care to speculate?

-- 
Fabio Angeluci Martins
famartins at gashost com
www.gashost.com
UIN: 24397990