Discussion: Re: [HACKERS] keeping track of connections
> On Wed, 3 June 1998, at 20:29:52, David Gould wrote:
>
> > Ok, can I laugh now?
> >
> > Seriously, if we are going to have a separate backend to do the table access
> > (and I agree that this is both necessary and reasonable), why not have it
> > be a plain ordinary backend like all the others and just connect to it from
> > the client? Why get the postmaster involved at all?
> > I'm confused, I guess.
> >
> > First, modifying the postmaster to add services has a couple of problems:
>
> I wasn't quite suggesting this, I think we should just modify the
> postmaster to store the information. As you say below, doing queries
> is probably bad, shared memory seems like the way to go. I'll assume
> we'll use a different block of shared memory than the one currently
> used.

Oh, ok. Some suggestions have been made that the postmaster would open a
connection to its own backend to do queries. I was responding to this.
I agree that we should just store the information in shared memory.

> do you know how shared memory is currently used? I'm fairly clueless
> on this aspect.

The shared memory stores the process table, the lock table, the buffer cache,
and the shared invalidate list, and a couple of other minor things that all
the backends need to know about.

Strangely, the shared memory does not share a copy of the system catalog
cache. This seems like a real misfeature, as the catalog data is very useful
to all the backends.

The shared memory is managed by its own allocator. It is not hard to carve
out a block for a new use; the only real trick is to make sure you account
for it when the system starts up so it can get the size right, as the shared
memory is not extendable.

> > - we have to modify the postmaster. This adds code bloat and bugs etc, and
> >   since the same binary is also the backend, it means the backends carry
> >   around extra baggage that is only used in the postmaster.
>
> the reverse could also be said -- why does the postmaster need the
> bloat of a backend?
Well, right now the postmaster and the backend are the same binary. This
has the advantage of keeping them in sync as we make changes, and now with
Bruce's patch we can avoid an exec() on backend startup. Illustra has a
separate backend and postmaster binary. This works too, but they share a
lot of code, and sometimes a change in something you thought was only in
the backend will break the postmaster.

> > - more importantly, if the postmaster is busy processing a big select from
> >   a pseudo table or log (well, forwarding results etc), then it cannot also
> >   respond to a new connection request. Unless we multithread the postmaster.
>
> good point. I think storing this information in shared memory and
> accessing it from a view is good -- how do other dbs do this sort of
> thing?

Well, it is not really a view, although a view is a good analogy. The term
of art is "pseudo-table", that is, a table you generate on the fly. This
concept is very useful, as you can use it to read text files or rows from
some other database (think gateways) etc. It is also pretty common. Sybase
and Informix both support system-specific pseudo-tables. Illustra supports
extendable access methods where you can plug a set of functions (opentable,
getnext, update, delete, insert etc.) into the server and they create a
table interface to whatever datasource you want.

-dg

David Gould            dg@illustra.com           510.628.3783 or 510.305.9468
Informix Software      300 Lakeside Drive        Oakland, CA 94612
  - A child of five could understand this! Fetch me a child of five.
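The sizing discipline described above -- every subsystem registers its block
before startup, the segment is sized once, and nothing can grow afterwards --
can be sketched in miniature. This is a toy illustration, not PostgreSQL's
actual shared-memory code; the class and block names (SharedRegion,
"event_log", etc.) are made up for the example.

```python
# Toy sketch of a fixed-size shared region with a bump allocator.
# Every user registers its size before startup; after startup() the
# region is sized and carved up, and cannot be extended.

class SharedRegion:
    def __init__(self):
        self._requests = {}   # name -> requested size, collected pre-startup
        self._offsets = None  # name -> (offset, size), fixed at startup

    def register(self, name, size):
        if self._offsets is not None:
            raise RuntimeError("region already sized; cannot extend")
        self._requests[name] = size

    def startup(self):
        # Size the whole segment once from the registered requests.
        self._offsets = {}
        offset = 0
        for name, size in self._requests.items():
            self._offsets[name] = (offset, size)
            offset += size
        self.total_size = offset

    def block(self, name):
        return self._offsets[name]

region = SharedRegion()
region.register("proc_table", 4096)
region.register("lock_table", 8192)
region.register("event_log", 2048)   # the hypothetical new block
region.startup()
print(region.total_size)             # 14336
print(region.block("event_log"))     # (12288, 2048)
```

The key point it mirrors: forgetting to register the new block at startup
means there is no room for it later, since the segment is not extendable.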
> Oh, ok. Some suggestions have been made that the postmaster would open a
> connection to its own backend to do queries. I was responding to this.
> I agree that we should just store the information in shared memory.
>
> > do you know how shared memory is currently used? I'm fairly clueless
> > on this aspect.
>
> The shared memory stores the process table, the lock table, the buffer cache,
> and the shared invalidate list, and a couple of other minor things that all
> the backends need to know about.
>
> Strangely, the shared memory does not share a copy of the system catalog
> cache. This seems like a real misfeature, as the catalog data is very useful
> to all the backends.

On TODO list. Vadim wants to do this, perhaps for 6.4 (not sure):

	* Shared catalog cache, reduce lseek()'s by caching table size in shared area

> The shared memory is managed by its own allocator. It is not hard to carve
> out a block for a new use; the only real trick is to make sure you account
> for it when the system starts up so it can get the size right, as the shared
> memory is not extendable.
>
> > > - we have to modify the postmaster. This adds code bloat and bugs etc, and
> > >   since the same binary is also the backend, it means the backends carry
> > >   around extra baggage that is only used in the postmaster.
> >
> > the reverse could also be said -- why does the postmaster need the
> > bloat of a backend?
>
> Well, right now the postmaster and the backend are the same binary. This
> has the advantage of keeping them in sync as we make changes, and now with
> Bruce's patch we can avoid an exec() on backend startup. Illustra has a
> separate backend and postmaster binary. This works too, but they share a
> lot of code, and sometimes a change in something you thought was only in
> the backend will break the postmaster.

Then a good reason not to split them up.

> Well, it is not really a view, although a view is a good analogy. The term
> of art is pseudo-table.
> That is, a table you generate on the fly. This concept
> is very useful, as you can use it to read text files or rows from some other
> database (think gateways) etc. It is also pretty common. Sybase and Informix
> both support system-specific pseudo-tables. Illustra supports extendable
> access methods where you can plug a set of functions (opentable, getnext,
> update, delete, insert etc.) into the server and they create a table
> interface to whatever datasource you want.

Yes, this would be nice, but don't we have more important items on the
TODO list to address?

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
On Thu, 4 Jun 1998, David Gould wrote:

> Oh, ok. Some suggestions have been made that the postmaster would open a
> connection to its own backend to do queries. I was responding to this.
> I agree that we should just store the information in shared memory.

How does one get a history for long term monitoring and statistics
by storing in shared memory?

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org
> On Thu, 4 Jun 1998, David Gould wrote:
>
> > Oh, ok. Some suggestions have been made that the postmaster would open a
> > connection to its own backend to do queries. I was responding to this.
> > I agree that we should just store the information in shared memory.
>
> How does one get a history for long term monitoring and statistics
> by storing in shared memory?
>
> Marc G. Fournier
> Systems Administrator @ hub.org
> primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org

My thought was a circular event buffer which could provide short term
history. If someone wanted to store long term history (most sites probably
won't, but I agree it can be useful), they would have an application which
queried the short term history and saved it to whatever long term history
they wanted. Eg:

    FOREVER {
        sleep(1);
        insert into long_term_hist
            select * from pg_eventlog where event_num > highest_seen_so_far;
    }

Obviously some details need to be worked out to make sure no history is
ever lost (if that is important). But the basic mechanism is general and
useful for many purposes.

-dg

David Gould            dg@illustra.com           510.628.3783 or 510.305.9468
Informix Software      300 Lakeside Drive        Oakland, CA 94612
  - A child of five could understand this! Fetch me a child of five.
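The circular-buffer-plus-poller idea above can be sketched concretely. This
is a minimal illustration with an in-memory list standing in for the shared
memory buffer and a table; the names (EventLog, highest_seen, long_term_hist)
are invented for the example, and it also shows where history can be lost if
the reader falls behind the writer.

```python
# Minimal sketch of a bounded (circular) event buffer with a polling reader.

class EventLog:
    def __init__(self, capacity):
        self.capacity = capacity
        self.events = []          # (event_num, payload), oldest first
        self.next_num = 0         # monotonically increasing event number

    def append(self, payload):
        self.events.append((self.next_num, payload))
        self.next_num += 1
        if len(self.events) > self.capacity:
            # Overwrite the oldest entry: history is lost if readers
            # fall more than `capacity` events behind.
            self.events.pop(0)

    def since(self, event_num):
        # Analogue of: SELECT * FROM pg_eventlog WHERE event_num > n
        return [e for e in self.events if e[0] > event_num]

log = EventLog(capacity=4)
highest_seen = -1
long_term_hist = []

for i in range(6):                # writer produces 6 events into a 4-slot buffer
    log.append(f"event-{i}")

# One iteration of the polling loop from the message above:
new = log.since(highest_seen)
long_term_hist.extend(new)
if new:
    highest_seen = new[-1][0]
```

After this single poll, events 0 and 1 are already gone from the buffer, so
`long_term_hist` only holds events 2 through 5 -- the detail David alludes to
about making sure no history is ever lost.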
Bruce Momjian wrote:
>
> > Strangely, the shared memory does not share a copy of the system catalog
> > cache. This seems like a real misfeature, as the catalog data is very
> > useful to all the backends.
>
> On TODO list. Vadim wants to do this, perhaps for 6.4 (not sure):
>
> 	* Shared catalog cache, reduce lseek()'s by caching table size in shared area

Yes, for 6.4...

Vadim
>
> Bruce Momjian wrote:
> >
> > > Strangely, the shared memory does not share a copy of the system catalog
> > > cache. This seems like a real misfeature, as the catalog data is very
> > > useful to all the backends.
> >
> > On TODO list. Vadim wants to do this, perhaps for 6.4 (not sure):
> >
> > 	* Shared catalog cache, reduce lseek()'s by caching table size in shared area
>
> Yes, for 6.4...

Can you share any other 6.4 plans with us?

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
>
> On Thu, 4 Jun 1998, David Gould wrote:
>
> > Oh, ok. Some suggestions have been made that the postmaster would open a
> > connection to its own backend to do queries. I was responding to this.
> > I agree that we should just store the information in shared memory.
>
> How does one get a history for long term monitoring and statistics
> by storing in shared memory?
>
> Marc G. Fournier
> Systems Administrator @ hub.org
> primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org

Why not simply append history lines to a normal log file? This way you
don't have the overhead of accessing tables and can do real-time processing
of the data with a simple tail -f on the file.
I use this trick to monitor the log file written by 30 backends and it works
fine for me.

--
Massimo Dal Zotto

+----------------------------------------------------------------------+
|  Massimo Dal Zotto               e-mail:  dz@cs.unitn.it             |
|  Via Marconi, 141                phone:   ++39-461-534251            |
|  38057 Pergine Valsugana (TN)    www:     http://www.cs.unitn.it/~dz/|
|  Italy                           pgp:     finger dz@tango.cs.unitn.it|
+----------------------------------------------------------------------+
>
> >
> > On Thu, 4 Jun 1998, David Gould wrote:
> >
> > > Oh, ok. Some suggestions have been made that the postmaster would open a
> > > connection to its own backend to do queries. I was responding to this.
> > > I agree that we should just store the information in shared memory.
> >
> > How does one get a history for long term monitoring and statistics
> > by storing in shared memory?
> >
> > Marc G. Fournier
> > Systems Administrator @ hub.org
> > primary: scrappy@hub.org         secondary: scrappy@{freebsd|postgresql}.org
>
> Why not simply append history lines to a normal log file? This way you
> don't have the overhead of accessing tables and can do real-time processing
> of the data with a simple tail -f on the file.
> I use this trick to monitor the log file written by 30 backends and it works
> fine for me.

I agree. We have more important items to address.

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)
>
> On Thu, 4 Jun 1998, David Gould wrote:
>
> > Oh, ok. Some suggestions have been made that the postmaster would open a
> > connection to its own backend to do queries. I was responding to this.
> > I agree that we should just store the information in shared memory.
>
> How does one get a history for long term monitoring and statistics
> by storing in shared memory?
>
> Marc G. Fournier
> Systems Administrator @ hub.org
> primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org

> Why not simply append history lines to a normal log file? This way you
> don't have the overhead of accessing tables and can do real-time processing
> of the data with a simple tail -f on the file.
> I use this trick to monitor the log file written by 30 backends and it works
> fine for me.
>
> --
> Massimo Dal Zotto

I was going to suggest this too, but didn't want to be too much of a
spoilsport.

-dg

David Gould            dg@illustra.com           510.628.3783 or 510.305.9468
Informix Software  (No, really)       300 Lakeside Drive   Oakland, CA 94612
"Don't worry about people stealing your ideas. If your ideas are any good,
you'll have to ram them down people's throats." -- Howard Aiken