Re: PGC_SIGHUP shared_buffers?
From | Robert Haas |
---|---|
Subject | Re: PGC_SIGHUP shared_buffers? |
Date | |
Msg-id | CA+TgmoZpETxNzfFwc-Q-LwR76KLX==JKYrnAGO-q7pJe4GcJsw@mail.gmail.com |
In reply to | Re: PGC_SIGHUP shared_buffers? (Andres Freund <andres@anarazel.de>) |
List | pgsql-hackers |
On Mon, Feb 19, 2024 at 2:05 AM Andres Freund <andres@anarazel.de> wrote:
> We probably should address that independently of making shared_buffers
> PGC_SIGHUP. The queue gets absurdly large once s_b hits a few GB. It's not
> that much memory compared to the buffer blocks themselves, but a sync queue of
> many millions of entries just doesn't make sense. And a few hundred MB for
> that isn't nothing either, even if it's just a fraction of the space for the
> buffers. It makes checkpointer more susceptible to OOM as well, because
> AbsorbSyncRequests() allocates an array to copy all requests into local
> memory.

Sure, that could just be capped, if it makes sense. Although given the thrust of this discussion, it might be even better to couple it to something other than the size of shared_buffers.

> I'd say the vast majority of postgres instances in production run with less
> than 1GB of s_b. Just because numbers wise the majority of instances are
> running on small VMs and/or many PG instances are running on one larger
> machine. There are a lot of instances where the total available memory is
> less than 2GB.

Whoa. That is not my experience at all. If I've ever seen such a small system since working at EDB (since 2010!) it was just one where the initdb-time default was never changed.

I can't help wondering if we should have some kind of memory_model GUC, measured in T-shirt sizes or something. We've coupled a bunch of things to shared_buffers mostly as a way of distinguishing small systems from large ones. But if we want to make shared_buffers dynamically changeable and we don't want to make all that other stuff dynamically changeable, decoupling those calculations might be an important thing to do.

On a really small system, do we even need the ability to dynamically change shared_buffers at all? If we do, then I suspect the granule needs to be small. But does someone want to take a system with <1GB of shared_buffers and then scale it way, way up?
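To make the "could just be capped" idea concrete, here is a minimal sketch in C of draining a sync-request queue in fixed-size batches instead of allocating one array sized to the whole queue. The identifiers (SyncRequest, MAX_ABSORB_BATCH, absorb_in_batches) are invented for illustration and are not PostgreSQL's actual names:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of capping the absorb path: drain the queue in
 * fixed-size batches rather than copying every pending request into one
 * large array.  All names below are invented for illustration. */

#define MAX_ABSORB_BATCH 4      /* real code would use a much larger cap */

typedef struct SyncRequest
{
    int         ftag;           /* stand-in for the real payload */
} SyncRequest;

/* Process nrequests queued entries, copying at most MAX_ABSORB_BATCH
 * into a fixed local buffer per iteration, so peak memory use no longer
 * scales with queue length.  Returns the number of batches used. */
static int
absorb_in_batches(const SyncRequest *queue, int nrequests)
{
    SyncRequest local[MAX_ABSORB_BATCH];
    int         done = 0;
    int         batches = 0;

    while (done < nrequests)
    {
        int         n = nrequests - done;

        if (n > MAX_ABSORB_BATCH)
            n = MAX_ABSORB_BATCH;
        /* In the server, the copy would happen while holding the queue
         * lock and the per-entry processing after releasing it. */
        memcpy(local, queue + done, n * sizeof(SyncRequest));
        done += n;
        batches++;
    }
    return batches;
}
```

With a cap of 4, nine requests are absorbed in three passes over a buffer whose size is fixed at compile time, regardless of how large shared_buffers (and hence the queue) is.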
I suppose it would be nice to have the option. But you might have to make some choices, like pick either a 16MB granule or a 128MB granule or a 1GB granule at startup time and then stick with it? I don't know, I'm just spitballing here, because I don't know what the real design is going to look like yet.

> > Don't you have to still move buffers entirely out of the region you
> > want to unmap?
>
> Sure. But you can unmap at the granularity of a hardware page (there is some
> fragmentation cost on the OS / hardware page table level
> though). Theoretically you could unmap individual 8kB pages.

I thought there were problems, at least on some operating systems, if the address space mappings became too fragmented. At least, I wouldn't expect that you could use huge pages for shared_buffers and still unmap little tiny bits. How would that even work?

--
Robert Haas
EDB: http://www.enterprisedb.com