Re: drop/truncate table sucks for large values of shared buffers

Amit Kapila <amit.kapila16@gmail.com> writes:
> I have looked into it and found that the main reason for such
> behaviour is that those operations traverse the whole of
> shared_buffers, and it seems to me that we don't need that,
> especially for not-so-big tables.  We can optimize that path
> by looking up, in the buffer mapping table, the pages that
> exist in shared_buffers, for the case where the table size is
> less than some threshold (say 25%) of shared_buffers.
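For concreteness, a minimal sketch (not an actual patch) of the probing
idea being proposed: for a relation believed to span at most nblocks
pages, look each page up in the buffer mapping hash instead of sweeping
all of shared_buffers.  BufTableHashCode, BufTableLookup,
BufMappingPartitionLock, GetBufferDescriptor, LockBufHdr and
InvalidateBuffer are existing buffer-manager internals (exact macro
names and signatures vary by release); DropSmallRelBuffers itself is a
hypothetical helper shown only for illustration, and it would have to
live in bufmgr.c to reach InvalidateBuffer.

    static void
    DropSmallRelBuffers(RelFileNode rnode, ForkNumber forkNum,
                        BlockNumber firstDelBlock, BlockNumber nblocks)
    {
        BlockNumber blkno;

        for (blkno = firstDelBlock; blkno < nblocks; blkno++)
        {
            BufferTag   tag;
            uint32      hash;
            LWLock     *partitionLock;
            int         buf_id;
            BufferDesc *bufHdr;
            uint32      buf_state;

            /* Build the tag for this (relation, fork, block) and hash it. */
            INIT_BUFFERTAG(tag, rnode, forkNum, blkno);
            hash = BufTableHashCode(&tag);
            partitionLock = BufMappingPartitionLock(hash);

            /* Probe the buffer mapping table; most probes will miss. */
            LWLockAcquire(partitionLock, LW_SHARED);
            buf_id = BufTableLookup(&tag, hash);
            LWLockRelease(partitionLock);

            if (buf_id < 0)
                continue;       /* page is not cached, nothing to do */

            bufHdr = GetBufferDescriptor(buf_id);

            /*
             * Recheck under the buffer header lock: the buffer could have
             * been evicted and reused for some other page once the mapping
             * partition lock was released.
             */
            buf_state = LockBufHdr(bufHdr);
            if (BUFFERTAGS_EQUAL(bufHdr->tag, tag))
                InvalidateBuffer(bufHdr);   /* releases the header lock */
            else
                UnlockBufHdr(bufHdr, buf_state);
        }
    }

Note that the whole scheme rests on nblocks being a trustworthy upper
bound on the relation's size, which is exactly the sticking point below.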

I don't like this too much because it will fail badly if the caller
is wrong about the maximum possible page number for the table, which
seems not exactly far-fetched.  (For instance, remember those kernel bugs
we've seen that cause lseek to lie about the EOF position?)  It also
offers no hope of a fix for the other operations that scan the whole
buffer pool, such as DROP TABLESPACE and DROP DATABASE.

In the past we've speculated about fixing the performance of these things
by complicating the buffer lookup mechanism enough so that it could do
"find any page for this table/tablespace/database" efficiently.
Nobody's had ideas that seemed sane performance-wise though.

			regards, tom lane


