Re: Less than ideal error reporting in pg_stat_statements

From Peter Geoghegan
Subject Re: Less than ideal error reporting in pg_stat_statements
Date
Msg-id CAM3SWZTOjkP9EJsaUcvhpHxXfdCyXbCcFuTDNT4Qct90ZMU8Hw@mail.gmail.com
In reply to Re: Less than ideal error reporting in pg_stat_statements  (Peter Geoghegan <pg@heroku.com>)
Responses Re: Less than ideal error reporting in pg_stat_statements  (Jim Nasby <Jim.Nasby@BlueTreble.com>)
Re: Less than ideal error reporting in pg_stat_statements  (Peter Geoghegan <pg@heroku.com>)
Re: Less than ideal error reporting in pg_stat_statements  (Peter Geoghegan <pg@heroku.com>)
List pgsql-hackers
On Tue, Sep 22, 2015 at 5:01 PM, Peter Geoghegan <pg@heroku.com> wrote:
> My guess is that this very large query involved a very large number of
> constants, possibly contained inside an "IN ( )". Slight variants of
> the same query, that a human would probably consider to be equivalent,
> have caused artificial pressure on garbage collection.

I could write a patch to do compaction in-place. The basic idea is
that there'd be a slow path in the event of an OOM-like condition
(i.e. an actual OOM, or when the MaxAllocSize limitation is violated)
that first scans through entries, and determines the exact required
buffer size for every non-garbage query text. As this
iteration/scanning occurs, the entries' offsets in shared memory are
rewritten assuming that the first entry starts at 0, the second at 0 +
length of first + 1 (for NUL sentinel byte), and so on. We then
allocate a minimal buffer, lseek() and copy into the buffer, so that
the expectation of finding query texts at those offsets is actually
met. Finally, unlink() old file, create new one, and write new buffer
out. I think I wanted to do things that way originally.
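To make that concrete, here's a rough standalone sketch of what I have
in mind for the slow path. To be clear, "Entry", compact_qtexts() and
the plain POSIX file handling are simplified stand-ins I'm using for
illustration, not the actual pg_stat_statements structures, and pread()
takes the place of the lseek()-and-copy step:

/*
 * Standalone sketch only -- "Entry" and the file handling below are
 * simplified stand-ins for the real pg_stat_statements hash entries
 * and query-text file, and pread() stands in for lseek() + read().
 */
#include <fcntl.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

typedef struct Entry
{
    off_t   query_offset;   /* offset of this entry's text in the file */
    size_t  query_len;      /* text length, not counting the NUL byte */
    int     is_garbage;     /* entry's text is no longer referenced */
} Entry;

/*
 * Compact the query texts from "oldpath" into a new file at "newpath",
 * keeping only the texts of non-garbage entries and rewriting each
 * surviving entry's offset to its position in the compacted file.
 * Returns 0 on success, -1 on failure (caller would then give up on
 * storing query texts rather than thrash).
 */
static int
compact_qtexts(Entry *entries, int nentries,
               const char *oldpath, const char *newpath)
{
    size_t  required = 0;
    size_t  pos = 0;
    char   *buf;
    int     fd;
    int     i;

    /* First pass: exact buffer size needed for the surviving texts */
    for (i = 0; i < nentries; i++)
    {
        if (!entries[i].is_garbage)
            required += entries[i].query_len + 1;  /* +1 for NUL sentinel */
    }

    buf = malloc(required > 0 ? required : 1);
    if (buf == NULL)
        return -1;

    fd = open(oldpath, O_RDONLY);
    if (fd < 0)
    {
        free(buf);
        return -1;
    }

    /*
     * Second pass: copy each surviving text from its old offset into the
     * buffer, then point the entry at its new, compacted offset.
     */
    for (i = 0; i < nentries; i++)
    {
        if (entries[i].is_garbage)
            continue;
        if (pread(fd, buf + pos, entries[i].query_len,
                  entries[i].query_offset) != (ssize_t) entries[i].query_len)
        {
            close(fd);
            free(buf);
            return -1;
        }
        buf[pos + entries[i].query_len] = '\0';
        entries[i].query_offset = (off_t) pos;
        pos += entries[i].query_len + 1;
    }
    close(fd);

    /* Finally, replace the old file with the minimal, compacted one */
    unlink(oldpath);
    fd = open(newpath, O_CREAT | O_WRONLY | O_TRUNC, 0600);
    if (fd < 0 || write(fd, buf, required) != (ssize_t) required)
    {
        if (fd >= 0)
            close(fd);
        free(buf);
        return -1;
    }
    close(fd);
    free(buf);
    return 0;
}

The point being that the offsets rewritten in shared memory and the
offsets the texts actually land at in the new file come out of the same
walk over the entries, so they cannot disagree.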

If even that exact, minimal buffer size cannot be allocated, then ISTM
that the user is out of luck. That will be very rare in practice, but
should it occur we log the issue and give up on storing query texts
entirely, so as to avoid thrashing while still giving the user
something to go on. This new code path is never hit until a garbage
collection is required, so hopefully the garbage created was not a
pathological issue with a weird workload, but rather something that
will not recur for a very long time.

That seems to me to be better than getting into the business of
deciding how long of a query text is too long.

I'm doubtful that this had anything to do with MaxAllocSize. You'd
certainly need a lot of bloat to be affected by that in any way. I
wonder how high pg_stat_statements.max was set on this system, and
how long each query text was on average.
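(Back-of-the-envelope only: MaxAllocSize is a shade under 1GB. With
pg_stat_statements.max at its default of 5,000 and query texts averaging
1kB, the texts themselves come to about 5MB, so even order-of-magnitude
bloat from near-duplicate texts stays far below the limit; you'd need
mean query texts on the order of 100kB before MaxAllocSize becomes a
plausible explanation.)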

-- 
Peter Geoghegan
