Re: PATCH: Split stats file per database WAS: autovacuum stress-testing our system

From: Tomas Vondra
Subject: Re: PATCH: Split stats file per database WAS: autovacuum stress-testing our system
Date:
Msg-id: 51227642.8080200@fuzzy.cz
In response to: Re: PATCH: Split stats file per database WAS: autovacuum stress-testing our system  (Alvaro Herrera <alvherre@2ndquadrant.com>)
Responses: Re: PATCH: Split stats file per database WAS: autovacuum stress-testing our system  (Alvaro Herrera <alvherre@2ndquadrant.com>)
List: pgsql-hackers
On 18.2.2013 16:50, Alvaro Herrera wrote:
> Tomas Vondra wrote:
> 
>> So, here's v10 of the patch (based on the v9+v9a), that implements the
>> approach described above.
>>
>> It turned out to be much easier than I expected (basically just a
>> rewrite of the pgstat_read_db_statsfile_timestamp() function).
> 
> Thanks.  I'm giving this another look now.  I think the new code means
> we no longer need the first_write logic; just let the collector idle
> until we get the first request.  (If for some reason we considered that
> we should really be publishing initial stats as early as possible, we
> could just do a write_statsfiles(allDbs) call before entering the main
> loop.  But I don't see any reason to do this.  If you do, please speak
> up.)
> 
> Also, it seems to me that the new pgstat_db_requested() logic is
> slightly bogus (in the "inefficient" sense, not the "incorrect" sense):
> we should be comparing the timestamp of the request vs.  what's already
> on disk instead of blindly returning true if the list is nonempty.  If
> the request is older than the file, we don't need to write anything and
> can discard the request.  For example, suppose that backend A sends a
> request for a DB; we write the file.  If then quickly backend B also
> requests stats for the same DB, with the current logic we'd go write the
> file, but perhaps backend B would be fine with the file we already
> wrote.

Hmmm, you're probably right.

> Another point is that I think there's a better way to handle nonexistent
> files, instead of having to read the global file and all the DB records
> to find the one we want.  Just try to read the database file, and only
> if that fails try to read the global file and compare the timestamp (so
> there might be two timestamps for each DB, one in the global file and
> one in the DB-specific file.  I don't think this is a problem).  The
> point is avoid having to read the global file if possible.

I don't think that's a good idea. Keeping the timestamps in one place is
a significant simplification, and I don't think the change is worth the
additional complexity. And the overhead is minimal.

So my vote on this change is -1.

> 
> So here's v11.  I intend to commit this shortly.  (I wanted to get it
> out before lunch, but I introduced a silly bug that took me a bit to
> fix.)

;-)

Tomas


