Re: reducing statistics write overhead
| From | Euler Taveira de Oliveira |
|---|---|
| Subject | Re: reducing statistics write overhead |
| Date | |
| Msg-id | 49794753.90902@timbira.com |
| In reply to | Re: reducing statistics write overhead (Alvaro Herrera <alvherre@commandprompt.com>) |
| Responses | Re: reducing statistics write overhead |
| List | pgsql-hackers |
Alvaro Herrera wrote:
> Euler Taveira de Oliveira wrote:
>> Alvaro Herrera wrote:
>>> This could be solved if the workers kept the whole history of tables
>>> that they have vacuumed. Currently we keep only a single table (the one
>>> being vacuumed right now). I proposed writing these history files back
>>> when workers were first implemented, but the idea was shot down before
>>> flying very far because it was way too complex (the rest of the patch
>>> was more than complex enough.) Maybe we can implement this now.
>>>
>> [I don't remember your proposal...] Isn't it just a matter of adding a
>> circular linked list to AutoVacuumShmemStruct? Of course some locking
>> mechanism needs to exist to guarantee that we don't write at the same time.
>> The size of this linked list would be scaled by a startup-time GUC or a
>> reasonable fixed value.
>
> Well, the problem is precisely how to size the list. I don't like the
> idea of keeping an arbitrary number in memory; it adds another
> mostly-useless tunable that we'll need to answer questions about for all
> eternity.
>
[Poking the code a little...] You're right. We could do that, but it isn't an
elegant solution. What about tracking that information in table_oids?
struct table_oids
{
	bool	skipit;		/* initially false */
	Oid		relid;
};
--
Euler Taveira de Oliveira
http://www.timbira.com/