Re: [GENERAL] huge table occupation after updates

From: Adrian Klaver
Subject: Re: [GENERAL] huge table occupation after updates
Date:
Msg-id: f302a912-5139-90db-010d-01e579c5b5e2@aklaver.com
In reply to: Re: [GENERAL] huge table occupation after updates  (Francisco Olarte <folarte@peoplecall.com>)
List: pgsql-general
On 12/10/2016 09:30 AM, Francisco Olarte wrote:
> A couple of things first.
>
> 1.- This list encourages inline replying, editing the text, and frowns
> upon top posting.
>
> 2.- Your HTML formatting with such a small font size makes it harder for me
> ( and I can assume some others ) to read your messages properly.

+1. I either had to Ctrl + or put the 'readers' on:)

>
> If you want to discourage people from replying to you, keep doing the two things above.
>
> On Sat, Dec 10, 2016 at 3:25 PM, Tom DalPozzo <t.dalpozzo@gmail.com> wrote:
>> you're right, VACUUM FULL recovered the space, completely.
>
> Well, it always does. ;-)
>
>> So, at this point I'm worried about my needs.
>> I cannot issue vacuum full as I read it locks the table.
>
> Well, first hint of your needs. Bear in mind VACUUM FULL can be very
> fast on small tables ( e.g., if you have the 1.5 MB table, do 2000
> updates and then a VACUUM FULL, it will be very fast; time it ).
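
Just to illustrate timing it from psql (the table name below is only a
placeholder for whatever your test table is called):

  \timing on
  VACUUM FULL mytable;   -- on a ~1.5 MB table this should come back quickly
  \timing off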
>
>> In my DB, I (would) need to have a table with one bigint id field+ 10 bytea
>> fields, 100 bytes long each (more or less, not fixed).
>> 5/10000 rows maximum, but let's say 5000.
>> As for traffic, I can assume 10000 updates per row per day (spread over groups
>> of hours; each update involving two of those fields, randomly).
>> Also rows are chosen randomly (in my test I used a block of 2000 just to try
>> one possibility).
>> So, it's a total of 50 million updates per day, hence (50 million * 100
>> bytes * 2 fields updated) 10 Gbytes net per day.
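
For reference, a minimal sketch of a table matching that description (all
names here are made up):

  CREATE TABLE state_table (
      id  bigint PRIMARY KEY,
      f1  bytea, f2  bytea, f3  bytea, f4  bytea, f5  bytea,
      f6  bytea, f7  bytea, f8  bytea, f9  bytea, f10 bytea
  );
  -- ~5000 rows, each bytea roughly 100 bytes, two of the ten
  -- columns rewritten per update, rows picked at random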
>
> Not at all. That's the volume of updated data; you must multiply by
> the ROW size, not just the changed size: in your case 50M * 1100 ( to
> have some wiggle room ), i.e. 55 Gbytes.
>
> But this is the UPPER BOUND you asked for. Not the real one.
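
Spelled out: 5000 rows * 10000 updates per row per day = 50,000,000 updates
per day, and each update writes a whole new row version of roughly 1,100
bytes, so 50,000,000 * 1,100 bytes ~= 55 GB of dead row versions per day if
nothing ever reclaims the space.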
>
>> I'm afraid it's not possible, according to my results.
>
> It certainly is. You can set a very aggressive autovacuum schedule for
> the table, or even better, you may vacuum AFTER each hourly update.
> This will mark dead tuples for reuse. It will not be as fast, but it
> can certainly be fast enough.
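
Something along these lines, for example (table name and numbers are only
illustrative starting points, not tuned recommendations):

  -- per-table autovacuum tuning
  ALTER TABLE state_table SET (
      autovacuum_vacuum_scale_factor = 0.0,
      autovacuum_vacuum_threshold    = 10000
  );

  -- or simply a plain (non-FULL) vacuum right after each hourly batch
  VACUUM state_table;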
>
> And, if you only update once an hour, you may try other tricks ( like
> copying to a temp table, truncating the original and inserting the temp
> table back into the original; I fear this will lock the table too, but
> only for a very short time, and your readers may well tolerate it ).
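
Roughly like this (again, the table name is just an example; TRUNCATE takes
an ACCESS EXCLUSIVE lock that is held until COMMIT, but on a table this
small the whole transaction should be quick):

  BEGIN;
  CREATE TEMP TABLE state_copy AS SELECT * FROM state_table;
  TRUNCATE state_table;
  INSERT INTO state_table SELECT * FROM state_copy;
  DROP TABLE state_copy;
  COMMIT;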
>
> Yours seems a special app with special needs; try a few options, measure,
> it is certainly possible.
>
> Francisco Olarte.
>
>


--
Adrian Klaver
adrian.klaver@aklaver.com

