Re: Re: [HACKERS] Re: [QUESTIONS] Business cases

From: Mattias Kregert
Subject: Re: Re: [HACKERS] Re: [QUESTIONS] Business cases
Date:
Msg-id: 34C34E4C.3440D5B0@algonet.se
In response to: Re: [QUESTIONS] Business cases (Tom <tom@sdf.com>)
Responses: Re: Re: [HACKERS] Re: [QUESTIONS] Business cases (Bruce Momjian <maillist@candle.pha.pa.us>)
List: pgsql-hackers
Tom wrote:
> > >   How are large users handling the vacuum problem?  VACUUM locks other
> > > users out of tables for too long.  I don't need a lot of performance (a few
> > > queries per minute), but I need to be able to handle queries non-stop.
> >
> >       Not sure, but this one is about the only major thing that is continuing
> > to bother me :(  Is there any method of improving this?
>
>   vacuum seems to do a _lot_ of stuff.  It seems that crash recovery
> features, and maintenance features should be separated.  I believe the
> only required maintenance features are recovering space used by deleted
> tuples and updating stats?  Both of these shouldn't need to lock the
> database for long periods of time.

Would it be possible to add an option to VACUUM, like a max number
of blocks to sweep? Or is this impossible because of the way PG works?

Would it be possible to (for example) compact data near the front of
the file so that one block becomes free near the beginning, and then
move rows from the last block into this newly freed block?
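To make the idea concrete, here is a minimal sketch of that tail-to-front
compaction, with a cap on the number of rows moved (so a single pass could
be bounded, as suggested above). This is purely illustrative and not how
PostgreSQL's heap or VACUUM actually works; all names (`compact_tail`,
`BLOCK_CAPACITY`) are invented for the example.

```python
# Illustrative sketch only, NOT PostgreSQL internals: a "file" is a list
# of blocks, each block a list of live rows with a fixed capacity.
# Rows are moved from the tail block into free slots in earlier blocks,
# and empty tail blocks are truncated away.

BLOCK_CAPACITY = 4  # hypothetical rows per block

def compact_tail(blocks, max_moves):
    """Move up to max_moves rows from the tail into earlier free slots;
    truncate tail blocks that become empty. Returns rows actually moved."""
    moves = 0
    front = 0  # scan position for the first block with free space
    while blocks and moves < max_moves:
        tail = blocks[-1]
        if not tail:            # empty tail block: truncate it
            blocks.pop()
            continue
        # advance to the first earlier block that still has a free slot
        while front < len(blocks) - 1 and len(blocks[front]) >= BLOCK_CAPACITY:
            front += 1
        if front >= len(blocks) - 1:
            break               # no earlier free space left
        blocks[front].append(tail.pop())
        moves += 1
    while blocks and not blocks[-1]:
        blocks.pop()
    return moves
```

For example, with `blocks = [[1, 2], [3], [4, 5, 6, 7], [8, 9]]`, an
unbounded call packs the tail rows forward and shrinks the file from four
blocks to three; with a small `max_moves` the pass simply stops early,
which is the point of bounding the work per VACUUM run.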

-- To limit the number of rows to compact:
psql=> VACUUM MoveMax 1000; -- move max 1000 rows

-- To limit the time used for vacuuming:
psql=> VACUUM MaxSweep 1000; -- Sweep max 1000 blocks

Could this work with the current method of updating statistics?


*** Btw, why doesn't PG update statistics when inserting/updating?


/* m */

In the pgsql-hackers list, by date sent:

Previous
From: "Vadim B. Mikheev"
Date:
Message: Re: subselects
Next
From: jwieck@debis.com (Jan Wieck)
Date:
Message: Re: [HACKERS] *Major* Patch for PL