Re: truncate a table instead of vaccum full when count(*) is 0
From | Heikki Linnakangas |
---|---|
Subject | Re: truncate a table instead of vaccum full when count(*) is 0 |
Date | |
Msg-id | 464052DB.30206@enterprisedb.com |
In reply to | Re: truncate a table instead of vaccum full when count(*) is 0 (Pomarede Nicolas <npomarede@corp.free.fr>) |
Responses | Re: truncate a table instead of vaccum full when count(*) is 0 (Pomarede Nicolas <npomarede@corp.free.fr>) |
List | pgsql-performance |
Pomarede Nicolas wrote:
> On Tue, 8 May 2007, Heikki Linnakangas wrote:
>> Pomarede Nicolas wrote:
>>> But for the data (dead rows), even running a vacuum analyze every day
>>> is not enough, and doesn't truncate some empty pages at the end, so
>>> the data size remains in the order of 200-300 MB, when only a few
>>> effective rows are there.
>>
>> For a table like that you should run VACUUM much more often than once
>> a day. Turn on autovacuum, or set up a cron script etc. to run it
>> every 15 minutes or so.
>
> Yes, I already do this on another spool table; I run a vacuum after
> processing it, but I wondered if there was another way to keep the disk
> size low for this table.

How much concurrent activity is there in the database? Running a vacuum
right after processing the table would not remove the deleted tuples if
there's another transaction running at the same time. Running the vacuum
a few minutes later might help with that. You should run VACUUM VERBOSE
to see how many non-removable dead tuples there are.

>>> Is there an easy way to do this under psql? For example, lock the
>>> table, do a count(*), if the result is 0 rows then truncate the
>>> table, then unlock the table (a kind of atomic 'truncate table if
>>> count(*) == 0').
>>>
>>> Would this work, and what would be the steps?
>>
>> It should work, just like you describe it, with the caveat that
>> TRUNCATE will remove any old row versions that might still be visible
>> to an older transaction running in serializable mode. It sounds like
>> it's not a problem in your scenario, but it's hard to say for sure
>> without seeing the application. Running vacuum more often is probably
>> a simpler and better solution, anyway.
>
> Shouldn't locking the table prevent this? I mean, if I try to get an
> exclusive lock on the table, shouldn't I get one only when there's no
> older transaction, and in that case can't I truncate the table safely,
> knowing that no one is accessing it due to the lock?
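The lock-then-truncate sequence discussed above could be sketched roughly as follows (a sketch only, not from the original mail; the table name spool is a placeholder, and the "if count is 0" decision has to be made by the application or a PL/pgSQL function, since plain SQL has no conditional):

```sql
BEGIN;
-- Take the strongest lock, so no other session can read or write the
-- table while we check whether it is empty.
LOCK TABLE spool IN ACCESS EXCLUSIVE MODE;

-- Check whether the table is empty; the application inspects the result.
SELECT count(*) FROM spool;

-- Only if the count was 0: reclaim the space immediately.
-- (Subject to the serializable-transaction caveat below.)
TRUNCATE spool;
COMMIT;
```

If the count is non-zero, the application would simply ROLLBACK (or COMMIT without truncating) and release the lock.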
Serializable transactions that started before the transaction that takes
the lock would need to see the old row versions:

Xact 1: BEGIN ISOLATION LEVEL SERIALIZABLE;
Xact 1: SELECT 1;  -- to take a snapshot, perform any query
Xact 2: DELETE FROM foo;
Xact 3: BEGIN;
Xact 3: LOCK TABLE foo;
Xact 3: SELECT COUNT(*) FROM foo;  -- sees the delete by xact 2, returns 0
Xact 3: TRUNCATE foo;
Xact 3: COMMIT;
Xact 1: SELECT COUNT(*) FROM foo;  -- returns 0, but because the
        -- transaction is in serializable mode, it should've still seen
        -- the rows deleted by xact 2

-- 
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com