vacuum problems

From: Mark
Subject: vacuum problems
Date:
Msg-id: 01071109160500.14673@mark.ldssingles.com
In reply to: Re: SOMAXCONN (was Re: Solaris source code)  (ncm@zembu.com (Nathan Myers))
Responses: Re: vacuum problems  (Mark <mark@ldssingles.com>)
List: pgsql-hackers
Quick rundown of our configuration:
Red Hat 7.1 (no changes or extras added by us)
PostgreSQL 7.1.2 and CVS HEAD from 07/10/2001
3.8 GB database size

I included two pgsql versions because this happens on both.

Here's the problem we're having:

We run vacuumdb from the server on the entire database.  Some large tables 
are vacuumed very quickly, but the vacuum process hangs, or takes more than a 
few hours, on one specific table (we have never let it finish).  Vacuum works 
quickly on a table (loginhistory) with 2.8 million records, but is extremely 
slow on a table (inbox) with 1.1 million records, even though the inbox table 
is actually larger on disk than loginhistory.
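(For what it's worth, the on-disk size comparison above can be checked directly 
against the system catalog.  A sketch, using the table names from this report; 
relpages counts 8 KB pages and is only as fresh as the last VACUUM or ANALYZE:)

```sql
-- Compare on-disk size (in 8 KB pages) of the two tables.
-- relpages is updated by VACUUM/ANALYZE, so it may lag reality.
SELECT relname, relpages
FROM pg_class
WHERE relname IN ('loginhistory', 'inbox', 'msginbox');
```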

We've tried vacuuming the inbox table separately ('vacuum inbox' within 
psql), but this also takes hours (again, we have never let it complete; we 
need to use the database for development as well).
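(A sketch of the single-table vacuum mentioned above, run from psql; on 7.1 the 
VERBOSE option sends the per-stage progress report to the client as well as 
the server log, which makes it easier to see where it stalls:)

```sql
-- Vacuum only the problem table, reporting progress per stage.
VACUUM VERBOSE inbox;
```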

We noticed two things that seem significant in this situation:
The server logs the following:


DEBUG:  --Relation msginbox--
DEBUG:  Pages 129921: Changed 26735, reaped 85786, Empty 0, New 0; Tup 
1129861: Vac 560327, Keep/VTL 0/0, Crash 0, UnUsed 51549, MinLen 100,
MaxLen 2032; Re-using: Free/Avail. Space 359061488/359059332;
EndEmpty/Avail. Pages 0/85785. CPU 11.18s/5.32u sec.
DEBUG:  Index msginbox_pkey: Pages 4749; Tuples 1129861: Deleted 76360.
CPU 0.47s/6.70u sec.
DEBUG:  Index msginbox_fromto: Pages 5978; Tuples 1129861: Deleted 0.
CPU 0.37s/6.15u sec.
DEBUG:  Index msginbox_search: Pages 4536; Tuples 1129861: Deleted 0.
CPU 0.32s/6.30u sec.
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  XLogWrite: new log file created - consider increasing WAL_FILES
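(The repeated hint above refers to the wal_files parameter in postgresql.conf; 
pre-allocating WAL segments avoids creating them one at a time during a long 
operation.  A hedged sketch for 7.1.x; the value 8 is purely illustrative, and 
the setting takes effect on postmaster restart:)

```
# postgresql.conf (PostgreSQL 7.1.x) -- the value 8 is an illustrative guess
wal_files = 8        # pre-allocate this many 16 MB WAL segment files
```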

The last few lines (XLogWrite ...) repeat indefinitely.  With 7.1.2 this 
never stops unless we run out of disk space or cancel the query.  With CVS 
HEAD it also continues, though the log files no longer consume all disk 
space; we still have to cancel it, or it might run forever.

Perhaps we need to let it run to completion, but we suspect we might be doing 
something wrong, or that some of our data (we are converting from MS SQL 
Server) isn't friendly to vacuum.

The major issue we're facing with this is that any read or write access to 
the table being vacuumed times out (obviously because the table is still 
locked).  We plan to use PostgreSQL in our production service, but we can't 
until we get this resolved.

We're at a loss, not being familiar enough with PostgreSQL and its source 
code.  Can anyone please offer some advice or suggestions?

Thanks,

Mark

