Re: Data files became huge with no apparent reason

From: Jochem van Dieten
Subject: Re: Data files became huge with no apparent reason
Date:
Msg-id: 3D6CF6A6.3090308@oli.tudelft.nl
In reply to: Data files became huge with no apparent reason  (Dario Fumagalli <dfumagalli@tin.it>)
List: pgsql-general
Dario Fumagalli wrote:
>
> Now, the questions are:
> - How is this exceptional growth possible (N.B. the database server
> has all the default values set in its configuration files - i.e. it
> was not "optimized")?

Too many updates without vacuuming.
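
To make the mechanism concrete, here is a minimal sketch (the "orders"
table and its columns are made up for illustration): under MVCC every
UPDATE writes a new row version and leaves the old one behind as a dead
tuple, and only VACUUM makes that space reusable.

    -- every UPDATE leaves the old row version behind as a dead tuple;
    -- without VACUUM these dead tuples accumulate and the files grow
    UPDATE orders SET status = 'shipped' WHERE order_id = 42;

    -- a plain VACUUM marks the dead tuples as reusable space; run it
    -- regularly (e.g. nightly) so the data files stop growing
    VACUUM ANALYZE orders;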


> - More urgently: why does it now take 30 seconds to perform a select
> count(*) on 900 records (no other clients connected), while in the
> past it was almost instantaneous?

Because count(*) requires a full table scan, and the table, dead tuples
included, is now pretty large on disk.
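
You can confirm this with EXPLAIN (the table name is hypothetical, and
the exact output format varies by version); the sequential scan has to
read every page of the bloated file:

    EXPLAIN SELECT count(*) FROM orders;
    -- Aggregate  (cost=...)
    --   ->  Seq Scan on orders  (cost=...)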


> - Why does the backend crash immediately if I try to VACUUM (ANALYZE)
> it? I think the backend claims additional disk space for this operation
> and fills the disk again, but I'm not sure.
> - And last, but not least... is it possible to restore the situation
> without losing data (the backup is 3 weeks old)? I'm able to start the
> daemon and perform SQL operations, but I don't know how to make
> PostgreSQL release the disk space after I dump the database in order
> to reload it. And I fear an initdb will destroy the db users and their
> privileges.

What I would do is:
1. Backup.
2. Drop any index that is not unique.
3. Run "VACUUM VERBOSE tablename" on the smallest table, then the next
smallest, and so on. If you are using PostgreSQL 7.2.x, use "VACUUM FULL
VERBOSE tablename" instead. (A concrete session is sketched below.)

In the end you will either have vacuumed them all and can recreate your
indexes, or you will have an error message to post here.
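
A rough sketch of that sequence; the database, table, index, and column
names are all hypothetical:

    $ pg_dump mydb > mydb.backup          # 1. backup first

    -- 2. drop non-unique indexes (VACUUM needs working space for
    --    them too, and they are cheap to recreate afterwards):
    DROP INDEX smalltable_idx;

    -- 3. vacuum table by table, smallest first:
    VACUUM VERBOSE smalltable;
    -- on PostgreSQL 7.2.x, to actually return space to the OS:
    VACUUM FULL VERBOSE smalltable;

    -- once everything is vacuumed, recreate the dropped indexes:
    CREATE INDEX smalltable_idx ON smalltable (some_column);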

Jochem

