Almost happy ending (from "Data files became huge with no apparent reason" thread)

From dfumagalli@tin.it
Subject Almost happy ending (from "Data files became huge with no apparent reason" thread)
Date
Msg-id 3D532F8A0000AA98@ims2f.cp.tin.it
Responses Re: Almost happy ending (from "Data files became huge with  (Bruce Momjian <pgman@candle.pha.pa.us>)
List pgsql-general
Hi all,

At last I received a new disk and was able to VACUUM ANALYZE.

The repaired database shrank from the initial 500 MB to 185 MB.
I'm very happy that the combination of a full disk and an unexpected server
shutdown wasn't enough to corrupt the data beyond recovery.
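
For the record, it was nothing fancier than the database-wide form below;
VERBOSE is optional but prints per-table figures as it goes:

-- database-wide vacuum; VERBOSE reports the pages and tuples handled per table
VACUUM VERBOSE ANALYZE;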

The next thing I did was to further reduce this rescued database. In fact,
the original data files took only 26 MB, not 185.

I checked the objects' file sizes (by looking at pg_class -> it's a bit
difficult to match the files up with those "numeric table names") and quickly
found that I had some 150 MB of index files!
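
In case anyone wants to repeat this, here is a rough sketch of the kind of
query that does the matching (relfilenode is the numeric file name on disk,
and relpages is counted in blocks, 8 KB with the default block size):

-- list tables ('r') and indexes ('i') by estimated on-disk size
-- relpages is only refreshed by VACUUM / ANALYZE
SELECT relname, relkind, relfilenode, relpages
FROM pg_class
WHERE relkind IN ('r', 'i')
ORDER BY relpages DESC;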

My first try was to go standalone and REINDEX ... FORCE the entire database,
but the rebuilt indexes still took the same space (I thought the reindex
would rebuild them from scratch, freeing unused index pages, but perhaps
I'm wrong, at least for PostgreSQL 7.1.1).
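
Roughly what I ran, with placeholder names (syntax as I understand it from
the 7.1 REINDEX reference; the database-wide form needs a standalone backend
there):

-- from the standalone backend, database-wide:
REINDEX DATABASE my_database FORCE;
-- or per table:
REINDEX TABLE my_table FORCE;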

So I dropped and recreated all the indexes except the primary keys (those are
the auto-created *_pkey indexes). I did not drop the primary keys because I
fear it could break something about constraints. Now the database is 47 MB
and I'm pretty happy with it. If only I could safely drop and recreate the
primary keys, I'm sure the database would return to its original 26 MB.
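
Per index it was just the usual pair of statements, e.g. (hypothetical table
and index names):

-- drop and rebuild one of the non-primary-key indexes
DROP INDEX orders_customer_idx;
CREATE INDEX orders_customer_idx ON orders (customer_id);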

Now, three simple questions to the pros on this mailing list:
- Is there a cleaner way of recreating packed indexes?
- May I drop and recreate the primary key indexes without breaking rules,
triggers, constraints and sequences?
- We are evaluating PostgreSQL for a mission-critical application, currently
managed by an Informix IDS 2000 server on a Sun Ultra 420 running Solaris 8,
with RAID disks and a SAN for storage. Currently there are about 1000
concurrent users (each with one to three ODBC sessions), with a CPU load of
about 15%. There are 30 databases: some have only a few thousand records,
while others have a thousand tables, some of them totalling 10M+ records,
and need sub-selects, joins and transactions (no constraints for now). Is
PostgreSQL able to manage this amount of data *RELIABLY*, 24/7/365? Does the
newest version come with some handy tool like Informix onstat (it reports
user sessions and executed queries given a connection id; see my sketch
after these questions)? Does it use all the available processors or only
one? How can we work around the periodic need for VACUUM on such big tables
(they are heavily modified every day) without disrupting server availability?
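
On the onstat point, what I have in mind is something queryable. I gather
that 7.2's statistics collector exposes a pg_stat_activity view; if I
understand it correctly, something like this would cover that use case:

-- needs the statistics collector; current_query is only filled in when
-- stats_command_string is enabled in postgresql.conf
SELECT procpid, usename, current_query
FROM pg_stat_activity;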

Best regards,
Dario Fumagalli


