Discussion: regression database


regression database

From
kris pal
Date:
Hello,
 
I inherited a production Postgres database server which has a "Regression" database. The problem is that the size of the objects within the Regression database is increasing so much that the database has grown to several GB. (For example, pg_attribute and its index have grown to 2 GB each, and so on with a few other objects as well.)
 
I am not sure what the implications of completely deleting the 'Regression' database are (I don't need the regression testing). I found the documentation on how to run the regression tests, but can you tell me if there is documentation on how to take care of these space issues for the "Regression" database? Or is there a way to reduce the space used by the 'Regression' database?
 
I would appreciate your help in resolving this issue.
 
thanks,
Kris.
 
 



Re: regression database

From
Tom Lane
Date:
kris pal <kristhegambler@yahoo.com> writes:
> I am not sure what the implications of completely deleting the
> 'Regression' database (I don't need the regression testing).

Is it named "Regression" with a capital R?  If so, that's not the DB that
is made by the standard regression tests --- that is called "regression".

There is not any reason not to drop the standard regression database, if
you're not using it.  But there is no way that that DB would get that
large unless something in your installation is using it.  I'd suggest
you find out what.

            regards, tom lane

Re: regression database

From
kris pal
Date:
Hi Tom,
 
It is actually named 'regression'. I spoke with the person who built the server, but apparently he is not well versed in Postgres. He didn't explicitly create 'regression'; I guess he selected some default installation options.
 
I am not sure how to find out what is causing the 'regression' database to increase in size. Its size is not due to the tables within 'regression' but due to the system tables like "pg_attribute" within the 'regression' database. If you could point me to a resource that would help me understand how this 'regression' database is handled internally (what activity writes to this database), that would be great.
 
 
Please see below; the directory 2034862 corresponds to 'regression':
 
____________________________________________________
 
[postgres@dds base]$ du -sk /home/postgres/data/base/2034862
6236920 /home/postgres/data/base/2034862
[postgres@dds base]$
[postgres@dds base]$ cd /home/postgres/data/base/2034862
[postgres@dds 2034862]$ du -sk * | awk '{if ($1 > 512000)  print $1" --> "$2}'
1049604 --> 1249    (over 1 GB)
949484 --> 1249.1   (about 1 GB)
1049604 --> 16609
821104 --> 16609.1
The relname for the corresponding relfilenode:
--------------+-------------
 pg_attribute |        1249
 pg_attribute_relid_attnam_index |       16609
--------------+-------------
____________________________________________________
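The relname-to-relfilenode mapping above was presumably obtained with a pg_class lookup along these lines, run while connected to the database that owns directory 2034862 (the prompt name is illustrative):

regression=# SELECT relname, relfilenode FROM pg_class WHERE relfilenode IN (1249, 16609);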
 
 
So I am not sure why this is growing progressively.
Any feedback/ help is greatly appreciated.
 
thanks,
Kris Pal.



Re: regression database

From
kris pal
Date:
 
Actually I just noticed something. Previously, for some reason, I thought that data directory corresponded to 'regression'. But I double-checked it now with oid2name and it corresponds to our main database, say "dds".
 
But the issue still remains, in the sense that pg_attribute etc. are still huge and I have no idea why. This table is not one we created in our schema.
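For reference, the same directory-to-database mapping can also be read straight out of the catalogs, without oid2name (the prompt name is illustrative):

dds=# SELECT oid, datname FROM pg_database ORDER BY datname;

The oid values returned match the subdirectory names under $PGDATA/base/.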
 
thanks,
Kris Pal.



Re: regression database

From
Tom Lane
Date:
kris pal <kristhegambler@yahoo.com> writes:
> Actually I just noticed something. Previously for some reason, I
> thought the data directory corresponded to 'regression'. But I double
> checked it now with oid2name and it corresponds to our main database
> say "dds".

Ah, that makes more sense.

> But the issue still remains. In the sense that pg_attribute etc are
> still so huge and I have no idea why.

Probably because you've created and deleted a lot of tables (does your
app use temp tables a lot?).  If you aren't good about vacuuming the
system catalogs then you're going to get table bloat.  If you're using
something pre-7.4 then you may get index bloat even if you *are*
vacuuming regularly :-(.

My suggestions are to update to 7.4, if you're not there already
(if you are, you may need to do VACUUM FULL and REINDEX to get rid
of the bloat); then adopt a stricter regime of routine vacuuming.
And check your FSM settings.
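A minimal sketch of what such a routine vacuuming regime could look like, assuming a nightly cron job run as the postgres user against the main database "dds" (the schedule and the use of vacuumdb are illustrative, not something specified in this thread):

# crontab entry for the postgres user: database-wide VACUUM ANALYZE every night at 03:00
0 3 * * *  vacuumdb --analyze --quiet dds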

            regards, tom lane

Re: vacuum pg_attribute causes high load

From
kris pal
Date:
 
The server is running Postgres 7.4.1 on Linux.
I tried to do a "vacuum pg_attribute" but it's taking a while (I guess because of the size, about 2 GB) and the load on the server is going very high. Because of this the database server is not responding, so I killed the psql session.
 
1) Can you tell me if there is any other workaround to vacuum the pg_attribute system table? Like doing it in parts, so that it won't cause a high load.
 
2)
 
Also I want to REINDEX by doing:
----------------------------------------------------------------------------
drop index pg_attribute_relid_attnam_index;
vacuum pg_attribute;
create the index again;
vacuum analyze pg_attribute;
----------------------------------------------------------------------------
 
But I get: "ERROR:  permission denied: "pg_attribute_relid_attnam_index" is a system catalog"
 
* So how can I REINDEX this one?
 
3) The config file has:

#max_fsm_pages = 20000          # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1000       # min 100, ~50 bytes each
 
So I guess Postgres is using the defaults. Where can I find the current or default values? Do you know if I can find some reading material about FSM settings and default values?
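For what it's worth, the currently effective values can be read back from a running server with SHOW, from any psql session:

dds=# SHOW max_fsm_pages;
dds=# SHOW max_fsm_relations;

With the settings still commented out, these should report the built-in defaults shown in the config file comments, 20000 and 1000.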
 
thanks,
Kris Pal.
 



Re: vacuum pg_attribute causes high load

From
Tom Lane
Date:
kris pal <kristhegambler@yahoo.com> writes:
> * So how can I REINDEX this one ?

Read the REINDEX man page.  There are special hoops you have to jump
through when reindexing system catalogs (fewer in more recent versions,
but I forget what you are running.  Anyway be sure to read the manual
*for your version*).

            regards, tom lane

Re: vacuum pg_attribute causes high load

From
Chris Browne
Date:
kristhegambler@yahoo.com (kris pal) writes:
> The server is running Postgres 7.4.1 on Linux.
>
> I tried to do a "vacuum pg_attribute" but its taking a while (I guess because of the size- abt 2GB) and
> the load on the server is going very high. Because of this the database server is not responding. So I
> killed the psql session.
>
> 1) Can you tell me if there is any other work around to vacuum the
> pg_attribute system table ? Like doing it in parts so that it won't
> cause a high load.

No, you need to let the vacuum finish.

If you stop it early, it'll just increase system load and accomplish
_nothing_.

> 2)
>
>  
>
> Also I want to REINDEX by doing:
>
> ----------------------------------------------------------------------------
>
> drop index pg_attribute_relid_attnam_index;
>
> vacuum pg_attribute;
>
> create the index again;
>
> vacuum analyze pg_attribute;
>
> ----------------------------------------------------------------------------
>
> But I get the : "ERROR:  permission denied: "pg_attribute_relid_attnam_index" is a system catalog"
>
> * So how can I REINDEX this one ?

In order to reindex system tables, you must shut the database down and
go into single-user mode.

<http://www.postgresql.org/docs/current/static/app-postgres.html>

That's a reasonable time to do a VACUUM FULL on pg_attribute; you can
be 100% certain that it won't interfere with anything else.
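A sketch of what that standalone session could look like on this installation; the data directory path and database name are taken from earlier in the thread, but the exact shutdown procedure and which of the standalone backend's -O/-P flags are required depend on the version, so check the REINDEX reference page for 7.4.1 first:

$ pg_ctl stop -D /home/postgres/data
$ postgres -O -P -D /home/postgres/data dds
backend> VACUUM FULL pg_attribute;
backend> REINDEX TABLE pg_attribute;

Exit the standalone backend with Ctrl-D, then restart the postmaster:

$ pg_ctl start -D /home/postgres/data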

> 3) The config file has:
>
> #max_fsm_pages = 20000          # min max_fsm_relations*16, 6 bytes each
> #max_fsm_relations = 1000       # min 100, ~50 bytes each
>
> So I guess Postgres is using the defaults. Where can I find the
> current values or default values. Do you know if I can find some
> reading material abt FSM settings and default values?

At the end of a VACUUM on everything, FSM information is reported, thus:

INFO:  free space map: 605 relations, 2039 pages stored; 10896 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 30000 pages = 237 kB shared memory.
VACUUM

The "10896 total pages needed" indicates how much the vacuum needed;
my database evidently has things set reasonably appropriately.
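The summary above is what a database-wide VACUUM VERBOSE prints at the end; a sketch, with the database name from earlier in the thread:

$ psql -d dds -c 'VACUUM VERBOSE;'

The last few INFO lines of the output are the free space map report.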
--
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
http://cbbrowne.com/info/oses.html
"I don't do drugs anymore 'cause I  find I get the same effect just by
standing up really fast." -- Jonathan Katz

Re: vacuum pg_attribute causes high load

From
kris pal
Date:
 
>>>>>>>>>>>>>>>>>>>>>>>>>
>> 1) Can you tell me if there is any other work around to vacuum the
>> pg_attribute system table ? Like doing it in parts so that it won't
>> cause a high load.

Chris Browne <cbbrowne@acm.org> wrote:
> No, you need to let the vacuum finish.
> If you stop it early, it'll just increase system load and accomplish _nothing_.
>>>>>>>>>>>>>>>>>>>>>>>>>
 
But while doing the vacuum, the load is getting so high that the system is almost freezing.
Does the performance of the VACUUM command depend on other system/database parameters? The "vacuum pg_attribute" command never completes; it causes a high load on the server. I had to wait 15 minutes and then kill it, because the system load was touching 10, slowing down the system and not letting anyone access the server.
 
Can we vacuum "pg_attribute" in parts, so that the vacuum will complete faster and there won't be load issues?

Thanks Tom Lane and Chris Browne for your help so far.
 
-Krishna.



Re: vacuum pg_attribute causes high load

From
Christopher Browne
Date:
After a long battle with technology, kristhegambler@yahoo.com (kris pal), an earthling, wrote:
>  
>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>> 1) Can you tell me if there is any other work around to vacuum the
>>> pg_attribute system table ? Like doing it in parts so that it won't
>>> cause a high load.
>
> Chris Browne <cbbrowne@acm.org> wrote:
>>No, you need to let the vacuum finish.
>>If you stop it early, it'll just increase system load and accomplish _nothing_.
>
>>>>>>>>>>>>>>>>>>>>>>>>>>

> But while doing vacuum the load is getting so high that the system
> is almost freezing.

Right.  You said that the table was consuming 2GB of space.  It has to
read in that 2GB, and discard the bits that aren't necessary.

> Does the performance of 'Vacuum' command depend on other system/
> database parameters?  The 'vacuum pg_attribute" command never
> completes, its causing high load on the server. I had to wait for 15
> mins and kill it.  Because the system load was touching 10, slowing
> down the system therefore not letting anyone access the server.

What's happening, at root, is that the vacuum process loads all the
data in the table (2GB, you indicated) into memory, looks at it, and
writes it back out.

That presumably leads to about 2GB of reads and 2GB of writes, which
takes a while.

If you want this to go as quickly as possible, shut the database down
and start the postmaster in single user mode so that there's no other
activity competing for the system's resources.

> Can we vacuum the "pg_attribute" in parts? So that 'vacuum' will
> complete faster and there won't be load issues.  Thanks Tom Lane and
> Chris Browne for your help so far.

As I said, no, you need to let the vacuum finish.  It's one table, and
must be vacuumed in one piece.  If you stop it early, you're just
wasting your time.  You cannot do it in pieces, and if you did, that
wouldn't make it complete faster anyways.  That would make it take
MORE time.

Do the vacuum, and hold on until it is done.

Once that vacuum is done, later vacuums will take place MUCH quicker,
and given that you are clearly making enormous numbers of modifications
to the table, you need to vacuum it regularly.
--
select 'cbbrowne' || '@' || 'ntlug.org';
http://cbbrowne.com/info/spiritual.html
Real Programmers use: "compress -d > a.out"

Re: vacuum pg_attribute causes high load

From
kris pal
Date:
 
Christopher,
 
Thanks for your detailed response. Now I get the idea of why vacuum is causing so much load. We should have vacuumed the database more often, but didn't, as we are more used to managing Oracle databases. This is the first Postgres instance that I am managing.
 
The data files for the production database ("bbs") are about 6 GB, though the actual data is very small, about 20 MB. I used the dump (from pg_dump of "bbs") of the production database to import into a "test_database"; its data files were < 20 MB.
 
So instead of going through all of that, I am planning to take a downtime of 15 minutes and do the following:
 
1)"pg_dump bmgs"
2)"drop the database  bmgs"  
** (this should essentially free up all the space occupied by pg_attribute and its indexes - right ?? )
3)"recreate database bmgs"
and
4)import the dump from step 1) into bmgs created in  step 3)
 
 
Do you think there will be issues with this approach? The system catalog will be built from scratch, and there won't be any data loss, right? That way I can reclaim the space and then run VACUUM more often.
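In command form, the plan above would look roughly like this (the dump file name is illustrative; the database name is as written in the steps):

$ pg_dump bmgs > bmgs.dump
$ dropdb bmgs
$ createdb bmgs
$ psql -d bmgs -f bmgs.dump

Dropping the database removes its whole subdirectory under base/, bloated pg_attribute and indexes included, so the reloaded copy starts with fresh, compact catalogs.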
 
thanks,
Kris Pal.
 



Re: vacuum pg_attribute causes high load

From
Gaetano Mendola
Date:
kris pal wrote:

>
> Christopher,
>
> Thanks for your detailed response. Now I get the idea why vacuum is
> causing so much load. We should have vacuumed the database more often
> but didn't do it as we are more used to managing Oracle databases. This
> is the first Postgres instance that I am managing.
>
> The datafile for production database ("bbs") is abt 6GB, though the
> actual data is very small, abt 20MB. I used the dump (from pg_dump of
> "bbs") of the production database  to import into a "test_database" -
> its datafile size was < 20MB.
>
> So instead of going through all these, am planning to have down time of
> 15 mins and do the following:
>
> 1)"pg_dump bmgs"
> 2)"drop the database  bmgs"
> ** (this should essentially free up all the space occupied by
> pg_attribute and its indexes - right ?? )
> 3)"recreate database bmgs"
> and
> 4)import the dump from step 1) into bmgs created in  step 3)
>
>
> Do you think there will be issues with this approach? The system catalog
> will be built from scratch, and there won't be any data loss right. That
> way I can reclaim the space and then run Vacuum more often.

Before you do it, be sure that your pg_dump output is not affected by badly
ordered objects; if it is, you have to reorder the object creation (manually)
in the dump. It sometimes happens.


Regards
Gaetano Mendola





Re: vacuum pg_attribute causes high load

From
kris pal
Date:

 

> Before you do it, be sure that your pg_dump output is not affected by badly
> ordered objects; if it is, you have to reorder the object creation (manually)
> in the dump. It sometimes happens.

I used the dump from the production database (bgms) and imported it into a test database (test). It went through fine within a few minutes, so I guess the ordering of the objects is fine. My only concern is about the system catalog.

Do we need anything else from the original database "bgms" other than the dump file to recreate it?

thanks,
Kris Pal.



Re: pg_attribute - Index

From
kris pal
Date:
Thanks to everyone who helped.
Chris, as you said, reading several GB (about 3 GB) and writing the data back out was causing a huge I/O wait, and this caused the heavy load on the server. I tried to vacuum, and even after 40 minutes it was still running. I was not sure how much longer it was going to take, and I had only a one-hour scheduled downtime.
 
So I finally had to drop and reimport the database; it took less than 15 minutes for all of this. It's good now, and I will vacuum it regularly.
 
1) Even if I vacuum regularly, the index "pg_attribute_relid_attnam_index" is growing very fast because of huge data loads and deletes.
 
What is the easiest way to REINDEX system catalog indexes? (I couldn't drop this one even in single-user mode.)
 
thanks,
Kris Pal.



Re: pg_attribute - Index

From
Tom Lane
Date:
kris pal <kristhegambler@yahoo.com> writes:
> 1) Even if I vacuum regularly, the index
> "pg_attribute_relid_attnam_index" is growing very fast because of huge
> data loads and deletes.

You were running an old version, weren't you?  This should be fixed
in 7.4 ...

            regards, tom lane

Re: pg_attribute - Index

From
kris pal
Date:
 
I am running 7.4.1 on Linux 2.4.21-15.ELsmp kernel.
 
thanks,
Kris Pal.



Re: repeat Q: pg_attribute - Index

From
kris pal
Date:
Hi,
 
So can anyone suggest an easy way to REINDEX "pg_attribute_relid_attnam_index" (it belongs to the system catalog)?
I am running 7.4.1 on a Red Hat Linux 2.4.21-15.ELsmp kernel. Frequent data loads/deletes are causing this index to grow at a fast rate.
 
This is becoming a concern. I don't want to drop and reimport the whole database every time.
 
thanks,
Kris Pal.

