Discussion: regression database
kris pal <kristhegambler@yahoo.com> writes:
> I am not sure what the implications of completely deleting the
> 'Regression' database (I don't need the regression testing).

Is it named "Regression" with a capital R? If so, that's not the DB that
is made by the standard regression tests --- that is called "regression".

There is not any reason not to drop the standard regression database, if
you're not using it. But there is no way that that DB would get that
large unless something in your installation is using it. I'd suggest
you find out what.

regards, tom lane
kris pal <kristhegambler@yahoo.com> wrote:
Hi tom,It is actually named 'regression'. I spoke with the person who build the server but apparently he is not well worsed in Postgres. He didn't explicitly create 'regression', I guess he selected some default installation options.I am not sure how to find out what is causing the 'regression' database to increase in size. Its size is not due to the tables with in the 'regression' but due to the system tables like "pg_attribute" with in 'regression' database. If you chould point to a resource which would help me understand how this 'regression' database is handled internally (what activity writes to this database) that would be great.Pls see below, the directory 2034862 corresponds to 'regression':____________________________________________________[postgres@dds base]$ du -sk /home/postgres/data/base/2034862
6236920 /home/postgres/data/base/2034862
[postgres@dds base]$
[postgres@dds base]$ cd /home/postgres/data/base/2034862
[postgres@dds 2034862]$ du -sk * | awk '{if ($1 > 512000) print $1" --> "$2}'
1049604 --> 1249 (over 1GB)
949484 --> 1249.1 (abt 1GB)
1049604 --> 16609
821104 --> 16609.1

The relname for the corresponding relfilenode:
--------------------------------+-------------
pg_attribute                    |  1249
pg_attribute_relid_attnam_index | 16609
--------------------------------+-------------
____________________________________________________

So I am not sure why this thing is growing progressively. Any
feedback/help is greatly appreciated.

thanks,
Kris Pal.
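The du | awk filter above can be checked offline before pointing it at a live data directory; a minimal sketch that feeds it the sizes from this thread as sample input (the numbers and file names are the illustrative values shown above, nothing is read from disk):

```shell
# Feed du -sk style output (size in KB, then name) through the same
# >512000 KB filter, also converting KB to GB for readability.
printf '1049604 1249\n949484 1249.1\n16 1247\n' |
awk '{ if ($1 > 512000) printf "%s --> %s (%.1f GB)\n", $1, $2, $1/1048576 }'
```

Anything the filter prints is a relfilenode big enough to be worth mapping back to a relname through pg_class, as done for 1249 and 16609 above.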
Tom Lane <tgl@sss.pgh.pa.us> wrote:
kris pal writes:
> I am not sure what the implications of completely deleting the
> 'Regression' database (I don't need the regression testing).
Is it named "Regression" with a capital R? If so, that's not the DB that
is made by the standard regression tests --- that is called "regression".
There is not any reason not to drop the standard regression database, if
you're not using it. But there is no way that that DB would get that
large unless something in your installation is using it. I'd suggest
you find out what.
regards, tom lane
Tom Lane <tgl@sss.pgh.pa.us> wrote:
kris pal writes:
> Actually I just noticed something. Previously for some reason, I
> thought the data directory corresponded to 'regression'. But I double
> checked it now with oid2name and it corresponds to our main database
> say "dds".
Ah, that makes more sense.
> But the issue still remains. In the sense that pg_attribute etc are
> still so huge and I have no idea why.
Probably because you've created and deleted a lot of tables (does your
app use temp tables a lot?). If you aren't good about vacuuming the
system catalogs then you're going to get table bloat. If you're using
something pre-7.4 then you may get index bloat even if you *are*
vacuuming regularly :-(.
My suggestions are to update to 7.4, if you're not there already
(if you are, you may need to do VACUUM FULL and REINDEX to get rid
of the bloat); then adopt a stricter regime of routine vacuuming.
And check your FSM settings.
regards, tom lane
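Tom's "stricter regime of routine vacuuming" can be automated rather than remembered; a sketch of a crontab fragment for the postgres user, assuming the `vacuumdb` client shipped with 7.4 is on its PATH and that a quiet nightly window exists (both assumptions, not anything stated in the thread):

```shell
# Hypothetical crontab entry: nightly VACUUM ANALYZE of all databases,
# with output kept so the reported free-space-map numbers can be reviewed.
0 3 * * * vacuumdb --all --analyze >> /home/postgres/vacuumdb.log 2>&1
```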
kris pal <kristhegambler@yahoo.com> writes:
> * So how can I REINDEX this one ?

Read the REINDEX man page. There are special hoops you have to jump
through when reindexing system catalogs (fewer in more recent versions,
but I forget what you are running. Anyway be sure to read the manual
*for your version*).

regards, tom lane
kristhegambler@yahoo.com (kris pal) writes:
> The server is running Postgres 7.4.1 on Linux.
>
> I tried to do a "vacuum pg_attribute" but its taking a while (I guess
> because of the size - abt 2GB) and the load on the server is going very
> high. Because of this the database server is not responding. So I
> killed the psql session.
>
> 1) Can you tell me if there is any other work around to vacuum the
> pg_attribute system table ? Like doing it in parts so that it won't
> cause a high load.

No, you need to let the vacuum finish. If you stop it early, it'll just
increase system load and accomplish _nothing_.

> 2) Also I want to REINDEX by doing:
> ----------------------------------------------------------------------------
> drop index pg_attribute_relid_attnam_index;
> vacuum pg_attribute;
> create the index again;
> vacuum analyze pg_attribute;
> ----------------------------------------------------------------------------
> But I get the: "ERROR: permission denied:
> "pg_attribute_relid_attnam_index" is a system catalog"
>
> * So how can I REINDEX this one ?

In order to reindex system tables, you must shut the database down and go
in in single user mode.
<http://www.postgresql.org/docs/current/static/app-postgres.html>

That's a reasonable time to do a VACUUM FULL on pg_attribute; you can be
100% certain that it won't interfere with anything else.

> 3) The config file has:
>
> #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each
> #max_fsm_relations = 1000 # min 100, ~50 bytes each
>
> So I guess Postgres is using the defaults. Where can I find the
> current values or default values. Do you know if I can find some
> reading material abt FSM settings and default values?

At the end of a VACUUM on everything, FSM information is reported, thus:

INFO: free space map: 605 relations, 2039 pages stored; 10896 total pages needed
DETAIL: Allocated FSM size: 1000 relations + 30000 pages = 237 kB shared memory.
VACUUM

The "10896 total pages needed" indicates how much the vacuum needed; my
database evidently has things set reasonably appropriately.
--
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
http://cbbrowne.com/info/oses.html
"I don't do drugs anymore 'cause I find I get the same effect just by
standing up really fast." -- Jonathan Katz
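The two numbers in that VACUUM report can be compared mechanically: if "total pages needed" ever exceeds the pages the FSM can track, free space goes unrecorded and bloat follows. A sketch using the figures quoted above (10896 needed against 30000 allocated; substitute the output of your own VACUUM VERBOSE):

```shell
# Compare "total pages needed" (from the VACUUM report) against the
# allocated FSM size (max_fsm_pages). Values are the ones quoted above.
needed=10896
max_fsm_pages=30000
if [ "$needed" -le "$max_fsm_pages" ]; then
  echo "FSM is large enough ($needed of $max_fsm_pages pages tracked)"
else
  echo "raise max_fsm_pages to at least $needed and restart"
fi
```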
Christopher Browne <cbbrowne@acm.org> wrote:
After a long battle with technology, kristhegambler@yahoo.com (kris pal), an earthling, wrote:
>
>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>> 1) Can you tell me if there is any other work around to vacuum the
>>> pg_attribute system table ? Like doing it in parts so that it won't
>> >cause a high load.
>
> Chris Browne wrote:
>>No, you need to let the vacuum finish.
>>If you stop it early, it'll just increase system load and accomplish _nothing_.
>
>>>>>>>>>>>>>>>>>>>>>>>>>>
> But while doing vacuum the load is getting so high that the system
> is almost freezing.
Right. You said that the table was consuming 2GB of space. It has to
read in that 2GB, and discard the bits that aren't necessary.
> Does the performance of 'Vacuum' command depend on other system/
> database parameters? The 'vacuum pg_attribute" command never
> completes, its causing high load on the server. I had to wait for 15
> mins and kill it. Because the system load was touching 10, slowing
> down the system therefore not letting anyone access the server.
What's happening, is, at root, that the vacuum process loads all the
data in the table (2GB, you indicated) into memory, looks at it, and
writes it back out.
That presumably leads to about 2GB of reads and 2GB of writes, which
takes a while.
If you want this to go as quickly as possible, shut the database down
and start the postmaster in single user mode so that there's no other
activity competing for the system's resources.
> Can we vacuum the "pg_attribute" in parts? So that 'vacuum' will
> complete faster and there won't be load issues. Thanks Tom Lane and
> Chris Browne for your help so far.
As I said, no, you need to let the vacuum finish. It's one table, and
must be vacuumed in one piece. If you stop it early, you're just
wasting your time. You cannot do it in pieces, and if you did, that
wouldn't make it complete faster anyways. That would make it take
MORE time.
Do the vacuum, and hold on until it is done.
Once that vacuum is done, later vacuums will take place MUCH quicker,
and in that you are clearly making enormous numbers of modifications
to the table, you need to vacuum the table regularly.
--
select 'cbbrowne' || '@' || 'ntlug.org';
http://cbbrowne.com/info/spiritual.html
Real Programmers use: "compress -d > a.out"
---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend
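Chris's "2GB of reads and 2GB of writes" gives a rough lower bound on how long the vacuum must run; a sketch of the arithmetic with an assumed disk throughput (the 30 MB/s figure is illustrative, not a measurement from this server):

```shell
# Lower bound on VACUUM wall time: the table must be read and rewritten.
table_mb=2048        # ~2GB pg_attribute, per the thread
disk_mb_per_s=30     # assumed sustained throughput (illustrative)
total_io_mb=$((table_mb * 2))
echo "at least $((total_io_mb / disk_mb_per_s)) seconds of pure I/O"
```

Under these assumptions that is over two minutes of uninterrupted I/O even with no other load, which is consistent with the advice to let it finish rather than kill it at the 15-minute mark.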
kris pal wrote:
> Christopher,
>
> Thanks for your detailed response. Now I get the idea why vacuum is
> causing so much load. We should have vacuumed the database more often
> but didn't do it, as we are more used to managing Oracle databases. This
> is the first Postgres instance that I am managing.
>
> The datafile for the production database ("bbs") is abt 6GB, though the
> actual data is very small, abt 20MB. I used the dump (from pg_dump of
> "bbs") of the production database to import into a "test_database" -
> its datafile size was < 20MB.
>
> So instead of going through all these, am planning to have down time of
> 15 mins and do the following:
>
> 1) "pg_dump bmgs"
> 2) "drop the database bmgs"
>    ** (this should essentially free up all the space occupied by
>    pg_attribute and its indexes - right ?? )
> 3) "recreate database bmgs"
> 4) import the dump from step 1) into bmgs created in step 3)
>
> Do you think there will be issues with this approach? The system catalog
> will be built from scratch, and there won't be any data loss, right?
> That way I can reclaim the space and then run Vacuum more often.

Before you do it, be sure that your pg_dump is not affected by
badly-ordered objects; if it is, you have to reorder the object creation
(manually) in the dump. That sometimes happens.

Regards
Gaetano Mendola
> Before you do it, be sure that your pg_dump is not affected by
> badly-ordered objects; if it is, you have to reorder the object creation
> (manually) in the dump. That sometimes happens.
I used the dump from the production database (bgms) and imported it into
a test database (test). It went through fine within a few minutes. So I
guess the ordering of the objects is fine. My only concern is about the
system catalog.

Do we need anything else from the original database "bgms" other than the
dump file to recreate it?
thanks,
Kris Pal.
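The four steps of the plan can be captured in one place for review; a dry-run sketch that only prints the 7.4 client invocations it would run (the dump path is an assumption; run the real commands during the downtime window, and keep the dump file until the reload is verified):

```shell
# Dry run of the dump / drop / recreate / reload cycle: nothing is
# executed, each step is just printed for inspection.
db=bmgs                 # database name from the thread
dump=/tmp/${db}.dump    # assumed dump location
for cmd in \
  "pg_dump $db > $dump" \
  "dropdb $db" \
  "createdb $db" \
  "psql -d $db -f $dump"
do
  echo "would run: $cmd"
done
```

Dropping the database does reclaim the catalog space, since pg_attribute and its indexes live inside the per-database directory removed by DROP DATABASE.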
Tom Lane <tgl@sss.pgh.pa.us> wrote:
kris pal writes:
> 1) Even if I vacuum regularly, the index
> "pg_attribute_relid_attnam_index" is growing very fast because of huge
> data loads and deletes.
You were running an old version, weren't you? This should be fixed
in 7.4 ...
regards, tom lane
kris pal <kristhegambler@yahoo.com> wrote:
I am running 7.4.1 on the Linux 2.4.21-15.ELsmp kernel.

thanks,
Kris Pal.
Tom Lane <tgl@sss.pgh.pa.us> wrote:
kris pal writes:
> 1) Even if I vacuum regularly, the index
> "pg_attribute_relid_attnam_index" is growing very fast because of huge
> data loads and deletes.
You were running an old version, weren't you? This should be fixed
in 7.4 ...
regards, tom lane