Thread: commits in functions


commits in functions

From:
"Sriram Dandapani"
Date:

Hi

Is there a way to issue commits in long-running functions? Currently, I make calls out to Java functions for transaction blocks that need committing at frequent intervals.

The begin..end savepoint feature is nice, but what I am looking for is a way to reduce resources/log file generation by issuing frequent commits.
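A workaround often suggested for this (a sketch only, assuming the contrib/dblink module is installed; the connection string, function, and table names are illustrative) is to push the work through dblink. Each statement sent via dblink_exec runs in a separate backend session, so it commits independently of the calling function's transaction:

```sql
-- Sketch: "autonomous" commits from inside a function via contrib/dblink.
-- Assumes dblink is installed; connection string and names are illustrative.
CREATE OR REPLACE FUNCTION process_batches() RETURNS void AS $$
DECLARE
    i integer;
BEGIN
    FOR i IN 1..1000 LOOP
        -- This statement runs in a separate session and commits
        -- immediately, regardless of the outer transaction.
        PERFORM dblink_exec('dbname=mydb',
                            'UPDATE work_queue SET done = true WHERE batch = ' || i);
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

The trade-off is the overhead of a second connection, and that work committed through dblink will not roll back if the outer function later aborts.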

Orphan files

From:
dzung nguyen the
Date:
Our database files are becoming bigger and bigger.
We checked OID with filenodes and found some filenodes
that don't have a corresponding OID.

Can I delete these files? Or please give me some advice.


Re: Orphan files

From:
Tom Lane
Date:
dzung nguyen the <dzungthenguyen@yahoo.com> writes:
> Our database files are becoming bigger and bigger.
> We checked OID with filenodes and found some filenodes
> that don't have a corresponding OID.

OID?  Or relfilenode?  Please read
http://www.postgresql.org/docs/8.1/static/storage.html
pretty darn carefully before you go around deleting files
that you don't know the purpose of.
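To make the distinction concrete (a sketch; the filenode value is illustrative, substitute a file name from your data directory): files on disk are named after pg_class.relfilenode, not after the relation's OID, and the two can differ after operations like TRUNCATE or CLUSTER. You can check whether a file corresponds to a live relation with:

```sql
-- Map an on-disk filenode back to its relation, if any.
-- The value 16384 is illustrative; use a file name from your
-- database directory.
SELECT relname, oid, relfilenode
FROM pg_class
WHERE relfilenode = 16384;
```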

            regards, tom lane

Re: Orphan files

From:
Robin Iddon
Date:
> Can I delete these files ? Or give me an advice, please.
>

When I had a similar problem it turned out that I had exceeded the
limits of my FSM and thus the stats table in particular was growing as
it was unable to reuse rows.

Check your FSM size in postgresql.conf and see what:

max_fsm_pages
max_fsm_relations

are set to.  Note that these are set for the whole cluster, not per database.
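For reference, the relevant postgresql.conf lines look like the following (the values shown are illustrative, not recommendations; changing them requires a server restart in these versions):

```
max_fsm_pages = 200000        # free-space map slots, shared across the cluster
max_fsm_relations = 1000      # number of relations tracked in the FSM
```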

To recover the lost space you will need to do a vacuum full at some
point, I think.  This is going to take a while and it's going to affect
people's access to the database, so do it when the database would
otherwise be idle if at all possible.  In any case if you run

vacuumdb -avf

it will compact your tables and at the end, it prints out the FSM
settings you should have had for the expired data space that was in your
database.  If those numbers are larger than the settings you have, then
you have blown the FSM and need to increase them or increase the
frequency at which you vacuum.  Remember FSM is cluster-wide so you must
use "-a" above otherwise the FSM output at the end won't be meaningful.

The problem I had was that (a) my FSM was too small anyway but (b) I had
left the default settings for pg_autovacuum (I use 8.0 at present) and
these weren't right - some tables were never vacuumed.

Note also that indexes sometimes need to be reindexed to recover free
space.  Exactly how you do that depends on which version of postgres you
are running.
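As a sketch (table and index names are illustrative), on the versions discussed here you can rebuild indexes with REINDEX, which locks the table exclusively while it runs:

```sql
-- Rebuild every index on one table (takes an exclusive lock)
REINDEX TABLE my_table;

-- Or rebuild just one index
REINDEX INDEX my_table_pkey;
```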

Finally, to see how big PG thinks your tables are try querying pg_class
and check out relpages/reltuples (but you also need to understand how
toast works to get the full picture - hence read the storage documentation).
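A query along these lines (a sketch; the LIMIT is arbitrary) shows the largest relations as PostgreSQL sees them:

```sql
-- relpages is in 8 kB blocks; relpages/reltuples are only updated by
-- VACUUM/ANALYZE, so they may be stale.  TOAST tables appear with
-- names like pg_toast_NNN and hold out-of-line data for their parent.
SELECT relname, relkind, relpages, reltuples
FROM pg_class
ORDER BY relpages DESC
LIMIT 20;
```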