Analyze on large changes...

From:
"Rod Taylor"
Date:
I've run into an interesting issue.  A very long-running transaction
doing data loads is getting quite slow.  I really don't want to break
up the transactions (and for now it's OK), but it makes me wonder what
exactly ANALYZE counts.

Since dead, or yet-to-be-visible, tuples affect the plan that should be
taken (until vacuum, anyway), are these numbers reflected in the stats
anywhere?

I took an empty table and, within a transaction, inserted a number of
records; before committing I ran ANALYZE.

ANALYZE obviously saw the table as empty, as the pg_statistic row for
that relation didn't exist.

After committing, I analyzed again and the values were taken into account.


Certainly for large data loads, doing an ANALYZE on the table after a
substantial (non-committed) change has taken place would be worthwhile
for all elements involved.  An index scan on the visible records may
be faster, but over the actual tuples in the table a sequential scan
might be best.

Of course, for small transactions no effect will be seen.  But this
may help with huge data loads, especially where triggers or
constraints are in effect.
--
Rod



Re: Analyze on large changes...

From:
Tom Lane
Date:
"Rod Taylor" <rbt@zort.ca> writes:
> Since dead, or yet-to-be-visible, tuples affect the plan that should be
> taken (until vacuum, anyway), are these numbers reflected in the stats
> anywhere?

Analyze just uses SnapshotNow visibility rules, so it sees the same set
of tuples that you would see if you did a SELECT.

It might be interesting to try to estimate the fraction of dead tuples
in the table, though I'm not sure quite how to fold that into the cost
estimates.  [ thinks... ]  Actually I think we might just be
double-counting if we did.  The dead tuples surely should not count as
part of the number of returned rows.  We already do account for the
I/O effort to read them (because I/O is estimated based on the total
number of blocks in the table, which will include the space used by
dead tuples).  We're only missing the CPU time involved in the tuple
validity check, which is pretty small.
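
To make that accounting concrete, here is a rough sketch in C of the
argument (the constant values and all names are illustrative, not the
planner's actual symbols; the real logic lives in costsize.c):

#define PAGE_IO_COST         1.0      /* illustrative cost units */
#define CPU_TUPLE_COST       0.01
#define VALIDITY_CHECK_COST  0.0025

typedef double Cost;

/*
 * Seqscan cost for a table containing dead tuples.  I/O is charged per
 * block, so the space occupied by dead tuples is already paid for; only
 * live tuples count as returned rows.  The one unmodeled term is the
 * (small) CPU cost of validity-checking each dead tuple.
 */
Cost
seqscan_cost_sketch(double total_blocks, double live_tuples,
                    double dead_tuples)
{
    Cost io_cost  = total_blocks * PAGE_IO_COST;        /* includes dead space */
    Cost cpu_cost = live_tuples * CPU_TUPLE_COST;       /* returned rows only */
    Cost missing  = dead_tuples * VALIDITY_CHECK_COST;  /* not modeled today */

    return io_cost + cpu_cost + missing;
}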

> I took an empty table and, within a transaction, inserted a number of
> records; before committing I ran ANALYZE.

I tried to repeat this:

regression=# begin;
BEGIN
regression=# create table foo (f1 int);
CREATE
regression=# insert into foo [ ... some data ... ]

regression=# analyze foo;
ERROR:  ANALYZE cannot run inside a BEGIN/END block

This seems a tad silly; I can't see any reason why ANALYZE couldn't be
done inside a BEGIN block.  I think this is just a hangover from
ANALYZE's origins as part of VACUUM.  Can anyone see a reason not to
allow it?
        regards, tom lane


Re: Analyze on large changes...

From:
Lincoln Yeoh
Date:
Hi Tom,

(Please correct me where I'm wrong)

Is it possible to reduce the performance impact of dead tuples, especially
when the index is used?  Right now performance degrades gradually until we
vacuum (something like a 1/x curve).

My limited understanding of the current behaviour is that the search for
a valid row's tuple goes from older tuples to newer ones via forward links
(based on some old docs [1]).

How about searching from newer tuples to older tuples instead, using
backward links?

My assumption is that newer tuples are more likely to be wanted than older
ones, so the number of tuples to search through would be smaller this way.

**If index update is OK:
If a tuple is inserted, the index record is updated to point to the
inserted tuple, and the inserted tuple is made to point to the previous tuple.
e.g.

Index-> old tuple->older tuple->oldest tuple
Index-> New tuple->old tuple->older tuple->oldest tuple

**If index update is not desirable:
The index points to the first tuple (valid or not).

If a tuple is inserted, the first tuple is updated to point to the newly
inserted tuple, and the inserted tuple is made to point to the previous tuple.
e.g.

Index-> first tuple->old tuple->older tuple->oldest tuple
Index-> first tuple-> New tuple->old tuple->older tuple->oldest tuple

If this were done, performance might not deteriorate as much when using
index scans, right?  I'm not sure whether backward links would help for
sequential scans, which are usually best done forward.

Regards,
Link.

[1] http://developer.postgresql.org/pdf/transactions.pdf
Tuple headers contain:
• xmin: transaction ID of inserting transaction
• xmax: transaction ID of replacing/ deleting transaction (initially NULL)
• forward link: link to newer version of same logical row, if any
Basic idea: a tuple is visible if xmin is valid and xmax is not.  "Valid"
means "either committed or the current transaction".
If we plan to update rather than delete, we first add new version of row
to table, then set xmax and forward link in old tuple. Forward link will
be needed by concurrent updaters (but not by readers).
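
As a simplified C sketch of the header fields this excerpt describes (the
names are illustrative; the real HeapTupleHeaderData has more fields, and
the forward link is kept in the t_ctid item pointer):

/*
 * Simplified picture of the per-tuple MVCC header described above.
 * Illustrative only -- see src/include/access/htup.h for the real layout.
 */
typedef unsigned int TransactionId;

typedef struct ItemPointerSketch
{
    unsigned int block;         /* heap block number */
    unsigned int offset;        /* line pointer within the block */
} ItemPointerSketch;

typedef struct TupleHeaderSketch
{
    TransactionId     xmin;     /* inserting transaction ID */
    TransactionId     xmax;     /* replacing/deleting transaction ID,
                                 * initially invalid */
    ItemPointerSketch forward;  /* link to newer version of the same
                                 * logical row, if any */
} TupleHeaderSketch;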

At 10:53 AM 5/1/02 -0400, Tom Lane wrote:
>estimates.  [ thinks... ]  Actually I think we might just be
>double-counting if we did.  The dead tuples surely should not count as
>part of the number of returned rows.  We already do account for the
>I/O effort to read them (because I/O is estimated based on the total





Re: Analyze on large changes...

From:
Tom Lane
Date:
Lincoln Yeoh <lyeoh@pop.jaring.my> writes:
> My limited understanding of the current behaviour is that the search for
> a valid row's tuple goes from older tuples to newer ones via forward links

No.  Each tuple is independently indexed and independently visited.
Given the semantics of MVCC I think that's correct --- after all, what's
dead to you is not necessarily dead to someone else.

There's been some speculation about trying to determine whether a dead
tuple is dead to everyone (essentially the same test VACUUM makes), and
if so propagating a marker back to the index tuple so that future
indexscans don't have to make useless visits to the heap tuple.
(I don't think we want to try to actually remove the index tuple; that's
VACUUM's job, and there are locking problems if we try to make it happen
in plain SELECTs.  But we could set a marker bit in the index entry.)
Under normal circumstances where all transactions are short, it wouldn't
be very long before a dead tuple could be marked, so this should fix the
performance issue.  Doing it in a reasonably clean fashion is the sticky
part.
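
In rough C terms, the speculation looks like this (every helper name here
is hypothetical; this is a sketch of the idea, not backend code):

#include <stdbool.h>

/* Opaque stand-ins for the backend's index and heap tuple types. */
typedef struct IndexTupleData *IndexTuple;
typedef struct HeapTupleData  *HeapTuple;

/* Hypothetical helpers. */
extern bool index_entry_marked_dead(IndexTuple itup);
extern bool tuple_visible_to_snapshot(HeapTuple htup);
extern bool tuple_dead_to_everyone(HeapTuple htup);    /* VACUUM's test */
extern void mark_index_entry_dead(IndexTuple itup);

/*
 * On an indexscan visit: if the heap tuple is dead to every transaction,
 * set a marker bit in the index entry so future scans skip the useless
 * heap visit.  The index tuple itself is not removed; that stays VACUUM's
 * job, avoiding the locking problems of doing it in plain SELECTs.
 */
static bool
indexscan_visit(IndexTuple itup, HeapTuple htup)
{
    if (index_entry_marked_dead(itup))
        return false;                   /* skip the heap visit entirely */

    if (tuple_visible_to_snapshot(htup))
        return true;                    /* normal hit */

    if (tuple_dead_to_everyone(htup))
        mark_index_entry_dead(itup);    /* propagate marker back */

    return false;
}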
        regards, tom lane


Search from newer tuples first, vs older tuples first?

From:
Lincoln Yeoh
Date:
At 02:10 PM 5/1/02 -0400, Tom Lane wrote:
>Lincoln Yeoh <lyeoh@pop.jaring.my> writes:
> > My limited understanding of the current behaviour is that the search for
> > a valid row's tuple goes from older tuples to newer ones via forward links
>
>No.  Each tuple is independently indexed and independently visited.
>Given the semantics of MVCC I think that's correct --- after all, what's
>dead to you is not necessarily dead to someone else.

But does PostgreSQL visit the older tuples first, moving to the newer ones,
or the newer ones first?  From observation it seems to be starting from the
older ones.  I'm thinking visiting the newer ones first would be better.
Would that reduce the slowing-down effect?

Anyway, are you saying:
Index row X entry #1 -> oldest tuple
...
Index row X entry #2 -> older tuple
...
Index row X entry #3 -> old tuple
...
Index row X entry #4 -> just inserted tuple

And a search for a valid tuple goes through each index entry and visits 
each tuple to see if it is visible.

That seems like a lot of work to do; are there any docs/URLs which explain
this?  Are the index tuples for the same row generally in the same physical
location?

Whereas the following still looks like less work while remaining compatible
with MVCC:
index tuple -> new tuple -> rolled back tuple -> old tuple -> older tuple.

Just one index tuple per row. The tuples are checked from newer to older 
for visibility via backward links.

The docs I mentioned say updates use the forward links. Repeated updates 
definitely slow down, so backward links might help?

Regards,
Link.





Re: Search from newer tuples first, vs older tuples first?

From:
Tom Lane
Date:
Lincoln Yeoh <lyeoh@pop.jaring.my> writes:
> But does PostgreSQL visit the older tuples first, moving to the newer ones,
> or the newer ones first?

It's going to visit them *all*.  Reordering won't improve the
performance.

FWIW I think that with the present implementation of btree, the newer
tuples actually will be visited first --- when inserting a duplicate
key, the new entry will be inserted to the left of the equal key(s)
already present.  But it doesn't matter.  The only way to speed this
up is to eliminate some of the visitings, which requires keeping more
info in the index than we presently do.
        regards, tom lane


Re: Search from newer tuples first, vs older tuples first?

From:
Lincoln Yeoh
Date:
At 12:49 AM 5/2/02 -0400, Tom Lane wrote:
>Lincoln Yeoh <lyeoh@pop.jaring.my> writes:
> > But does PostgreSQL visit the older tuples first, moving to the newer ones,
> > or the newer ones first?
>
>It's going to visit them *all*.  Reordering won't improve the
>performance.

Ack! I thought it went through them till the first valid tuple and was just 
going the wrong way.

>FWIW I think that with the present implementation of btree, the newer
>tuples actually will be visited first --- when inserting a duplicate
>key, the new entry will be inserted to the left of the equal key(s)
>already present.  But it doesn't matter.  The only way to speed this
>up is to eliminate some of the visitings, which requires keeping more
>info in the index than we presently do.

OK I'm starting to get it :). Will the index behaviour be changed soon?

Hmm, then what are the row tuple forward links for? Why forward?

Regards,
Link.



Re: Search from newer tuples first, vs older tuples first?

From:
Tom Lane
Date:
Lincoln Yeoh <lyeoh@pop.jaring.my> writes:
> OK I'm starting to get it :). Will the index behaviour be changed soon?

When someone steps up and does it.  I've learned not to predict
schedules for this project.

> Hmm, then what are the row tuple forward links for? Why forward?

Updates in READ COMMITTED mode have to be able to find the newest
version of a tuple they've already found.
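
In other words, something like this sketch (helper names are illustrative;
the real chain-following logic lives in the executor's READ COMMITTED
update handling):

#include <stdbool.h>

typedef struct HeapTupleData *HeapTuple;

/* Hypothetical helpers standing in for backend internals. */
extern bool      tuple_was_replaced(HeapTuple tup);         /* xmax set? */
extern bool      replacing_xact_committed(HeapTuple tup);
extern HeapTuple fetch_via_forward_link(HeapTuple tup);

/*
 * Starting from a version we already found, chase the forward links to
 * reach the newest committed version of the same logical row.
 */
static HeapTuple
find_newest_version(HeapTuple tuple)
{
    while (tuple_was_replaced(tuple) && replacing_xact_committed(tuple))
        tuple = fetch_via_forward_link(tuple);

    return tuple;
}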
        regards, tom lane


Re: Search from newer tuples first, vs older tuples first?

From:
Bruce Momjian
Date:
Tom Lane wrote:
> Lincoln Yeoh <lyeoh@pop.jaring.my> writes:
> > OK I'm starting to get it :). Will the index behaviour be changed soon?
> 
> When someone steps up and does it.  I've learned not to predict
> schedules for this project.

It is not that hard to implement, just messy.  When the index returns a
heap row and the heap row is viewed for visibility, if _no_one_ can see
the row, the index can be marked as expired.  It could be a single bit
in the index tuple, and doesn't need to be flushed to disk, though the
index page has to be marked as dirty.  However, we are going to need to
flush a pre-change image to WAL so it may as well be handled as a normal
index page change.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: Search from newer tuples first, vs older tuples first?

From:
Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> It is not that hard to implement, just messy.  When the index returns a
> heap row and the heap row is viewed for visibility, if _no_one_ can see
> the row, the index can be marked as expired.  It could be a single bit
> in the index tuple, and doesn't need to be flushed to disk, though the
> index page has to be marked as dirty.  However, we are going to need to
> flush a pre-change image to WAL so it may as well be handled as a normal
> index page change.

This did actually get done while you were on vacation.  It does *not*
need a WAL entry, on the same principle that setting XMIN_COMMITTED,
XMAX_ABORTED, etc hint bits do not need WAL entries --- namely the
bits can always get set again if they are lost in a crash.
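
The pattern, as a fragment (HEAP_XMIN_COMMITTED, t_infomask,
TransactionIdDidCommit(), and SetBufferCommitInfoNeedsSave() are backend
symbols of this era; xmin_of() stands in for the actual field access):

/*
 * Hint-bit pattern: the bit is derived state, recomputable from pg_clog,
 * so we dirty the buffer but write no WAL record.  If the update is lost
 * in a crash, the next reader simply sets the bit again.
 */
if (!(tuple->t_infomask & HEAP_XMIN_COMMITTED) &&
    TransactionIdDidCommit(xmin_of(tuple)))
{
    tuple->t_infomask |= HEAP_XMIN_COMMITTED;
    SetBufferCommitInfoNeedsSave(buffer);   /* mark dirty, but no WAL */
}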
        regards, tom lane


Re: Search from newer tuples first, vs older tuples first?

From:
Bruce Momjian
Date:
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > It is not that hard to implement, just messy.  When the index returns a
> > heap row and the heap row is viewed for visibility, if _no_one_ can see
> > the row, the index can be marked as expired.  It could be a single bit
> > in the index tuple, and doesn't need to be flushed to disk, though the
> > index page has to be marked as dirty.  However, we are going to need to
> > flush a pre-change image to WAL so it may as well be handled as a normal
> > index page change.
> 
> This did actually get done while you were on vacation.  It does *not*
> need a WAL entry, on the same principle that setting XMIN_COMMITTED,
> XMAX_ABORTED, etc hint bits do not need WAL entries --- namely the
> bits can always get set again if they are lost in a crash.

Oh, thanks.  That is great news.  I am having trouble determining when a
thread ends, so please be patient with me.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: Search from newer tuples first, vs older tuples first?

From:
Bruce Momjian
Date:
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > It is not that hard to implement, just messy.  When the index returns a
> > heap row and the heap row is viewed for visibility, if _no_one_ can see
> > the row, the index can be marked as expired.  It could be a single bit
> > in the index tuple, and doesn't need to be flushed to disk, though the
> > index page has to be marked as dirty.  However, we are going to need to
> > flush a pre-change image to WAL so it may as well be handled as a normal
> > index page change.
> 
> This did actually get done while you were on vacation.  It does *not*
> need a WAL entry, on the same principle that setting XMIN_COMMITTED,
> XMAX_ABORTED, etc hint bits do not need WAL entries --- namely the
> bits can always get set again if they are lost in a crash.

TODO item marked as done:
* -Add deleted bit to index tuples to reduce heap access

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: Analyze on large changes...

From:
Bruce Momjian
Date:
Tom Lane wrote:
> I tried to repeat this:
>
> regression=# begin;
> BEGIN
> regression=# create table foo (f1 int);
> CREATE
> regression=# insert into foo [ ... some data ... ]
>
> regression=# analyze foo;
> ERROR:  ANALYZE cannot run inside a BEGIN/END block
>
> This seems a tad silly; I can't see any reason why ANALYZE couldn't be
> done inside a BEGIN block.  I think this is just a hangover from
> ANALYZE's origins as part of VACUUM.  Can anyone see a reason not to
> allow it?

The following patch allows ANALYZE to be run inside a transaction.
VACUUM and VACUUM ANALYZE still cannot be run inside a transaction.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
Index: src/backend/commands/analyze.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/commands/analyze.c,v
retrieving revision 1.35
diff -c -r1.35 analyze.c
*** src/backend/commands/analyze.c    24 May 2002 18:57:55 -0000    1.35
--- src/backend/commands/analyze.c    11 Jun 2002 21:38:51 -0000
***************
*** 156,170 ****
          elevel = DEBUG1;

      /*
-      * Begin a transaction for analyzing this relation.
-      *
-      * Note: All memory allocated during ANALYZE will live in
-      * TransactionCommandContext or a subcontext thereof, so it will all
-      * be released by transaction commit at the end of this routine.
-      */
-     StartTransactionCommand();
-
-     /*
       * Check for user-requested abort.    Note we want this to be inside a
       * transaction, so xact.c doesn't issue useless WARNING.
       */
--- 156,161 ----
***************
*** 177,186 ****
      if (!SearchSysCacheExists(RELOID,
                                ObjectIdGetDatum(relid),
                                0, 0, 0))
-     {
-         CommitTransactionCommand();
          return;
-     }

      /*
       * Open the class, getting only a read lock on it, and check
--- 168,174 ----
***************
*** 196,202 ****
              elog(WARNING, "Skipping \"%s\" --- only table or database owner can ANALYZE it",
                   RelationGetRelationName(onerel));
          relation_close(onerel, AccessShareLock);
-         CommitTransactionCommand();
          return;
      }

--- 184,189 ----
***************
*** 211,217 ****
              elog(WARNING, "Skipping \"%s\" --- can not process indexes, views or special system tables",
                   RelationGetRelationName(onerel));
          relation_close(onerel, AccessShareLock);
-         CommitTransactionCommand();
          return;
      }

--- 198,203 ----
***************
*** 222,228 ****
          strcmp(RelationGetRelationName(onerel), StatisticRelationName) == 0)
      {
          relation_close(onerel, AccessShareLock);
-         CommitTransactionCommand();
          return;
      }

--- 208,213 ----
***************
*** 283,289 ****
      if (attr_cnt <= 0)
      {
          relation_close(onerel, NoLock);
-         CommitTransactionCommand();
          return;
      }

--- 268,273 ----
***************
*** 370,378 ****
       * entries we made in pg_statistic.)
       */
      relation_close(onerel, NoLock);
-
-     /* Commit and release working memory */
-     CommitTransactionCommand();
  }

  /*
--- 354,359 ----
Index: src/backend/commands/vacuum.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/commands/vacuum.c,v
retrieving revision 1.226
diff -c -r1.226 vacuum.c
*** src/backend/commands/vacuum.c    24 May 2002 18:57:56 -0000    1.226
--- src/backend/commands/vacuum.c    11 Jun 2002 21:38:59 -0000
***************
*** 110,117 ****


  /* non-export function prototypes */
- static void vacuum_init(VacuumStmt *vacstmt);
- static void vacuum_shutdown(VacuumStmt *vacstmt);
  static List *getrels(const RangeVar *vacrel, const char *stmttype);
  static void vac_update_dbstats(Oid dbid,
                     TransactionId vacuumXID,
--- 110,115 ----
***************
*** 178,190 ****
       * user's transaction too, which would certainly not be the desired
       * behavior.
       */
!     if (IsTransactionBlock())
          elog(ERROR, "%s cannot run inside a BEGIN/END block", stmttype);

      /* Running VACUUM from a function would free the function context */
!     if (!MemoryContextContains(QueryContext, vacstmt))
          elog(ERROR, "%s cannot be executed from a function", stmttype);
!
      /*
       * Send info about dead objects to the statistics collector
       */
--- 176,188 ----
       * user's transaction too, which would certainly not be the desired
       * behavior.
       */
!     if (vacstmt->vacuum && IsTransactionBlock())
          elog(ERROR, "%s cannot run inside a BEGIN/END block", stmttype);

      /* Running VACUUM from a function would free the function context */
!     if (vacstmt->vacuum && !MemoryContextContains(QueryContext, vacstmt))
          elog(ERROR, "%s cannot be executed from a function", stmttype);
!
      /*
       * Send info about dead objects to the statistics collector
       */
***************
*** 207,215 ****
      vrl = getrels(vacstmt->relation, stmttype);

      /*
!      * Start up the vacuum cleaner.
       */
!     vacuum_init(vacstmt);

      /*
       * Process each selected relation.    We are careful to process each
--- 205,255 ----
      vrl = getrels(vacstmt->relation, stmttype);

      /*
!      *        Formerly, there was code here to prevent more than one VACUUM from
!      *        executing concurrently in the same database.  However, there's no
!      *        good reason to prevent that, and manually removing lockfiles after
!      *        a vacuum crash was a pain for dbadmins.  So, forget about lockfiles,
!      *        and just rely on the locks we grab on each target table
!      *        to ensure that there aren't two VACUUMs running on the same table
!      *        at the same time.
!      *
!      *        The strangeness with committing and starting transactions in the
!      *        init and shutdown routines is due to the fact that the vacuum cleaner
!      *        is invoked via an SQL command, and so is already executing inside
!      *        a transaction.    We need to leave ourselves in a predictable state
!      *        on entry and exit to the vacuum cleaner.  We commit the transaction
!      *        started in PostgresMain() inside vacuum_init(), and start one in
!      *        vacuum_shutdown() to match the commit waiting for us back in
!      *        PostgresMain().
       */
!     if (vacstmt->vacuum)
!     {
!         if (vacstmt->relation == NULL)
!         {
!             /*
!              * Compute the initially applicable OldestXmin and FreezeLimit
!              * XIDs, so that we can record these values at the end of the
!              * VACUUM. Note that individual tables may well be processed with
!              * newer values, but we can guarantee that no (non-shared)
!              * relations are processed with older ones.
!              *
!              * It is okay to record non-shared values in pg_database, even though
!              * we may vacuum shared relations with older cutoffs, because only
!              * the minimum of the values present in pg_database matters.  We
!              * can be sure that shared relations have at some time been
!              * vacuumed with cutoffs no worse than the global minimum; for, if
!              * there is a backend in some other DB with xmin = OLDXMIN that's
!              * determining the cutoff with which we vacuum shared relations,
!              * it is not possible for that database to have a cutoff newer
!              * than OLDXMIN recorded in pg_database.
!              */
!             vacuum_set_xid_limits(vacstmt, false,
!                                   &initialOldestXmin, &initialFreezeLimit);
!         }
!
!         /* matches the StartTransaction in PostgresMain() */
!         CommitTransactionCommand();
!     }

      /*
       * Process each selected relation.    We are careful to process each
***************
*** 225,305 ****
          if (vacstmt->vacuum)
              vacuum_rel(relid, vacstmt, RELKIND_RELATION);
          if (vacstmt->analyze)
              analyze_rel(relid, vacstmt);
      }

      /* clean up */
!     vacuum_shutdown(vacstmt);
! }
!
! /*
!  *    vacuum_init(), vacuum_shutdown() -- start up and shut down the vacuum cleaner.
!  *
!  *        Formerly, there was code here to prevent more than one VACUUM from
!  *        executing concurrently in the same database.  However, there's no
!  *        good reason to prevent that, and manually removing lockfiles after
!  *        a vacuum crash was a pain for dbadmins.  So, forget about lockfiles,
!  *        and just rely on the locks we grab on each target table
!  *        to ensure that there aren't two VACUUMs running on the same table
!  *        at the same time.
!  *
!  *        The strangeness with committing and starting transactions in the
!  *        init and shutdown routines is due to the fact that the vacuum cleaner
!  *        is invoked via an SQL command, and so is already executing inside
!  *        a transaction.    We need to leave ourselves in a predictable state
!  *        on entry and exit to the vacuum cleaner.  We commit the transaction
!  *        started in PostgresMain() inside vacuum_init(), and start one in
!  *        vacuum_shutdown() to match the commit waiting for us back in
!  *        PostgresMain().
!  */
! static void
! vacuum_init(VacuumStmt *vacstmt)
! {
!     if (vacstmt->vacuum && vacstmt->relation == NULL)
      {
!         /*
!          * Compute the initially applicable OldestXmin and FreezeLimit
!          * XIDs, so that we can record these values at the end of the
!          * VACUUM. Note that individual tables may well be processed with
!          * newer values, but we can guarantee that no (non-shared)
!          * relations are processed with older ones.
!          *
!          * It is okay to record non-shared values in pg_database, even though
!          * we may vacuum shared relations with older cutoffs, because only
!          * the minimum of the values present in pg_database matters.  We
!          * can be sure that shared relations have at some time been
!          * vacuumed with cutoffs no worse than the global minimum; for, if
!          * there is a backend in some other DB with xmin = OLDXMIN that's
!          * determining the cutoff with which we vacuum shared relations,
!          * it is not possible for that database to have a cutoff newer
!          * than OLDXMIN recorded in pg_database.
!          */
!         vacuum_set_xid_limits(vacstmt, false,
!                               &initialOldestXmin, &initialFreezeLimit);
!     }
!
!     /* matches the StartTransaction in PostgresMain() */
!     CommitTransactionCommand();
! }
!
! static void
! vacuum_shutdown(VacuumStmt *vacstmt)
! {
!     /* on entry, we are not in a transaction */

!     /* matches the CommitTransaction in PostgresMain() */
!     StartTransactionCommand();

!     /*
!      * If we did a database-wide VACUUM, update the database's pg_database
!      * row with info about the transaction IDs used, and try to truncate
!      * pg_clog.
!      */
!     if (vacstmt->vacuum && vacstmt->relation == NULL)
!     {
!         vac_update_dbstats(MyDatabaseId,
!                            initialOldestXmin, initialFreezeLimit);
!         vac_truncate_clog(initialOldestXmin, initialFreezeLimit);
      }

      /*
--- 265,303 ----
          if (vacstmt->vacuum)
              vacuum_rel(relid, vacstmt, RELKIND_RELATION);
          if (vacstmt->analyze)
+         {
+             /* If we vacuumed, use new transaction for analyze. */
+             if (vacstmt->vacuum)
+                 StartTransactionCommand();
              analyze_rel(relid, vacstmt);
+             if (vacstmt->vacuum)
+                 CommitTransactionCommand();
+ /*
+             else
+                 MemoryContextReset();
+ */
+         }
      }

      /* clean up */
!     if (vacstmt->vacuum)
      {
!         /* on entry, we are not in a transaction */

!         /* matches the CommitTransaction in PostgresMain() */
!         StartTransactionCommand();

!         /*
!          * If we did a database-wide VACUUM, update the database's pg_database
!          * row with info about the transaction IDs used, and try to truncate
!          * pg_clog.
!          */
!         if (vacstmt->relation == NULL)
!         {
!             vac_update_dbstats(MyDatabaseId,
!                                initialOldestXmin, initialFreezeLimit);
!             vac_truncate_clog(initialOldestXmin, initialFreezeLimit);
!         }
      }

      /*

Re: Analyze on large changes...

From:
Bruce Momjian
Date:
Bruce Momjian wrote:
> Tom Lane wrote:
> > I tried to repeat this:
> > 
> > regression=# begin;
> > BEGIN
> > regression=# create table foo (f1 int);
> > CREATE
> > regression=# insert into foo [ ... some data ... ]
> > 
> > regression=# analyze foo;
> > ERROR:  ANALYZE cannot run inside a BEGIN/END block
> > 
> > This seems a tad silly; I can't see any reason why ANALYZE couldn't be
> > done inside a BEGIN block.  I think this is just a hangover from
> > ANALYZE's origins as part of VACUUM.  Can anyone see a reason not to
> > allow it?
> 
> The following patch allows ANALYZE to be run inside a transaction.
> VACUUM and VACUUM ANALYZE still cannot be run inside a transaction.

One change in this patch is that because analyze now runs in the outer
transaction, I can't clear the memory used to support each analyzed
relation.  Not sure if this is an issue.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: Analyze on large changes...

From:
Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> One change in this patch is that because analyze now runs in the outer
> transaction, I can't clear the memory used to support each analyzed
> relation.  Not sure if this is an issue.

Seems like a pretty serious (not to say fatal) objection to me.  Surely
you can fix that.
        regards, tom lane


Re: Analyze on large changes...

From:
Bruce Momjian
Date:
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > One change in this patch is that because analyze now runs in the outer
> > transaction, I can't clear the memory used to support each analyzed
> > relation.  Not sure if this is an issue.
> 
> Seems like a pretty serious (not to say fatal) objection to me.  Surely
> you can fix that.

OK, suggestions?  I know CommandCounterIncrement will not help.  Should
I do more pfree'ing?

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: Analyze on large changes...

From:
Tom Lane
Date:
Bruce Momjian <pgman@candle.pha.pa.us> writes:
>> Seems like a pretty serious (not to say fatal) objection to me.  Surely
>> you can fix that.

> OK, suggestions.  I know CommandCounterIncrement will not help.  Should
> I do more pfree'ing?

No, retail pfree'ing is not a maintainable solution.  I was thinking
more along the lines of a MemoryContextResetAndDeleteChildren() on
whatever the active context is.  If that doesn't work straight off,
you might have to create a new working context and switch into it
before calling the analyze subroutine --- then deleting that context
would do the trick.
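
Distilled to a fragment, the suggested pattern (these memory-context calls
are the backend's real API, as the patch below also shows; relid and
vacstmt come from the surrounding loop):

/*
 * Give ANALYZE its own working context so per-relation memory can be
 * released without waiting for a transaction commit.
 */
MemoryContext anl_context;
MemoryContext old_context;

anl_context = AllocSetContextCreate(QueryContext, "Analyze",
                                    ALLOCSET_DEFAULT_MINSIZE,
                                    ALLOCSET_DEFAULT_INITSIZE,
                                    ALLOCSET_DEFAULT_MAXSIZE);

old_context = MemoryContextSwitchTo(anl_context);
analyze_rel(relid, vacstmt);            /* allocations land in anl_context */
MemoryContextSwitchTo(old_context);

MemoryContextResetAndDeleteChildren(anl_context);   /* release it all */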
        regards, tom lane


Re: Analyze on large changes...

From:
Bruce Momjian
Date:
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> >> Seems like a pretty serious (not to say fatal) objection to me.  Surely
> >> you can fix that.
>
> > OK, suggestions.  I know CommandCounterIncrement will not help.  Should
> > I do more pfree'ing?
>
> No, retail pfree'ing is not a maintainable solution.  I was thinking
> more along the lines of a MemoryContextResetAndDeleteChildren() on
> whatever the active context is.  If that doesn't work straight off,
> you might have to create a new working context and switch into it
> before calling the analyze subroutine --- then deleting that context
> would do the trick.

OK, how is this?

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
Index: src/backend/commands/analyze.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/commands/analyze.c,v
retrieving revision 1.35
diff -c -r1.35 analyze.c
*** src/backend/commands/analyze.c    24 May 2002 18:57:55 -0000    1.35
--- src/backend/commands/analyze.c    12 Jun 2002 03:59:45 -0000
***************
*** 156,170 ****
          elevel = DEBUG1;

      /*
-      * Begin a transaction for analyzing this relation.
-      *
-      * Note: All memory allocated during ANALYZE will live in
-      * TransactionCommandContext or a subcontext thereof, so it will all
-      * be released by transaction commit at the end of this routine.
-      */
-     StartTransactionCommand();
-
-     /*
       * Check for user-requested abort.    Note we want this to be inside a
       * transaction, so xact.c doesn't issue useless WARNING.
       */
--- 156,161 ----
***************
*** 177,186 ****
      if (!SearchSysCacheExists(RELOID,
                                ObjectIdGetDatum(relid),
                                0, 0, 0))
-     {
-         CommitTransactionCommand();
          return;
-     }

      /*
       * Open the class, getting only a read lock on it, and check
--- 168,174 ----
***************
*** 196,202 ****
              elog(WARNING, "Skipping \"%s\" --- only table or database owner can ANALYZE it",
                   RelationGetRelationName(onerel));
          relation_close(onerel, AccessShareLock);
-         CommitTransactionCommand();
          return;
      }

--- 184,189 ----
***************
*** 211,217 ****
              elog(WARNING, "Skipping \"%s\" --- can not process indexes, views or special system tables",
                   RelationGetRelationName(onerel));
          relation_close(onerel, AccessShareLock);
-         CommitTransactionCommand();
          return;
      }

--- 198,203 ----
***************
*** 222,228 ****
          strcmp(RelationGetRelationName(onerel), StatisticRelationName) == 0)
      {
          relation_close(onerel, AccessShareLock);
-         CommitTransactionCommand();
          return;
      }

--- 208,213 ----
***************
*** 283,289 ****
      if (attr_cnt <= 0)
      {
          relation_close(onerel, NoLock);
-         CommitTransactionCommand();
          return;
      }

--- 268,273 ----
***************
*** 370,378 ****
       * entries we made in pg_statistic.)
       */
      relation_close(onerel, NoLock);
-
-     /* Commit and release working memory */
-     CommitTransactionCommand();
  }

  /*
--- 354,359 ----
Index: src/backend/commands/vacuum.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/commands/vacuum.c,v
retrieving revision 1.226
diff -c -r1.226 vacuum.c
*** src/backend/commands/vacuum.c    24 May 2002 18:57:56 -0000    1.226
--- src/backend/commands/vacuum.c    12 Jun 2002 03:59:54 -0000
***************
*** 110,117 ****


  /* non-export function prototypes */
- static void vacuum_init(VacuumStmt *vacstmt);
- static void vacuum_shutdown(VacuumStmt *vacstmt);
  static List *getrels(const RangeVar *vacrel, const char *stmttype);
  static void vac_update_dbstats(Oid dbid,
                     TransactionId vacuumXID,
--- 110,115 ----
***************
*** 160,165 ****
--- 158,165 ----
  void
  vacuum(VacuumStmt *vacstmt)
  {
+     MemoryContext anl_context,
+                   old_context;
      const char *stmttype = vacstmt->vacuum ? "VACUUM" : "ANALYZE";
      List       *vrl,
                 *cur;
***************
*** 178,190 ****
       * user's transaction too, which would certainly not be the desired
       * behavior.
       */
!     if (IsTransactionBlock())
          elog(ERROR, "%s cannot run inside a BEGIN/END block", stmttype);

      /* Running VACUUM from a function would free the function context */
!     if (!MemoryContextContains(QueryContext, vacstmt))
          elog(ERROR, "%s cannot be executed from a function", stmttype);
!
      /*
       * Send info about dead objects to the statistics collector
       */
--- 178,190 ----
       * user's transaction too, which would certainly not be the desired
       * behavior.
       */
!     if (vacstmt->vacuum && IsTransactionBlock())
          elog(ERROR, "%s cannot run inside a BEGIN/END block", stmttype);

      /* Running VACUUM from a function would free the function context */
!     if (vacstmt->vacuum && !MemoryContextContains(QueryContext, vacstmt))
          elog(ERROR, "%s cannot be executed from a function", stmttype);
!
      /*
       * Send info about dead objects to the statistics collector
       */
***************
*** 203,215 ****
                                          ALLOCSET_DEFAULT_INITSIZE,
                                          ALLOCSET_DEFAULT_MAXSIZE);

      /* Build list of relations to process (note this lives in vac_context) */
      vrl = getrels(vacstmt->relation, stmttype);

      /*
!      * Start up the vacuum cleaner.
       */
!     vacuum_init(vacstmt);

      /*
       * Process each selected relation.    We are careful to process each
--- 203,264 ----
                                          ALLOCSET_DEFAULT_INITSIZE,
                                          ALLOCSET_DEFAULT_MAXSIZE);

+     if (vacstmt->analyze && !vacstmt->vacuum)
+         anl_context = AllocSetContextCreate(QueryContext,
+                                             "Analyze",
+                                             ALLOCSET_DEFAULT_MINSIZE,
+                                             ALLOCSET_DEFAULT_INITSIZE,
+                                             ALLOCSET_DEFAULT_MAXSIZE);
+
      /* Build list of relations to process (note this lives in vac_context) */
      vrl = getrels(vacstmt->relation, stmttype);

      /*
!      *        Formerly, there was code here to prevent more than one VACUUM from
!      *        executing concurrently in the same database.  However, there's no
!      *        good reason to prevent that, and manually removing lockfiles after
!      *        a vacuum crash was a pain for dbadmins.  So, forget about lockfiles,
!      *        and just rely on the locks we grab on each target table
!      *        to ensure that there aren't two VACUUMs running on the same table
!      *        at the same time.
!      *
!      *        The strangeness with committing and starting transactions in the
!      *        init and shutdown routines is due to the fact that the vacuum cleaner
!      *        is invoked via an SQL command, and so is already executing inside
!      *        a transaction.    We need to leave ourselves in a predictable state
!      *        on entry and exit to the vacuum cleaner.  We commit the transaction
!      *        started in PostgresMain() inside vacuum_init(), and start one in
!      *        vacuum_shutdown() to match the commit waiting for us back in
!      *        PostgresMain().
       */
!     if (vacstmt->vacuum)
!     {
!         if (vacstmt->relation == NULL)
!         {
!             /*
!              * Compute the initially applicable OldestXmin and FreezeLimit
!              * XIDs, so that we can record these values at the end of the
!              * VACUUM. Note that individual tables may well be processed with
!              * newer values, but we can guarantee that no (non-shared)
!              * relations are processed with older ones.
!              *
!              * It is okay to record non-shared values in pg_database, even though
!              * we may vacuum shared relations with older cutoffs, because only
!              * the minimum of the values present in pg_database matters.  We
!              * can be sure that shared relations have at some time been
!              * vacuumed with cutoffs no worse than the global minimum; for, if
!              * there is a backend in some other DB with xmin = OLDXMIN that's
!              * determining the cutoff with which we vacuum shared relations,
!              * it is not possible for that database to have a cutoff newer
!              * than OLDXMIN recorded in pg_database.
!              */
!             vacuum_set_xid_limits(vacstmt, false,
!                                   &initialOldestXmin, &initialFreezeLimit);
!         }
!
!         /* matches the StartTransaction in PostgresMain() */
!         CommitTransactionCommand();
!     }

      /*
       * Process each selected relation.    We are careful to process each
***************
*** 225,305 ****
          if (vacstmt->vacuum)
              vacuum_rel(relid, vacstmt, RELKIND_RELATION);
          if (vacstmt->analyze)
              analyze_rel(relid, vacstmt);
      }

      /* clean up */
!     vacuum_shutdown(vacstmt);
! }
!
! /*
!  *    vacuum_init(), vacuum_shutdown() -- start up and shut down the vacuum cleaner.
!  *
!  *        Formerly, there was code here to prevent more than one VACUUM from
!  *        executing concurrently in the same database.  However, there's no
!  *        good reason to prevent that, and manually removing lockfiles after
!  *        a vacuum crash was a pain for dbadmins.  So, forget about lockfiles,
!  *        and just rely on the locks we grab on each target table
!  *        to ensure that there aren't two VACUUMs running on the same table
!  *        at the same time.
!  *
!  *        The strangeness with committing and starting transactions in the
!  *        init and shutdown routines is due to the fact that the vacuum cleaner
!  *        is invoked via an SQL command, and so is already executing inside
!  *        a transaction.    We need to leave ourselves in a predictable state
!  *        on entry and exit to the vacuum cleaner.  We commit the transaction
!  *        started in PostgresMain() inside vacuum_init(), and start one in
!  *        vacuum_shutdown() to match the commit waiting for us back in
!  *        PostgresMain().
!  */
! static void
! vacuum_init(VacuumStmt *vacstmt)
! {
!     if (vacstmt->vacuum && vacstmt->relation == NULL)
      {
!         /*
!          * Compute the initially applicable OldestXmin and FreezeLimit
!          * XIDs, so that we can record these values at the end of the
!          * VACUUM. Note that individual tables may well be processed with
!          * newer values, but we can guarantee that no (non-shared)
!          * relations are processed with older ones.
!          *
!          * It is okay to record non-shared values in pg_database, even though
!          * we may vacuum shared relations with older cutoffs, because only
!          * the minimum of the values present in pg_database matters.  We
!          * can be sure that shared relations have at some time been
!          * vacuumed with cutoffs no worse than the global minimum; for, if
!          * there is a backend in some other DB with xmin = OLDXMIN that's
!          * determining the cutoff with which we vacuum shared relations,
!          * it is not possible for that database to have a cutoff newer
!          * than OLDXMIN recorded in pg_database.
!          */
!         vacuum_set_xid_limits(vacstmt, false,
!                               &initialOldestXmin, &initialFreezeLimit);
!     }
!
!     /* matches the StartTransaction in PostgresMain() */
!     CommitTransactionCommand();
! }

! static void
! vacuum_shutdown(VacuumStmt *vacstmt)
! {
!     /* on entry, we are not in a transaction */
!
!     /* matches the CommitTransaction in PostgresMain() */
!     StartTransactionCommand();

!     /*
!      * If we did a database-wide VACUUM, update the database's pg_database
!      * row with info about the transaction IDs used, and try to truncate
!      * pg_clog.
!      */
!     if (vacstmt->vacuum && vacstmt->relation == NULL)
!     {
!         vac_update_dbstats(MyDatabaseId,
!                            initialOldestXmin, initialFreezeLimit);
!         vac_truncate_clog(initialOldestXmin, initialFreezeLimit);
      }

      /*
--- 274,317 ----
          if (vacstmt->vacuum)
              vacuum_rel(relid, vacstmt, RELKIND_RELATION);
          if (vacstmt->analyze)
+         {
+             /* If we vacuumed, use new transaction for analyze. */
+             if (vacstmt->vacuum)
+                 StartTransactionCommand();
+             else
+                 old_context = MemoryContextSwitchTo(anl_context);
+
              analyze_rel(relid, vacstmt);
+
+             if (vacstmt->vacuum)
+                 CommitTransactionCommand();
+             else
+             {
+                 MemoryContextResetAndDeleteChildren(anl_context);
+                 MemoryContextSwitchTo(old_context);
+             }
+         }
      }

      /* clean up */
!     if (vacstmt->vacuum)
      {
!         /* on entry, we are not in a transaction */

!         /* matches the CommitTransaction in PostgresMain() */
!         StartTransactionCommand();

!         /*
!          * If we did a database-wide VACUUM, update the database's pg_database
!          * row with info about the transaction IDs used, and try to truncate
!          * pg_clog.
!          */
!         if (vacstmt->relation == NULL)
!         {
!             vac_update_dbstats(MyDatabaseId,
!                                initialOldestXmin, initialFreezeLimit);
!             vac_truncate_clog(initialOldestXmin, initialFreezeLimit);
!         }
      }

      /*
***************
*** 309,314 ****
--- 321,330 ----
       */
      MemoryContextDelete(vac_context);
      vac_context = NULL;
+
+     if (vacstmt->analyze && !vacstmt->vacuum)
+         MemoryContextDelete(anl_context);
+
  }

  /*