Discussion: Savepoints


Savepoints

From: Bruce Momjian
Date:
I have talked in the past about a possible implementation of
savepoints/nested transactions.  I would like to more formally outline
my ideas below.

We have talked about using WAL for such a purpose, but that requires WAL
files to remain for the life of a transaction, which seems unacceptable.
Other database systems do that, and it is a pain for administrators.  I
realized we could do some sort of WAL compaction, but that seems quite
complex too.

Basically, under my plan, WAL would be unchanged.  WAL's function is
crash recovery, and it would retain that.  There would also be no
on-disk changes.  I would use the command counter in certain cases to
identify savepoints.

My idea is to keep savepoint undo information in a private area per
backend, either in memory or on disk.  We can either save the
relid/tids of modified rows, or, if there are too many, discard the
saved ones and just remember the modified relids.  On rollback to a
savepoint, either clean up the modified relid/tids, or sequentially
scan through the relation and clean up all the tuples that have our
transaction id and command counters that are part of the undone
savepoint.

It seems marking undo savepoint rows with a fixed aborted transaction id
would be the easiest solution.

Of course, we only remember modified rows when we are inside savepoints,
and only undo them when we roll back to a savepoint.  Transaction
processing remains the same.
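As a rough illustration of the bookkeeping described above, here is a minimal sketch (all names and the relid/tid threshold are hypothetical, and Python stands in for the backend's C):

```python
# Hypothetical sketch of per-backend savepoint undo tracking.
# Below a threshold we remember exact (relid, tid) pairs; past it we
# degrade to remembering only the modified relids, which on rollback
# forces a sequential scan of those relations for tuples carrying our
# transaction id and an undone command counter.

MAX_TIDS_PER_REL = 3  # tiny for illustration; a real limit would be larger

class SavepointUndo:
    def __init__(self):
        # relid -> set of tids, or None meaning "too many, relid-only"
        self.tids_by_rel = {}

    def record(self, relid, tid):
        tids = self.tids_by_rel.setdefault(relid, set())
        if tids is None:
            return  # already degraded to relid-only tracking
        tids.add(tid)
        if len(tids) > MAX_TIDS_PER_REL:
            self.tids_by_rel[relid] = None  # discard tids, keep relid only

    def rollback_plan(self):
        # Which relations get a targeted tid cleanup, and which need a
        # sequential scan to find our aborted savepoint rows.
        targeted = {r: t for r, t in self.tids_by_rel.items() if t is not None}
        seqscan = [r for r, t in self.tids_by_rel.items() if t is None]
        return targeted, seqscan
```

This is only a data-structure sketch of the proposal's two cases, not an implementation of the tuple-marking itself.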

There is no reason for other backends to be able to see savepoint undo
information, and keeping it private greatly simplifies the
implementation.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: Savepoints

From: Haroldo Stenger
Date:
Bruce Momjian wrote:
> 
> Basically, under my plan, WAL would be unchanged.  WAL's function is
> crash recovery, and it would retain that.  There would also be no
> on-disk changes.  I would use the command counter in certain cases to
> identify savepoints.

Here is a pointer to the previous August thread, where your original
proposal was posted and some WAL/not-WAL discussion took place, so as not
to repeat the points already made. It's the Google archive, just for fun
and to avoid overloading hub.org ;-)


http://groups.google.com/groups?hl=en&threadm=200108050432.f754Wdo11696%40candle.pha.pa.us&rnum=1&prev=/groups%3Fhl%3Den%26selm%3D200108050432.f754Wdo11696%2540candle.pha.pa.us

Regards,
Haroldo.


Re: Savepoints

From: "Mikheev, Vadim"
Date:
> I have talked in the past about a possible implementation of
> savepoints/nested transactions.  I would like to more formally outline
> my ideas below.

Well, I would like to do the same -:)

> ...
> There is no reason for other backends to be able to see savepoint undo
> information, and keeping it private greatly simplifies the
> implementation.

Yes... and requires additional memory/disk space: we keep old records
in data files and we'll store them again...

How about: use overwriting smgr + put old records into rollback
segments - RS - (you have to keep them somewhere till TX's running
anyway) + use WAL only as REDO log (RS will be used to rollback TX'
changes and WAL will be used for RS/data files recovery).
Something like what Oracle does.
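For illustration, a toy sketch of the overwriting-update path described above: the old row version is pushed into a rollback segment (RS) before the heap slot is overwritten in place, so rollback can replay RS entries newest-first (all structures hypothetical; Python stands in for C):

```python
# Hypothetical sketch: overwriting smgr + rollback segment.
# The heap is keyed by (relid, tid); each slot holds (value, xid).

class RollbackSegment:
    def __init__(self):
        self.entries = []  # (relid, tid, old_version) in write order

    def push(self, relid, tid, old_version):
        self.entries.append((relid, tid, old_version))

def overwrite_update(heap, rs, relid, tid, new_value, xid):
    old = heap[(relid, tid)]
    rs.push(relid, tid, old)               # save old version for rollback/MVCC
    heap[(relid, tid)] = (new_value, xid)  # overwrite the heap slot in place

def rollback(heap, rs):
    # Undo the transaction by replaying RS entries newest-first.
    for relid, tid, old in reversed(rs.entries):
        heap[(relid, tid)] = old
    rs.entries.clear()
```

In this scheme WAL would only need to redo the data-file and RS writes; it is the RS, not WAL, that drives rollback.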

Vadim


Re: Savepoints

From: Bruce Momjian
Date:
Mikheev, Vadim wrote:
> > I have talked in the past about a possible implementation of
> > savepoints/nested transactions.  I would like to more formally outline
> > my ideas below.
> 
> Well, I would like to do the same -:)

Good.

> > ...
> > There is no reason for other backends to be able to see savepoint undo
> > information, and keeping it private greatly simplifies the
> > implementation.
> 
> Yes... and requires additional memory/disk space: we keep old records
> in data files and we'll store them again...

I was suggesting keeping only relid/tid or in some cases only relid. 
Seems like one or the other will fit all needs:  relid/tid for update of
a few rows, relid for many rows updated in the same table.  I saw no
need to store the actual data.

> How about: use overwriting smgr + put old records into rollback
> segments - RS - (you have to keep them somewhere till TX's running
> anyway) + use WAL only as REDO log (RS will be used to rollback TX'
> changes and WAL will be used for RS/data files recovery).
> Something like what Oracle does.

Why record the old data rows rather than the tids?  While the
transaction is running, the rows can't be moved anyway.  Also, why store
them in a shared area.  That has additional requirements because one old
transaction can require all transactions to keep their stuff around. 
Why not just make it a private data file for each backend?



Re: Savepoints

From: Bruce Momjian
Date:
> > There is no reason for other backends to be able to see savepoint undo
> > information, and keeping it private greatly simplifies the
> > implementation.
> 
> Yes... and requires additional memory/disk space: we keep old records
> in data files and we'll store them again...
> 
> How about: use overwriting smgr + put old records into rollback
> segments - RS - (you have to keep them somewhere till TX's running
> anyway) + use WAL only as REDO log (RS will be used to rollback TX'
> changes and WAL will be used for RS/data files recovery).
> Something like what Oracle does.

I am sorry.  I see what you are saying now.  I missed the words
"overwriting smgr".  You are suggesting going to an overwriting storage
manager.  Is this to be done only because of savepoints?  It doesn't seem
worth it when I have a possible solution without such a drastic change.
Also, an overwriting storage manager will require MVCC to read through
the rollback segments to get accurate visibility, right?



Re: Savepoints

From: "Mikheev, Vadim"
Date:
> > How about: use overwriting smgr + put old records into rollback
> > segments - RS - (you have to keep them somewhere till TX's running
> > anyway) + use WAL only as REDO log (RS will be used to rollback TX'
> > changes and WAL will be used for RS/data files recovery).
> > Something like what Oracle does.
> 
> I am sorry. I see what you are saying now. I missed the words

And I'm sorry for missing your notes about storing relid+tid only.

> "overwriting smgr". You are suggesting going to an overwriting
> storage manager. Is this to be done only because of savepoints.

No. One point I made a few months ago (and never got objections)
is: why keep old data in the data files sooooo long?
Imagine a long-running TX (eg pg_dump). Why must other TX-s read,
again and again, old data that is completely useless for them and
kept only for pg_dump?

> Doesn't seem worth it when I have a possible solution without
> such a drastic change.
> Also, overwriting storage manager will require MVCC to read
> through there to get accurate MVCC visibility, right?

Right... just like now non-overwriting smgr requires *ALL*
TX-s to read old data in data files. But with overwriting smgr
TX will read RS only when it is required and as far (much) as
it is required.

Simple solutions are not always the best ones.
Compare Oracle and InterBase. Both have MVCC.
Their smgr-s are different. Which RDBMS is more cool?
Why doesn't Oracle use the simpler non-overwriting smgr
(as InterBase... and we do)?

Vadim


Re: Savepoints

From: Don Baccus
Date:
Bruce Momjian wrote:


> I am sorry.  I see what you are saying now.  I missed the words
> "overwriting smgr".  You are suggesting going to an overwriting storage
> manager.


Overwriting storage managers don't suffer from unbounded growth of 
datafiles until garbage collection (vacuum) is performed.  In fact, 
there's no need for a vacuum-style utility.  The rollback segments only 
need to keep around enough past history to rollback transactions that 
are executing.

Of course, then the size of your transactions is limited by the size of 
your rollback segments, which in Oracle are fixed in length when you 
build your database (there are ways to change this when you figure out 
that you didn't pick a good number when creating it).
> Is this to be done only because of savepoints?

Not in traditional storage managers such as Oracle uses.  The complexity 
of managing visibility and the like is traded off against the fact that 
you're never stuck needing to garbage collect a database that 
occupies a roomful of disks.

It's a trade-off.  PG's current storage manager seems to work awfully 
well in a lot of common database scenarios, and Tom's new vacuum is 
meant to help mitigate the drawbacks.  But overwriting storage 
managers certainly have their advantages, too.
> Doesn't seem
> worth it when I have a possible solution without such a drastic change.
> Also, overwriting storage manager will require MVCC to read through
> there to get accurate MVCC visibility, right?


Yep...

-- 
Don Baccus
Portland, OR
http://donb.photo.net, http://birdnotes.net, http://openacs.org



Re: Savepoints

From: "Hiroshi Inoue"
Date:
> -----Original Message-----
> From: Mikheev, Vadim
> 
> How about: use overwriting smgr + put old records into rollback
> segments - RS - (you have to keep them somewhere till TX's running
> anyway) + use WAL only as REDO log (RS will be used to rollback TX'
> changes and WAL will be used for RS/data files recovery).
> Something like what Oracle does.

As long as we use the non-overwriting storage manager:
1) Rollback (of data) isn't needed in case of a db crash.
2) Rollback (of data) isn't needed to cancel a transaction entirely.
3) We don't need to mind the transaction size so much.

We can't use the db any longer if a REDO recovery fails now.
Under an overwriting smgr we can't use the db any longer either
if a rollback fails. How could PG be made no less reliable than now?

regards,
Hiroshi Inoue


Re: Savepoints

From: "Mikheev, Vadim"
Date:
> > How about: use overwriting smgr + put old records into rollback
> > segments - RS - (you have to keep them somewhere till TX's running
> > anyway) + use WAL only as REDO log (RS will be used to rollback TX'
> > changes and WAL will be used for RS/data files recovery).
> > Something like what Oracle does.
> 
> As long as we use no overwriting manager
> 1) Rollback(data) isn't needed in case of a db crash.
> 2) Rollback(data) isn't needed to cancel a transaction entirely.

-1) But vacuum must read a huge amount of data to remove the dirt.
-2) But TX-s must read data they are not interested in at all.

> 3) We don't need to mind the transaction size so much.

-3) The same with an overwriting smgr and WAL used *only as a REDO log*:
we are not required to keep WAL files for the duration of a transaction.
As soon as the server knows that the changes logged in some WAL file
have been applied to the data files and RS on disk (and archived, for
WAL-based BAR), that file may be reused/removed. Old data will still
occupy space in the RS, but their space in the data files will be
available for reuse.

> We can't use the db any longer if a REDO recovery fails now.

Reset WAL and use/dump it. Annoying? Agreed. Fix the bugs and/or
use good RAM - whatever caused the problem with restart.

> Under overwriting smgr we can't use the db any longer either
> if rollback fails.

Why should it fail? Bugs? Fix them.

> How could PG be not less reliable than now ?

Is today's PG more reliable than Oracle, Informix, DB2?

Vadim


Re: Savepoints

From: Bruce Momjian
Date:
OK, I have had time to think about this, and I think I can put the two
proposals into perspective.  I will use Vadim's terminology.

In our current setup, rollback/undo data is kept in the same file as our
live data.  This data is used for two purposes, one, for rollback of
transactions, and perhaps subtransactions in the future, and second, for
MVCC visibility for backends making changes.

So, it seems the real question is whether a database modification should
write the old data into a separate rollback segment and modify the heap
data, or just create a new row and require the old row to be removed
later by vacuum.

Let's look at this behavior without MVCC.  In such cases, if someone
tries to read a modified row, it will block and wait for the modifying
backend to commit or roll back, at which point it will continue.  In such
cases, there is no reason for the waiting transaction to read the old
data in the redo segment because it can't continue anyway.

Now, with MVCC, the backend has to read through the redo segment to get
the original data value for that row.

Now, while rollback segments do help with cleaning out old UPDATE rows,
how does it improve DELETE performance?  Seems it would just mark it as
expired like we do now.

One objection I always had to redo segments was that if I start a
transaction in the morning and walk away, none of the redo segments can
be recycled.  I was going to ask if we can force some type of redo
segment compaction to keep old active rows and delete rows no longer
visible to any transaction.  However, I now realize that our VACUUM has
the same problem.  Tuples with XID >= GetOldestXmin() are not recycled,
meaning we have this problem in our current implementation too.  (I
wonder if our vacuum could be smarter about knowing which rows are
visible, perhaps by creating a sorted list of xid's and doing a binary
search on the list to determine visibility.)
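The parenthetical idea might look something like this minimal sketch (hypothetical names; a sorted list of active xids consulted by binary search, with no claim that this matches vacuum's real visibility rules):

```python
import bisect

def can_recycle(dead_xid, active_xids_sorted):
    # Toy visibility test for vacuum: a dead tuple's xid is checked
    # against the sorted list of transaction ids still active.  If the
    # deleting/expiring xid is no longer active, the tuple version is
    # treated as reclaimable; membership is found by binary search
    # rather than a linear scan.
    i = bisect.bisect_left(active_xids_sorted, dead_xid)
    still_active = (i < len(active_xids_sorted)
                    and active_xids_sorted[i] == dead_xid)
    return not still_active
```

A real check would also have to consider each open snapshot's bounds, not just xid membership; this only illustrates the binary-search lookup itself.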

So, I guess the issue is, do we want to keep redo information in the
main table, or split it out into redo segments?  Certainly we have to
eliminate the Oracle restriction that redo segment size is fixed at
install time.

The advantage of redo segments is that hopefully we don't have
transactions reading through irrelevant undo information.  The
disadvantage is that right now we have redo information grouped into
table files where a sequential scan can be performed.  (Index scans of
redo info are a performance problem currently.)  We would have to
somehow efficiently access redo information once it is grouped into the
redo segments.  Perhaps a hash based on relid would help here.  Another
disadvantage is concurrency.  When we start modifying heap data in
place, we have to prevent other backends from seeing that modification
while we move the old data to the redo segment.

I guess my feeling is that if we can get vacuum to happen automatically,
how is our current non-overwriting storage manager different from redo
segments?

One big advantage of redo segments would be that right now, if someone
updates a row repeatedly, there are lots of heap versions of the row
that are difficult to shrink in the table, while if they are in the redo
segments, we can more efficiently remove them, and there is only one heap
row.

How is recovery handled with rollback segments?  Do we write old and new
data to WAL?   We just write new data to WAL now, right?  Do we fsync
rollback segments?

Have I outlined this accurately?

---------------------------------------------------------------------------

Mikheev, Vadim wrote:
> > > How about: use overwriting smgr + put old records into rollback
> > > segments - RS - (you have to keep them somewhere till TX's running
> > > anyway) + use WAL only as REDO log (RS will be used to rollback TX'
> > > changes and WAL will be used for RS/data files recovery).
> > > Something like what Oracle does.
> > 
> > I am sorry. I see what you are saying now. I missed the words
> 
> And I'm sorry for missing your notes about storing relid+tid only.
> 
> > "overwriting smgr". You are suggesting going to an overwriting
> > storage manager. Is this to be done only because of savepoints.
> 
> No. One point I made a few months ago (and never got objections)
> is: why keep old data in the data files sooooo long?
> Imagine long running TX (eg pg_dump). Why other TX-s must read
> again and again completely useless (for them) old data we keep
> for pg_dump?
> 
> > Doesn't seem worth it when I have a possible solution without
> > such a drastic change.
> > Also, overwriting storage manager will require MVCC to read
> > through there to get accurate MVCC visibility, right?
> 
> Right... just like now non-overwriting smgr requires *ALL*
> TX-s to read old data in data files. But with overwriting smgr
> TX will read RS only when it is required and as far (much) as
> it is required.
> 
> Simple solutions are not always the best ones.
> Compare Oracle and InterBase. Both have MVCC.
> Smgr-s are different. What RDBMS is more cool?
> Why doesn't Oracle use more simple non-overwriting smgr
> (as InterBase... and we do)?
> 
> Vadim
> 



Re: Savepoints

From: Hiroshi Inoue
Date:
"Mikheev, Vadim" wrote:
> 
> > > How about: use overwriting smgr + put old records into rollback
> > > segments - RS - (you have to keep them somewhere till TX's running
> > > anyway) + use WAL only as REDO log (RS will be used to rollback TX'
> > > changes and WAL will be used for RS/data files recovery).
> > > Something like what Oracle does.
> >
> > As long as we use no overwriting manager
> > 1) Rollback(data) isn't needed in case of a db crash.
> > 2) Rollback(data) isn't needed to cancel a transaction entirely.
> 
> -1) But vacuum must read a huge amount of data to remove dirt.
> -2) But TX-s must read data they are not interested at all.
> 
> > 3) We don't need to mind the transaction size so much.
> 
> -3) The same with overwriting smgr and WAL used *only as REDO log*:

The larger the RS becomes, the longer it would take to cancel a
transaction, whereas it is executed in a moment under the
non-overwriting smgr. And, for example, if the RS exhausted all disk
space, is PG really safe? Other backends would also fail because they
couldn't write to the RS any more. Many transactions would execute
UNDO operations simultaneously, but there would be no space to write
WAL (UNDO operations must be written to WAL also), and the PG
system would abort. And could PG restart under such a situation?
Even though there's a way to recover from the situation, I
think we should avoid such dangerous situations from the
first. Basically, recovery operations should never fail.
> 
> > We can't use the db any longer if a REDO recovery fails now.
> 
> Reset WAL and use/dump it. Annoying? Agreed. Fix bugs and/or
> use good RAM - whatever caused problem with restart.

As I already mentioned recovery operations should never fail.
> 
> > Under overwriting smgr we can't use the db any longer either
> > if rollback fails.
> 
> Why should it fail? Bugs? Fix them.

Rollback operations are executed much more often than
REDO recovery, and it is hard to fix such bugs once PG
has been released. Most people in such trouble have no
time to pursue the cause. In reality I replied to the
PG restart troubles twice (with --wal-debug and pg_resetxlog
suggestions) in Japan but got no further replies.

> 
> > How could PG be not less reliable than now ?
> 
> Is today' RG more reliable than Oracle, Informix, DB2?

I have never been, and would never be, optimistic
about recovery. Is 7.1 more reliable than 7.0 from the
recovery POV? I see no reason why an overwriting smgr is
more reliable than a non-overwriting smgr as far as recovery goes.

regards,
Hiroshi Inoue


Re: Savepoints

From: "Zeugswetter Andreas SB SD"
Date:
Vadim wrote:
> How about: use overwriting smgr + put old records into rollback
> segments - RS - (you have to keep them somewhere till TX's running
> anyway) + use WAL only as REDO log (RS will be used to rollback TX'
> changes and WAL will be used for RS/data files recovery).
> Something like what Oracle does.

We have all the info we need in WAL and in the old rows,
so why would you want to write them to RS?
You only need RS for an overwriting smgr.

Andreas


Re: Savepoints

From: "Zeugswetter Andreas SB SD"
Date:
> Now, with MVCC, the backend has to read through the redo segment to get

You mean rollback segment, but ...

> the original data value for that row.

It will only need to be looked up if the row is currently being modified
by a not-yet-committed txn (at least in the default read committed mode).

> 
> Now, while rollback segments do help with cleaning out old UPDATE rows,
> how does it improve DELETE performance?  Seems it would just mark it as
> expired like we do now.

A delete would probably be:
1. mark the original deleted and write the whole row to the RS

I don't think you would like to mix looking up deleted rows in the heap
but updated rows in the RS.

Andreas

PS: not that I like overwriting with MVCC now.
If you think of VACUUM as garbage collection, PG is highly trendy with
the non-overwriting smgr.


Re: Savepoints

From: "Zeugswetter Andreas SB SD"
Date:
> > > How about: use overwriting smgr + put old records into rollback
>                    ^^^^^^^^^^^^^^^^
> > > segments - RS - (you have to keep them somewhere till TX's running
> > > anyway) + use WAL only as REDO log (RS will be used to rollback TX'
> > > changes and WAL will be used for RS/data files recovery).
> > > Something like what Oracle does.
> > 
> > We have all the info we need in WAL and in the old rows,
> > why would you want to write them to RS ?
> > You only need RS for overwriting smgr.
> 
> This is what I'm saying - implement Overwriting smgr...

Yes, I am sorry; I am catching up on email and had not read Bruce's
comment (nor yours correctly) :-(

I was also long in the pro-overwriting camp, because I am used to
non-MVCC dbs like DB/2 and Informix (which I like very much).
But I am starting to doubt that overwriting is really so good for
an MVCC db. And I don't think PG wants to switch to non-MVCC :-)

Imho it would only need a much more aggressive VACUUM backend
(aka garbage collector :-). Maybe it could be designed to sniff the
redo log (buffer) to get a hint at what to actually clean out next.

Andreas


Re: Savepoints

From: Bruce Momjian
Date:
Zeugswetter Andreas SB SD wrote:
> 
> > Now, with MVCC, the backend has to read through the redo segment to get
> 
> You mean rollback segment, but ...


Sorry, yes.  I get redo/undo/rollback mixed up sometimes. :-)

> > the original data value for that row.
> 
> It will only need to be looked up if the row is currently being modified
> by a not-yet-committed txn (at least in the default read committed mode).

Uh, not really.  The transaction may have completed after my transaction
started, meaning that even though it looks committed, it is not visible
to me.  Most MVCC visibility checks will require an undo lookup.
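That visibility rule can be illustrated with a toy snapshot check (hypothetical names, not PG's actual visibility code): a row written by a transaction that committed after our snapshot was taken is not visible to us, even though the writer is no longer running:

```python
def visible_to_snapshot(row_xid, snapshot_xmax, snapshot_in_progress):
    # row_xid: xid of the committed transaction that wrote the row.
    # snapshot_xmax: first xid that had not yet started when our
    #                snapshot was taken.
    # snapshot_in_progress: xids that were running at snapshot time.
    if row_xid >= snapshot_xmax:
        return False  # committed after our snapshot started: invisible
    return row_xid not in snapshot_in_progress
```

So under an overwriting smgr, any row failing this test would force a walk into the rollback segment for an older version, not just rows locked by uncommitted writers.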

> 
> > 
> > Now, while rollback segments do help with cleaning out old UPDATE rows,
> > how does it improve DELETE performance?  Seems it would just mark it as
> > expired like we do now.
> 
> delete would probably be: 
> 1. mark original deleted and write whole row to RS
> 
> I don't think you would like to mix looking up deleted rows in heap
> but updated rows in RS

Yes, so really the overwriting is only a big win for UPDATE.  Right now,
UPDATE is DELETE/INSERT, and that DELETE makes MVCC happy. :-)

My whole goal was to simplify this so we can see the differences.


> PS: not that I like overwriting with MVCC now.
> If you think of VACUUM as garbage collection, PG is highly trendy with
> the non-overwriting smgr.

Yes, that is basically what it is now, a garbage collector that collects
in heap rather than in undo.
