Discussion: Vacuum allows read access?
I could have sworn I saw a post where you (Tom Lane) figured out how to
get tables to be read-only while performing a vacuum rather than being
completely locked.

Is this true? Is it in the -PATCHES branch?

thanks,
--
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
"I have the heart of a child; I keep it in a jar on my desk."
> I could have sworn I saw a post where you (Tom Lane) figured out how
> to get tables to be read-only while performing a vacuum rather than
> being completely locked.
>
> Is this true? Is it in the -PATCHES branch?

In the current sources, analyze allows read access. I don't think
vacuum allows any other access, because the rows are moving in the file.

--
  Bruce Momjian                     |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us            |  (610) 853-3000
  + If your life is a hard drive,   |  830 Blythe Avenue
  + Christ can be your backup.      |  Drexel Hill, Pennsylvania 19026
On Fri, 21 Jul 2000, Alfred Perlstein wrote:

> I could have sworn I saw a post where you (Tom Lane) figured out how
> to get tables to be read-only while performing a vacuum rather than
> being completely locked.
>
> Is this true? Is it in the -PATCHES branch?

The last change that I recall was by Bruce ... he got it so that it
only lock'd one table at a time, and only held that lock for as long as
was required, instead of for the duration of the vacuum ...
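The per-table scheme Marc describes can be sketched in a few lines of
Python. This is a toy illustration only, with invented names, not
actual PostgreSQL source: the point is simply that each table's
exclusive lock is taken just before that table is vacuumed and dropped
before moving on, rather than being held for the entire run.

```python
# Toy illustration (invented names, not PostgreSQL code): VACUUM
# processes tables one at a time, holding each table's exclusive lock
# only while that table is actually being worked on.
def vacuum_all(tables, events):
    for t in tables:
        events.append(("lock", t))        # exclusive lock on t only
        try:
            events.append(("vacuum", t))  # move tuples, reclaim space
        finally:
            events.append(("unlock", t))  # released before the next table

events = []
vacuum_all(["orders", "customers"], events)
# "orders" is already unlocked before "customers" is ever locked.
assert events.index(("unlock", "orders")) < events.index(("lock", "customers"))
```

The practical effect is that a long multi-table vacuum run only ever
blocks access to the single table currently being processed.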
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> In the current sources, analyze allows read-access. I don't think
> vacuum allows any other access because the rows are moving in the file.

VACUUM *must* have exclusive lock.

ANALYZE actually is only a reader (AccessShareLock) and does not lock
out either reading or writing in current sources.

			regards, tom lane
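The distinction Tom draws follows directly from the lock conflict
table. A small Python sketch, using the real PostgreSQL lock-mode names
but only the subset of the documented conflict matrix relevant here,
shows why ANALYZE coexists with readers and writers while a VACUUM-style
AccessExclusiveLock blocks everything:

```python
# Simplified subset of PostgreSQL's documented lock conflict matrix.
# Mode names are real; only the modes mentioned in this thread are
# covered, so this is an illustration, not the full table.
CONFLICTS = {
    "AccessShareLock":     {"AccessExclusiveLock"},
    "RowExclusiveLock":    {"ShareLock", "ShareRowExclusiveLock",
                            "ExclusiveLock", "AccessExclusiveLock"},
    "AccessExclusiveLock": {"AccessShareLock", "RowShareLock",
                            "RowExclusiveLock", "ShareLock",
                            "ShareRowExclusiveLock", "ExclusiveLock",
                            "AccessExclusiveLock"},
}

def conflicts(held, requested):
    """True if an already-held lock mode blocks the requested mode."""
    return requested in CONFLICTS.get(held, set())

# ANALYZE (AccessShareLock) blocks neither readers nor writers:
assert not conflicts("AccessShareLock", "AccessShareLock")
assert not conflicts("AccessShareLock", "RowExclusiveLock")
# An AccessExclusiveLock blocks everything, even a plain SELECT:
assert conflicts("AccessExclusiveLock", "AccessShareLock")
```

AccessShareLock conflicts only with AccessExclusiveLock, which is why
ANALYZE "does not lock out either reading or writing".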
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > In the current sources, analyze allows read-access. I don't think
> > vacuum allows any other access because the rows are moving in the file.
>
> VACUUM *must* have exclusive lock.
>
> ANALYZE actually is only a reader (AccessShareLock) and does not lock
> out either reading or writing in current sources.

A related issue, though: on the phone we discussed the btree split-page
problems, and you said that the current btree implementation is
optimized for concurrent read and insert access, not so much for
concurrent deletes.

This might turn out to be a problem with the overwriting storage
manager. If it wants to reuse the space of outdated tuples in the main
heap, it needs to delete index tuples as well. Isn't that in conflict
with the btree design?


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #
JanWieck@t-online.de (Jan Wieck) writes:
> On the phone we discussed the btree splitpage problems and you said
> that the current btree implementation is optimized for concurrent
> read and insert access, not so for concurrent deletes.
> This might get to be a problem with the overwriting storage manager.
> If it wants to reuse space of outdated tuples in the main heap, it
> needs to delete index tuples as well. Isn't that in conflict with
> the btree design?

Yes, it's going to be an issue. nbtscan.c only handles deletions
issued by the current backend, and thus is basically useful only for
VACUUM.

We could change bt_restscan so that it tries scanning left as well as
right for the current item (it need only look as far left as the start
of the current page). But that doesn't help if someone's deleted the
index item that was your current item.

A simple solution is to hold onto a read lock for the current page of a
scan throughout the scan, rather than releasing and regrabbing it as we
do now. That might reduce the available concurrency quite a bit,
however. The worst case would be something like a CURSOR that's been
left sitting open --- it could keep the page locked for a long time.

Another way is to change indexscans so that they fetch the referenced
main tuple directly, rather than simply handing back a TID for it, and
apply the HeapTupleSatisfies test immediately. Then we could avoid
having a scan stop on a tuple that might be a candidate to be deleted.
Would save some call overhead and lock/unlock overhead too.

A bigger problem is that this is all just for btrees. What about
rtrees and hash indexes? (Not to mention GIST, although I suspect
that's dead code...)

			regards, tom lane
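The re-find problem behind the bt_restscan suggestion above can be
sketched in miniature. In this toy Python model (invented function,
not the real C routine in the nbtree code), leaf pages are lists of
items; after a scan drops its page lock, inserts and page splits can
shift the "current" item right, and a compaction could shift it left
within the page. The sketch searches left only as far as the start of
the current page, then rightward across pages, exactly the strategy the
message proposes; it returns None in the case Tom notes this cannot
handle, where the item was deleted outright.

```python
# Toy model of restoring a btree scan position after locks were
# released (illustration only; the real logic is C in PostgreSQL's
# nbtree code). pages: list of lists, each inner list one leaf page.
def restore_scan(pages, page_no, offset, current_item):
    """Return (page_no, offset) of current_item, or None if deleted."""
    # Look leftward, but no further than the start of the current page.
    page = pages[page_no]
    for off in range(min(offset, len(page) - 1), -1, -1):
        if page[off] == current_item:
            return page_no, off
    # Otherwise scan rightward, possibly onto later (split-off) pages.
    p, off = page_no, offset + 1
    while p < len(pages):
        while off < len(pages[p]):
            if pages[p][off] == current_item:
                return p, off
            off += 1
        p += 1
        off = 0
    return None  # item gone: a concurrent delete removed it

pages = [["a", "b", "c"], ["d", "e"]]
assert restore_scan(pages, 0, 2, "b") == (0, 1)   # item shifted left
assert restore_scan(pages, 0, 0, "e") == (1, 1)   # item split rightward
assert restore_scan(pages, 0, 1, "zzz") is None   # item deleted
```

The None case is precisely why left-plus-right searching alone is not
enough, and why the thread goes on to consider holding the page lock
for the scan's lifetime or testing tuple visibility immediately.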