Re: Large Objects in serializable transaction question

From: Tom Lane
Subject: Re: Large Objects in serializable transaction question
Date:
Msg-id: 22383.1058278521@sss.pgh.pa.us
In reply to: Large Objects in serializable transaction question  ("Andreas Schönbach" <andreasschoenbach@web.de>)
List: pgsql-general
"Andreas Schönbach" <andreasschoenbach@web.de> writes:
> I have a test program (using libpq) reading data from a cursor and large objects according to the result of the
> cursor. The cursor is opened in a serializable transaction.
> Just for test reasons I now tried the following:
> I started the test program that reads the data from the cursor and that reads the large objects according to the
> result of the fetch. While the test was running, I dropped all large objects in a parallel session. Since I am
> using a serializable transaction in the test program, I should still be able to read all the large objects, even if
> I drop them in a parallel session. But it does not work. I get an error that the large object can't be opened.
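
[For reference, a minimal libpq sketch of the scenario described above. The table name "lo_table" and its oid column "loid" are assumptions, not from the original post; error checking and PQclear of command results are trimmed for brevity.]

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>     /* INV_READ */

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=test");
        PGresult   *res;
        char        buf[8192];
        int         i;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* serializable transaction, then a cursor over the large object oids */
        PQexec(conn, "BEGIN");
        PQexec(conn, "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");
        PQexec(conn, "DECLARE c CURSOR FOR SELECT loid FROM lo_table");

        res = PQexec(conn, "FETCH ALL FROM c");
        for (i = 0; i < PQntuples(res); i++)
        {
            Oid     loid = (Oid) strtoul(PQgetvalue(res, i, 0), NULL, 10);
            int     fd = lo_open(conn, loid, INV_READ);

            if (fd < 0)
            {
                /* this is the failure reported when the LO was dropped concurrently */
                fprintf(stderr, "could not open large object %u: %s",
                        loid, PQerrorMessage(conn));
                continue;
            }
            lo_read(conn, fd, buf, sizeof(buf));
            lo_close(conn, fd);
        }
        PQclear(res);

        PQexec(conn, "COMMIT");
        PQfinish(conn);
        return 0;
    }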

Yeah.  The large object operations use SnapshotNow (effectively
read-committed) rather than looking at the surrounding transaction's
snapshot.  This is a bug IMHO, but no one's got round to working on
it.  (It's not entirely clear how the LO functions could access the
appropriate snapshot.)

            regards, tom lane
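
[A hedged illustration of the contrast Tom describes, continuing the sketch above while the serializable transaction is still open and after another session has unlinked the large objects. The variable some_loid stands for one of the oids fetched earlier; it is a placeholder, not code from the thread.]

    /* An ordinary query still sees the pre-drop rows, because regular
     * table access uses the serializable transaction's snapshot... */
    res = PQexec(conn, "SELECT count(*) FROM lo_table");
    printf("rows visible: %s\n", PQgetvalue(res, 0, 0));
    PQclear(res);

    /* ...but the large object functions look up the object with SnapshotNow
     * (effectively read committed), so a concurrently dropped LO is gone: */
    int fd = lo_open(conn, some_loid, INV_READ);
    if (fd < 0)
        fprintf(stderr, "lo_open failed despite SERIALIZABLE: %s",
                PQerrorMessage(conn));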
