Discussion: slow, long-running 'commit prepared'
I have a pg instance with 700GB of data, almost all of which is in one table. When I PREPARE and then COMMIT PREPARED a transaction that reads & writes to a large fraction of that data (about 10%, effectively randomly chosen rows, so every file in the table is modified), the COMMIT PREPARED sometimes takes a very long time--2 to 5 minutes. Is this expected? Is it possible for the commit to hang waiting on some lock for this long? I haven't yet been able to examine pg_locks during this 5-minute delay. There is very little concurrency--only a couple of open sessions--when the COMMIT PREPARED is issued.

Thanks for your help,
John
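[The workload described reduces to this two-phase-commit flow. A minimal sketch: the table name, predicate, and transaction identifier are hypothetical, and `max_prepared_transactions` must be set above zero for PREPARE TRANSACTION to be accepted.]

```sql
BEGIN;
-- touch roughly 10% of rows, effectively at random,
-- so every file backing the table is modified
UPDATE big_table SET payload = payload WHERE random() < 0.10;
PREPARE TRANSACTION 'batch_20081125';  -- persists the transaction state to disk
-- ... later, possibly from a different session:
COMMIT PREPARED 'batch_20081125';      -- the step that sometimes stalls
```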
On Tue, Nov 25, 2008 at 07:17:20PM -0800, John Smith wrote:
> I have a pg instance with 700GB of data, almost all of which is in one
> table. When I PREPARE and then COMMIT PREPARED a transaction that
> reads & writes to a large fraction of that data (about 10%,
> effectively randomly chosen rows and so every file in the table is
> modified), the COMMIT PREPARED sometimes takes a very long time--2 to
> 5 minutes. Is this expected? Is it possible for the commit to hang
> waiting on some lock for this long? I haven't yet been able to examine
> pg_locks during this 5 minute delay. There is very little
> concurrency--only a couple of open sessions--when the COMMIT PREPARED
> is issued.

You could be I/O bound. Have a look at iostat and sar.

A
--
Andrew Sullivan
ajs@crankycanuck.ca
Thanks for the quick response, Andrew. How could the 'commit prepared' be I/O bound? Isn't all the I/O accomplished during 'prepare transaction', leaving only a tiny bit of data to write at commit?

> You could be I/O bound. Have a look at iostat and sar.
>
> A
> --
> Andrew Sullivan
> ajs@crankycanuck.ca
On Tue, Nov 25, 2008 at 08:05:36PM -0800, John Smith wrote:
> Thanks for the quick response, Andrew. How could the 'commit prepared'
> be I/O bound? Isn't all the I/O accomplished during 'prepare
> transaction', leaving only a tiny bit of data to write at commit?

No. When you do PREPARE TRANSACTION you have to save a whole lot of state. And when you then do COMMIT PREPARED, you say, "Lose that state in favour of this new state." All that work has to be done again, in the other direction. It's not for nothing that people think 2PC is a heavyweight and expensive system. :-(

A
--
Andrew Sullivan
ajs@crankycanuck.ca
"John Smith" <sodgodofall@gmail.com> writes:
> I have a pg instance with 700GB of data, almost all of which is in one
> table. When I PREPARE and then COMMIT PREPARED a transaction that
> reads & writes to a large fraction of that data (about 10%,
> effectively randomly chosen rows and so every file in the table is
> modified), the COMMIT PREPARED sometimes takes a very long time--2 to
> 5 minutes. Is this expected?

It's impossible to say without knowing more about what the transaction did. But one piece of data you could check easily is the size of the 2PC state file (look into $PGDATA/pg_twophase/).

regards, tom lane
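[One way to locate that file from a client session, as a sketch; the ls line is a shell command run on the server, shown here as a comment:]

```sql
-- Find the cluster's data directory from any session:
SHOW data_directory;
-- then, from a shell on the server:
--   ls -lh <data_directory>/pg_twophase/
```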
The transaction where COMMIT PREPARED was slow basically did the following 20 times:

- creates a child table T1 as a SELECT from another child table T2
- drops child table T2
- renames T1 to T2

What here would cause the 2PC state file to grow large? In my rough experiments, its size seems constant in the number of row locks held, and linear in the number of table locks held. Is there any state in that file that grows linearly with the size of the data touched in the transaction?

On Wed, Nov 26, 2008 at 5:52 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> It's impossible to say without knowing more about what the transaction
> did. But one piece of data you could check easily is the size of the
> 2PC state file (look into $PGDATA/pg_twophase/).
>
> regards, tom lane
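[One iteration of that rotation, as a minimal SQL sketch; the table names are placeholders:]

```sql
CREATE TABLE t1 AS SELECT * FROM t2;  -- rebuild the child table's contents
DROP TABLE t2;                        -- drop the old copy
ALTER TABLE t1 RENAME TO t2;          -- take over its name
```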
"John Smith" <sodgodofall@gmail.com> writes:
> The transaction where COMMIT PREPARED was slow basically did the
> following 20 times:
> - create a child table T1 as select from another child table T2
> - drops child table T2
> - renames T1 to T2
> What here would cause the 2PC state file to grow large? In my rough
> experiments, its size seems constant in the number of row locks held,
> and linear in the number of table locks held. Is there any state in
> that file that grows linearly with the size of the data touched in the
> transaction?

It was the number of locks I was speculating about. But that pattern shouldn't result in more than 20 or so locks, so it's still not clear what's happening. In any case, a prepared xact is already holding all the locks it needs, so COMMIT PREPARED shouldn't have to block on anything.

Have you tried watching the committing process with top, vmstat, "select * from pg_stat_activity", etc? That should at least give you a clue whether it's CPU-bound or IO-bound or (against the above theory) blocked on something.

regards, tom lane
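[For the pg_stat_activity suggestion, queries along these lines, with column names as in the 8.x series, show whether the committing backend is blocked:]

```sql
-- Is the committing backend waiting on a lock, and what is it running?
SELECT procpid, waiting, current_query FROM pg_stat_activity;
-- If it is waiting, which lock requests have not been granted?
SELECT locktype, relation, pid, mode FROM pg_locks WHERE NOT granted;
```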