Discussion: progress of long running operation
Is there any way to get some insight into the progress of:

    insert into foo select distinct on (...) from bar where...

It's got to do with importing some legacy data, which has no proper primary key, and duplicates, and garbage that won't be accepted. And there are 30,000,000 rows, and I'm running on a slow disk for testing--so I know this is going to be painfully slow. But after a few hours I'd like to know if it's going to finish overnight, or if it will take so long that I need to look at alternate approaches.

(I upped my shared buffers and work_mem, so EXPLAIN on the SELECT statement shows a bit better than a 50% reduction in predicted work for that part. And I will go ahead and drop all indexes on the target table.)

-- 
Scott Ribe
scott_ribe@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
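[Editor's note: a minimal sketch of the tuning check described above. The table and column names (bar, key_col) and the work_mem value are hypothetical; the original query's DISTINCT ON expression and WHERE clause are elided in the source.]

```sql
-- Compare the planner's estimated cost before and after raising work_mem;
-- a larger work_mem lets the sort/hash behind DISTINCT ON run in memory.
SET work_mem = '256MB';  -- hypothetical value

EXPLAIN
SELECT DISTINCT ON (key_col) *
FROM bar
WHERE key_col IS NOT NULL;
-- Compare the top-line "cost=..." figures from runs with different settings.
```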
Scott Ribe <scott_ribe@elevated-dev.com> writes:
> Is there any way to get some insight into the progress of:
> insert into foo select distinct on (...) from bar where...

Watching the physical size of the foo table might be close enough. Or you could run contrib/pgstattuple's pgstattuple() function every so often --- it will report the uncommitted tuples as "dead", which is inaccurate, but you'd be able to see how fast the number is increasing.

			regards, tom lane
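[Editor's note: both suggestions can be run from a second session while the INSERT is in flight; a sketch, assuming the target table is named foo.]

```sql
-- 1. Watch the physical size of the target table grow:
SELECT pg_size_pretty(pg_relation_size('foo'));

-- 2. Use contrib/pgstattuple: tuples inserted by the still-uncommitted
--    transaction are counted in dead_tuple_count, so watching that number
--    climb tracks insertion progress.
CREATE EXTENSION IF NOT EXISTS pgstattuple;

SELECT tuple_count, dead_tuple_count
FROM pgstattuple('foo');
```

Re-running either query every few minutes and noting the delta gives a rough rows-per-minute rate, from which a completion estimate follows.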
On Dec 27, 2012, at 12:46 PM, Tom Lane wrote:
> Or you could run contrib/pgstattuple's pgstattuple() function every so
> often --- it will report the uncommitted tuples as "dead", which is
> inaccurate, but you'd be able to see how fast the number is increasing.

That's exactly the kind of thing I was hoping for. I'm actually inserting into an empty table, so "dead" tuples would be dead accurate in my case ;-)

Or I could suck it up and do them in batches instead of one giant pass...

-- 
Scott Ribe
scott_ribe@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
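[Editor's note: a rough sketch of the batched alternative mentioned above. The column names are hypothetical; note that batching must split on the DISTINCT ON key itself, so that all duplicates of a given key fall in the same batch and deduplication still works.]

```sql
-- One batch: deduplicate and insert rows whose key falls in a fixed range,
-- then commit before moving to the next range.
INSERT INTO foo
SELECT DISTINCT ON (key_col) *
FROM bar
WHERE key_col >= 0 AND key_col < 1000000   -- hypothetical range boundaries
ORDER BY key_col;   -- DISTINCT ON requires ORDER BY on the same expression

COMMIT;
-- Repeat with successive key_col ranges until the source is exhausted.
```

Each committed batch is directly visible progress, at the cost of one sort per batch instead of one big sort.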