Discussion: Old small commitfest items
Andres talked about us concentrating on old items and very small items. Here's a list of items that are both old and small (FSVO "small"). The first number is the CF item number, the second the patch line count:

528  1146  Fix the optimization to skip WAL-logging on table created in same transaction
669   847  pgbench - allow to store query results into variables
713   346  Correct space parsing in to_timestamp()
922   180  Failure at replay for corrupted 2PC files + reduce window between end-of-recovery record and history file write
931  1527  Protect syscache from bloating with negative cache entries
962   553  new plpgsql extra_checks
990   248  add GUCs to control custom plan logic
1001  851  Convert join OR clauses into UNION queries
1004 1159  SERIALIZABLE with parallel query
1085  922  XML XPath default namespace support
1113   68  Replication status in logical replication
1138  733  Improve compactify_tuples and PageRepairFragmentation
1141 1851  Full merge join on comparison clause
1166 1162  Fix LWLock degradation on NUMA

The first item on the list is just plain embarrassing. It's a bug fix item that we've been punting for 10 CFs.

If people want a priority list for items to attack during the CF, this list is probably a good place to start.

cheers

andrew

--
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Mon, Jul 02, 2018 at 10:30:11AM -0400, Andrew Dunstan wrote:
> 528 1146 Fix the optimization to skip WAL-logging on table created in
> same transaction

This has been around for an astonishing amount of time... I don't recall all the details, but rewriting most of the relation sync handling around heapam for a corner optimization is no fun. There is no way that we could go so far as to eliminate wal_level = minimal, so I am wondering if we should not silently ignore the optimization where possible instead of throwing an error. Perhaps logging a WARNING could make sense.

> 669 847 pgbench - allow to store query results into variables

I think that I could look into this one as well.

> 922 180 Failure at replay for corrupted 2PC files + reduce window
> between end-of-recovery record and history file write

I know this one pretty well :), waiting for reviews, and the patches are not complicated.

> 1113 68 Replication status in logical replication

I think that I could finish this one as well.
--
Michael
Attachments
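[Editor's note: for context on the first CF item, the optimization in question applies when wal_level = minimal and a table is created (or truncated) in the same transaction that loads it: PostgreSQL may then skip WAL-logging the loaded data and instead sync the relation file at commit. A rough sketch of the pattern it targets follows; the table and file names are illustrative only, and whether the skip actually fires depends on server version and configuration:

```sql
-- Requires wal_level = minimal in postgresql.conf
-- (on recent versions, also max_wal_senders = 0).
BEGIN;
-- The table is new in this transaction, so if the server crashes
-- before commit the whole relation simply disappears with the abort.
CREATE TABLE bulk_target (id int, payload text);
-- The bulk load may skip per-row WAL records under this optimization.
COPY bulk_target FROM '/tmp/data.csv' WITH (FORMAT csv);
COMMIT;  -- the relation file is synced to disk instead of replayed from WAL
```

The bug being discussed is that the checks deciding when the skip is safe are too loose, so in some sequences of operations the data is neither WAL-logged nor reliably synced, and a crash can lose rows that the client saw committed.]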
On Mon, Jul 2, 2018 at 6:30 PM, Michael Paquier <michael@paquier.xyz> wrote:
> On Mon, Jul 02, 2018 at 10:30:11AM -0400, Andrew Dunstan wrote:
>> 528 1146 Fix the optimization to skip WAL-logging on table created in
>> same transaction
>
> This has been around for an astonishing amount of time... I don't
> recall all the details but rewriting most of the relation sync handling
> around heapam for a corner optimization is no fun. There is no way that
> we could go down to elimitate wal_level = minimal, so I am wondering if
> we should not silently ignore the optimization if possible instead of
> throwing an error. Perhaps logging a WARNING could make sense.

I don't know about any of that, but something has to give. How much more time has to pass before we admit defeat? At a certain point, that is the responsible thing to do.

--
Peter Geoghegan
On Wed, Jul 04, 2018 at 06:54:05PM -0700, Peter Geoghegan wrote:
> I don't know about any of that, but something has to give. How much
> more time has to pass before we admit defeat? At a certain point, that
> is the responsible thing to do.

Well, for this one it is not really complicated to avoid the reported failures and the potential data loss if the so-called optimizations, which are actually broken, have their checks tightened a bit. So I'd rather not give up on this one if there are ways to prevent user-facing problems.
--
Michael
Attachments
On Wed, Jul 4, 2018 at 7:53 PM, Michael Paquier <michael@paquier.xyz> wrote:
> On Wed, Jul 04, 2018 at 06:54:05PM -0700, Peter Geoghegan wrote:
>> I don't know about any of that, but something has to give. How much
>> more time has to pass before we admit defeat? At a certain point, that
>> is the responsible thing to do.
>
> Well, for this one it is not really complicated to avoid the failures
> reported and the potential data losses if the so-said optimizations,
> which are actually broken, have their checks tightened a bit. So I'd
> rather not give up on this one if there are ways to prevent user-facing
> problems.

I'm not suggesting that we should give up, or that we should not give up. I think that timeboxing it is a good idea. In other words, the question "How much more time has to pass before we admit defeat?" was not a rhetorical question. As things stand, we're not doing anything, which has a cost that adds up as time goes on.

Let's be realistic. If nobody is willing to do the work, then a reasonable person must eventually conclude that that's because it isn't worth doing.

--
Peter Geoghegan