On Thu, Oct 5, 2017 at 10:39 AM, Wood, Dan <hexpert@amazon.com> wrote:
> Whatever you do, make sure to also test 250 clients running lock.sql. Even with the community's fix plus YiWen's fix
> I can still get duplicate rows. What works for "in-block" hot chains may not work when spanning blocks.
Interesting. Which version did you test? Only 9.6?
> Once nearly all 250 clients have done their updates and everybody is waiting to vacuum, which one by one will take a
> while, I usually just "pkill -9 psql". After that I have many duplicate "id=3" rows. On top of that I think we might
> have a lock leak. After the pkill I tried to rerun setup.sql to drop/create the table and it hangs. I see an
> autovacuum process starting and exiting every couple of seconds. Only by killing and restarting PG can I drop the
> table.
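
A quick way to look for the duplicates Dan describes is to inspect the tuple-level system columns for the affected key. This is only a sketch: the real table and column names come from setup.sql, which is not shown in this thread, so "t" is a placeholder.

```sql
-- Hypothetical check, assuming setup.sql created a table "t" with an "id" column.
-- Genuine duplicates show up as multiple live tuples for the same id, each with
-- its own ctid; xmin/xmax help tell which transactions produced them.
SELECT ctid, xmin, xmax, id
FROM t
WHERE id = 3;

-- Or scan for duplicates across all keys:
SELECT id, count(*)
FROM t
GROUP BY id
HAVING count(*) > 1;
```

If the second query returns any rows on a column that is supposed to be unique per key, the corruption has been reproduced.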
Yeah, that's more or less what I have been doing. My tests involve
using your initial script with way more sessions triggering lock.sql,
minus the kill -9 portion (good idea, actually). I can of course see the
sessions queuing for VACUUM, but I still cannot see duplicated rows, even
if I headshot Postgres in the middle of the VACUUM waiting queue. Note
that I have just tested Alvaro's patch on 9.3.
--
Michael
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)