Thread: [HACKERS] unique index violation after pg_upgrade to PG10


[HACKERS] unique index violation after pg_upgrade to PG10

From
Justin Pryzby
Date:
I upgraded another instance to PG10 yesterday and this AM found unique key
violations.

Our application is SELECTing FROM sites WHERE site_location=$1, and if it
doesn't find one, INSERTs one (I know that's racy and not ideal).  We ended up
with duplicate sites, despite a unique index.  We removed the duplicate rows
and reindexed fine.  This is just a heads up with all the detail I can fit in a
mail (but there's more if needed).
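
For reference, the pattern is roughly the following (illustrative SQL only, not our exact application code); on 9.5+ the race could be avoided with INSERT ... ON CONFLICT against that same unique index:

-- Racy version: two sessions can both see "no row" and then both insert.
SELECT site_id FROM sites
 WHERE site_office = $1 AND site_location = $2;
-- ...and, if no row came back:
INSERT INTO sites (site_office, site_location) VALUES ($1, $2);

-- Race-free alternative, relying on the unique index on (site_office, site_location):
INSERT INTO sites (site_office, site_location) VALUES ($1, $2)
ON CONFLICT (site_office, site_location) DO NOTHING
RETURNING site_id;
-- (RETURNING yields no row when the INSERT is skipped, so a follow-up SELECT
-- is still needed in that case.)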

ts=# \d sites
 site_id       | integer |           | not null | nextval('sites_site_id_seq'::regclass)
 site_office   | text    |           |          | 
 site_location | text    |           |          | 
[...]
Indexes:
    "sites_pkey" PRIMARY KEY, btree (site_id)
    "sites_idx" UNIQUE, btree (site_office, site_location)

ts=# SELECT site_office, site_location, count(*), min(site_id), max(site_id) FROM sites GROUP BY 1,2 HAVING count(*)>1 ORDER BY 1,2;
  site_office   | site_location | count | min | max 
----------------+---------------+-------+-----+-----
 CRCLMT-DOEMS0  |               |     2 | 165 | 351
 CRCLMT-DOEMS0  | 1101          |     2 | 123 | 343
 CRCLMT-DOEMS0  | 1102          |     2 | 134 | 318
 CRCLMT-DOEMS0  | 1103          |     2 | 145 | 322
 CRCLMT-DOEMS0  | 1104          |     2 | 156 | 329
 

The duplicate rows having distinct site_ids means this isn't an issue with row
versions/visibility due to XIDs (right?).

ts=# SELECT 1 FROM sites WHERE site_office='CRCLMT-CEMS0' AND site_location='';
(0 rows)
ts=# SET enable_bitmapscan=off; SET enable_indexscan=off; SELECT 1 FROM sites WHERE site_office='CRCLMT-CEMS0' AND site_location='';
-[ RECORD 1 ]
?column? | 1

So there's an issue with indices failing to return matching rows (thereby
allowing duplicate rows to be inserted in violation of the unique constraint).
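
(For anyone double-checking something similar: EXPLAIN shows which access path each of those SELECTs actually used; just a sketch, and the exact plan output will vary.)

EXPLAIN (COSTS OFF)
SELECT 1 FROM sites WHERE site_office='CRCLMT-CEMS0' AND site_location='';
-- With default settings this should pick sites_idx; with enable_indexscan and
-- enable_bitmapscan off it falls back to a Seq Scan with a Filter on both
-- columns, which is the plan that actually finds the row here.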

That's the only table/index affected (and the only PG instance thus far
affected).

Note regarding my pg_upgrade history: 3 years ago this was our first and smallest
customer, whom I upgraded off PG8.4 (to PG9.2, if I recall) using
pg_dump | pg_restore.  I believe, as a consequence, its postgres database was in
"C" locale and SQL_ASCII encoding.  So for the last few upgrades (9.3, 9.4, 9.5
and 9.6) I believe I've manually run initdb --encoding followed by pg_upgrade
(otherwise it fails because the new postgres/template DBs have a different
locale/encoding from the old ones).  For this upgrade, I finally renamed the
postgres DB (which has imported CSV logs and one or two other things) and
pg_dump|pg_restore'd it into a new DB with UTF8 encoding, which allowed
pg_upgrade to run without a special initdb invocation.
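
(For what it's worth, the per-database encoding/locale that pg_upgrade compares can be seen with a query like this; just a sketch of what I mean by the mismatch:)

SELECT datname,
       pg_encoding_to_char(encoding) AS encoding,
       datcollate, datctype
FROM pg_database
ORDER BY datname;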

I have an LVM snapshot and full CSV logs imported into a table.  I also have a
backup from 22:00 which doesn't have duplicate sites.  Those seem to have been
inserted by our application around 00:30:

These are the session IDs which inserted the duplicate rows:
postgres=# SELECT session_id, max(session_line) FROM postgres_log_2017_10_24_0000 WHERE message LIKE 'statement: SELECT site_id FROM sites WHERE%' GROUP BY 1;
  session_id   | max  
---------------+------
 59eedfb1.5cea |  714
 59eedfb5.5cf1 | 1741
(2 rows)

postgres=# SELECT log_time, session_id, session_line, left(message,333) FROM postgres_log WHERE (session_id='59eedfb1.5cea' OR session_id='59eedfb5.5cf1') AND (session_line<6 OR message LIKE '%INSERT INTO site%') ORDER BY 1,2,3;

-[ RECORD 4 ]+----------------------------------------------------------------------------------
log_time     | 2017-10-24 00:37:37.888-06
session_id   | 59eedfb1.5cea
session_line | 4
left         | statement: SELECT site_id FROM sites WHERE                                        +
             |                 site_office = 'CRCLMT-DOEMS0' AND site_location = '1203'
-[ RECORD 5 ]+----------------------------------------------------------------------------------
log_time     | 2017-10-24 00:37:37.89-06
session_id   | 59eedfb1.5cea
session_line | 5
left         | statement: INSERT INTO sites (site_office,site_location,site_alias)               +
             |                         VALUES ('CRCLMT-DOEMS0', '1203', (SELECT site_id FROM sites +
             |                         WHERE site_office = 'CRCLMT-CEMS0' AND site_location = '1203'))
 

Note:
I run a script which does various combinations of ANALYZE/VACUUM (FULL/ANALYZE)
following the upgrade, and another script runs nightly with REINDEX and pg_repack
(and a couple of CLUSTERs), so you should assume that any combination of those
maintenance commands has been run.
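
(Roughly the kind of commands involved; a hypothetical sketch rather than the actual scripts, and pg_repack itself runs as an external client program:)

VACUUM (FULL, ANALYZE) sites;    -- post-upgrade script: various VACUUM/ANALYZE combinations
REINDEX TABLE sites;             -- nightly reindex
CLUSTER sites USING sites_pkey;  -- nightly, for a couple of tables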

In our reindex/repack log I found the first error due to duplicates:
Tue Oct 24 01:27:53 MDT 2017: sites: sites_idx(repack non-partitioned)...
WARNING: Error creating index "public"."index_61764": ERROR:  could not create unique index "index_61764"
DETAIL:  Key (site_office, site_location)=(CRCLMT-DOEMS0, 1120) is duplicated.
WARNING: Skipping index swapping for "sites", since no new indexes built
WARNING: repack failed for "sites_idx"
reindex: warning, dropping invalid/unswapped index: index_61764

postgres=# SELECT * FROM postgres_log WHERE session_id='59eeeb79.4bd5' AND error_severity='ERROR' ORDER BY 1 DESC LIMIT 33;
log_time               | 2017-10-24 01:27:53.545-06
user_name              | postgres
database               | ts
pid                    | 19413
connection_from        | [local]
session_id             | 59eeeb79.4bd5
session_line           | 10
command_tag            | CREATE INDEX
session_start_time     | 2017-10-24 01:27:53-06
error_severity         | ERROR
message                | could not create unique index "index_61764"
detail                 | Key (site_office, site_location)=(CRCLMT-DOEMS0, 1120) is duplicated.
query                  | CREATE UNIQUE INDEX CONCURRENTLY index_61764 ON sites USING btree (site_office, site_location) TABLESPACE pg_default
application_name       | pg_repack

Disclosure: I see a storage error from 2 days ago (pre-upgrade).  This is a
4-drive RAID5 with a spare.  I can't see that we're missing a spare, so I don't
know what else to do besides consider this message to be spurious.

Oct 22 21:28:16 midrivers-ubuntu kernel: [1429369.825577] hpsa 0000:05:00.0: scsi 4:0:0:0 Aborting command ffff8f84f2ba7c00 Tag:0x00000000:000001f0 CDBLen: 6 CDB: 0x1201... SN: 0x0  BEING SENT
Oct 22 21:28:16 midrivers-ubuntu kernel: [1429369.825583] hpsa 0000:05:00.0: scsi 4:0:0:0: Aborting command RAID              HP       P410i            controller SSDSmartPathCap- En- Exp=1
Oct 22 21:28:16 midrivers-ubuntu kernel: [1429369.898930] hpsa 0000:05:00.0: CDB 12018000600000000000000000000000 was aborted with status 0x0
Oct 22 21:28:17 midrivers-ubuntu kernel: [1429371.269963] hpsa 0000:05:00.0: invalid command: LUN:0000000000000000 CDB:00000000f00100000000000000000000
Oct 22 21:28:17 midrivers-ubuntu kernel: [1429371.269966] hpsa 0000:05:00.0: probably means device no longer present
Oct 22 21:28:17 midrivers-ubuntu kernel: [1429371.269968] hpsa 0000:05:00.0: scsi 4:0:0:0 Aborting command ffff8f84f2ba7c00 Tag:0x00000000:000001f0 CDBLen: 6 CDB: 0x1201... SN: 0x0  SENT, FAILED
Oct 22 21:28:17 midrivers-ubuntu kernel: [1429371.269974] hpsa 0000:05:00.0: scsi 4:0:0:0: FAILED to abort command RAID              HP       P410i            controller SSDSmartPathCap- En- Exp=1

sites is perhaps our most central table and I would expect issues there to be
quickly apparent (however this was, at least initially, a silent failure).

 public | sites | table | telsasoft | 80 kB | 

Justin



Re: [HACKERS] unique index violation after pg_upgrade to PG10

From
Kenneth Marshall
Date:
On Tue, Oct 24, 2017 at 01:14:53PM -0500, Justin Pryzby wrote:
> I upgrade another instance to PG10 yesterday and this AM found unique key
> violations.
> 
> Our application is SELECTing FROM sites WHERE site_location=$1, and if it
> doesn't find one, INSERTs one (I know that's racy and not ideal).  We ended up
> with duplicate sites, despite a unique index.  We removed the duplicate rows
> and reindexed fine.  This is just a heads up with all the detail I can fit in a
> mail (but there's more if needed).
> ...
> Note:
> I run a script which does various combinations of ANALYZE/VACUUM (FULL/ANALYZE)
> following the upgrade, and a script runs nightly with REINDEX and pg_repack
> (and a couple of CLUSTER), so you should assume that any combination of those
> maintenance commands have been run.
> 
> In our reindex/repack log I found the first error due to duplicates:
> Tue Oct 24 01:27:53 MDT 2017: sites: sites_idx(repack non-partitioned)...
> WARNING: Error creating index "public"."index_61764": ERROR:  could not create unique index "index_61764"
> DETAIL:  Key (site_office, site_location)=(CRCLMT-DOEMS0, 1120) is duplicated.
> WARNING: Skipping index swapping for "sites", since no new indexes built
> WARNING: repack failed for "sites_idx"
> reindex: warning, dropping invalid/unswapped index: index_61764
> 

Hi Justin,

This sounds like a pg_repack bug and not a PostgreSQL bug. What version are
you running?

Regards,
Ken



Re: [HACKERS] unique index violation after pg_upgrade to PG10

From
Justin Pryzby
Date:
On Tue, Oct 24, 2017 at 01:27:14PM -0500, Kenneth Marshall wrote:
> On Tue, Oct 24, 2017 at 01:14:53PM -0500, Justin Pryzby wrote:

> > Note:
> > I run a script which does various combinations of ANALYZE/VACUUM (FULL/ANALYZE)
> > following the upgrade, and a script runs nightly with REINDEX and pg_repack
> > (and a couple of CLUSTER), so you should assume that any combination of those
> > maintenance commands have been run.
> > 
> > In our reindex/repack log I found the first error due to duplicates:
> > Tue Oct 24 01:27:53 MDT 2017: sites: sites_idx(repack non-partitioned)...
> > WARNING: Error creating index "public"."index_61764": ERROR:  could not create unique index "index_61764"
> > DETAIL:  Key (site_office, site_location)=(CRCLMT-DOEMS0, 1120) is duplicated.
> > WARNING: Skipping index swapping for "sites", since no new indexes built
> > WARNING: repack failed for "sites_idx"
> > reindex: warning, dropping invalid/unswapped index: index_61764
> > 
> 
> Hi Justin,
> 
> This sounds like a pg_repack bug and not a PostgreSQL bug. What version are
> you running?

Really ?  pg_repack "found" and was victim to the duplicate keys, and rolled
back its work.  The CSV logs clearly show that our application INSERTed rows
which are duplicates.

[pryzbyj@database ~]$ rpm -qa pg_repack10
pg_repack10-1.4.2-1.rhel6.x86_64

Justin



Re: [HACKERS] unique index violation after pg_upgrade to PG10

From
Kenneth Marshall
Date:
On Tue, Oct 24, 2017 at 01:30:19PM -0500, Justin Pryzby wrote:
> On Tue, Oct 24, 2017 at 01:27:14PM -0500, Kenneth Marshall wrote:
> > On Tue, Oct 24, 2017 at 01:14:53PM -0500, Justin Pryzby wrote:
> 
> > > Note:
> > > I run a script which does various combinations of ANALYZE/VACUUM (FULL/ANALYZE)
> > > following the upgrade, and a script runs nightly with REINDEX and pg_repack
> > > (and a couple of CLUSTER), so you should assume that any combination of those
> > > maintenance commands have been run.
> > > 
> > > In our reindex/repack log I found the first error due to duplicates:
> > > Tue Oct 24 01:27:53 MDT 2017: sites: sites_idx(repack non-partitioned)...
> > > WARNING: Error creating index "public"."index_61764": ERROR:  could not create unique index "index_61764"
> > > DETAIL:  Key (site_office, site_location)=(CRCLMT-DOEMS0, 1120) is duplicated.
> > > WARNING: Skipping index swapping for "sites", since no new indexes built
> > > WARNING: repack failed for "sites_idx"
> > > reindex: warning, dropping invalid/unswapped index: index_61764
> > > 
> > 
> > Hi Justin,
> > 
> > This sounds like a pg_repack bug and not a PostgreSQL bug. What version are
> > you running?
> 
> Really ?  pg_repack "found" and was victim to the duplicate keys, and rolled
> back its work.  The CSV logs clearly show that our application INSERTed rows
> which are duplicates.
> 
> [pryzbyj@database ~]$ rpm -qa pg_repack10
> pg_repack10-1.4.2-1.rhel6.x86_64
> 
> Justin

Hi Justin,

I just dealt with a similar problem with pg_repack and a PostgreSQL 9.5 DB,
the exact same error. It seemed to be caused by a tuple visibility issue that
allowed the "working" unique index to be built even though a duplicate row
existed. Then the next pg_repack would fail with the error you got. In our
case I needed the locality of reference to keep the DB performance acceptable,
and it was not a critical issue if there was a duplicate; we would remove the
duplicates if we had a failure. We never had a problem in the scenarios where
pg_repack was not used.
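
(Removing them can be done with a self-join that keeps the lowest id, roughly like the sketch below; the column names are from Justin's table and purely illustrative. With a corrupt unique index you may also want enable_indexscan/enable_bitmapscan off for the session, so the join doesn't rely on the broken index.)

-- Keep the row with the lowest site_id for each (site_office, site_location) pair:
DELETE FROM sites dup
USING sites keep
WHERE keep.site_office   IS NOT DISTINCT FROM dup.site_office
  AND keep.site_location IS NOT DISTINCT FROM dup.site_location
  AND keep.site_id < dup.site_id;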

Regards,
Ken 



Re: [HACKERS] unique index violation after pg_upgrade to PG10

From
Peter Geoghegan
Date:
On Tue, Oct 24, 2017 at 11:48 AM, Kenneth Marshall <ktm@rice.edu> wrote:
>> Really ?  pg_repack "found" and was victim to the duplicate keys, and rolled
>> back its work.  The CSV logs clearly show that our application INSERTed rows
>> which are duplicates.
>>
>> [pryzbyj@database ~]$ rpm -qa pg_repack10
>> pg_repack10-1.4.2-1.rhel6.x86_64
>>
>> Justin
>
> Hi Justin,
>
> I just dealt with a similar problem with pg_repack and a PostgreSQL 9.5 DB,
> the exact same error. It seemed to caused by a tuple visibility issue that
> allowed the "working" unique index to be built, even though a duplicate row
> existed. Then the next pg_repack would fail with the error you got. In our
> case I needed the locality of reference to keep the DB performance acceptable
> and it was not a critical issue if there was a duplicate. We would remove the
> duplicates if we had a failure. We never had a problem with the NO pg_repack
> scenarios.

A new, enhanced version of the corruption detection tool amcheck is
now available, and has both apt + yum packages available:

https://github.com/petergeoghegan/amcheck

Unlike the version in Postgres 10, this enhanced version (V1.2) has
"heapallindexed" verification, which is really what you want here. If
you install it, and run it against the unique index in question (with
"heapallindexed" verification), that could help. It might provide a
more useful diagnostic message.

This is very new, so do let me know how you get on if you try it out.
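
(Something like this would run the check across every B-Tree index on the table in one go; a sketch only, and it assumes the external packaging creates the extension under the name amcheck_next:)

CREATE EXTENSION IF NOT EXISTS amcheck_next;

-- Verify every btree index on "sites"; stops at the first index that fails.
SELECT ic.relname AS index_name,
       bt_index_check(i.indexrelid, heapallindexed => true)
FROM pg_index i
JOIN pg_class ic ON ic.oid = i.indexrelid
JOIN pg_am am    ON am.oid = ic.relam
WHERE i.indrelid = 'sites'::regclass
  AND am.amname = 'btree';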

-- 
Peter Geoghegan



Re: [HACKERS] unique index violation after pg_upgrade to PG10

From
Justin Pryzby
Date:
On Tue, Oct 24, 2017 at 12:31:49PM -0700, Peter Geoghegan wrote:
> On Tue, Oct 24, 2017 at 11:48 AM, Kenneth Marshall <ktm@rice.edu> wrote:
> >> Really ?  pg_repack "found" and was victim to the duplicate keys, and rolled
> >> back its work.  The CSV logs clearly show that our application INSERTed rows
> >> which are duplicates.
> >>
> >> [pryzbyj@database ~]$ rpm -qa pg_repack10
> >> pg_repack10-1.4.2-1.rhel6.x86_64
> >>
> >> Justin
> >
> > Hi Justin,
> >
> > I just dealt with a similar problem with pg_repack and a PostgreSQL 9.5 DB,
> > the exact same error. It seemed to caused by a tuple visibility issue that
> > allowed the "working" unique index to be built, even though a duplicate row
> > existed. Then the next pg_repack would fail with the error you got. In our
> > case I needed the locality of reference to keep the DB performance acceptable
> > and it was not a critical issue if there was a duplicate. We would remove the
> > duplicates if we had a failure. We never had a problem with the NO pg_repack
> > scenarios.
> 
> A new, enhanced version of the corruption detection tool amcheck is
> now available, and has both apt + yum packages available:
> 
> https://github.com/petergeoghegan/amcheck
> 
> Unlike the version in Postgres 10, this enhanced version (V1.2) has
> "heapallindexed" verification, which is really what you want here. If
> you install it, and run it against the unique index in question (with
> "heapallindexed" verification), that could help. It might provide a
> more useful diagnostic message.
> 
> This is very new, so do let me know how you get on if you try it out.

I started an instance connected to a copy of the LVM snapshot I saved:
[pryzbyj@database ~]$ sudo -u postgres /usr/pgsql-10/bin/postmaster -c port=5678 -D /mnt/10/data

[pryzbyj@database amcheck]$ psql --port 5678 ts -c "SELECT bt_index_check('sites_idx'::regclass::oid, heapallindexed=>True)"
ERROR:  high key invariant violated for index "sites_idx"
DETAIL:  Index tid=(1,37) points to heap tid=(0,97) page lsn=0/0.
[pryzbyj@database amcheck]$ psql --port 5678 ts -c "SELECT bt_index_parent_check('sites_idx'::regclass::oid, heapallindexed=>True)"
ERROR:  high key invariant violated for index "sites_idx"
DETAIL:  Index tid=(1,37) points to heap tid=(0,97) page lsn=0/0.

ts=# SELECT * FROM page_header(get_raw_page('sites_idx', 1));
lsn       | 0/0
checksum  | 0
flags     | 0
lower     | 872
upper     | 1696
special   | 8176
pagesize  | 8192
version   | 4
prune_xid | 0

ts=# SELECT * FROM page_header(get_raw_page('sites', 0));
lsn       | 1FB/AC5A4908
checksum  | 0
flags     | 5
lower     | 436
upper     | 464
special   | 8192
pagesize  | 8192
version   | 4
prune_xid | 0

ts=# SELECT * FROM bt_page_items(get_raw_page('sites_idx', 1));

itemoffset | 48
ctid       | (1,37)
itemlen    | 32
nulls      | f
vars       | t
data       | 1b 43 52 43 4c 4d 54 2d 43 45 4d 53 30 0b 31 31 31 31 00 00 00 00 00 00

itemoffset | 37
ctid       | (0,97)
itemlen    | 24
nulls      | f
vars       | t
data       | 1b 43 52 43 4c 4d 54 2d 43 45 4d 53 30 03 00 00

..which I gather just verifies that the index is corrupt, not sure if there's
anything else to do with it?  Note, we've already removed the duplicate rows.

Justin



Re: [HACKERS] unique index violation after pg_upgrade to PG10

From
Peter Geoghegan
Date:
On Tue, Oct 24, 2017 at 1:11 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:
> ..which I gather just verifies that the index is corrupt, not sure if there's
> anything else to do with it?  Note, we've already removed the duplicate rows.

Yes, the index itself is definitely corrupt -- this failed before the
new "heapallindexed" check even started. Though it looks like that
would have failed too, if you got that far, since the index points to
a row that does not contain the same data. (I only know this because
you dumped the heap tuple and the index tuple.)

Maybe you could try verifying a different index on the same table with
"heapallindexed", too. Perhaps that would fail in a more interesting
way.

I don't know how pg_repack works in any detail, but I have a hard time
imagining it causing corruption like this, where a single B-Tree page
is corrupt (high key invariant fails), possibly because of a torn page
(possibly due to recovery not replaying all the WAL needed, for
whatever reason).

You're using LVM snapshots -- I hope that you're aware that they're
not guaranteed to be consistent across logical volumes. There are a
few different ways that they could cause corruption like this if you
weren't careful. (In general, I wouldn't recommend using LVM snapshots
as any kind of backup solution.)

-- 
Peter Geoghegan



Re: [HACKERS] unique index violation after pg_upgrade to PG10

From
Justin Pryzby
Date:
On Tue, Oct 24, 2017 at 01:48:55PM -0500, Kenneth Marshall wrote:
> I just dealt with a similar problem with pg_repack and a PostgreSQL 9.5 DB,
> the exact same error. It seemed to caused by a tuple visibility issue that
> allowed the "working" unique index to be built, even though a duplicate row
> existed. Then the next pg_repack would fail with the error you got.

FTR, I was able to run the repack script several times without issue, hitting
that table each time.

Justin



Re: [HACKERS] unique index violation after pg_upgrade to PG10

From
Justin Pryzby
Date:
On Tue, Oct 24, 2017 at 02:57:47PM -0700, Peter Geoghegan wrote:
> On Tue, Oct 24, 2017 at 1:11 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:
> > ..which I gather just verifies that the index is corrupt, not sure if there's
> > anything else to do with it?  Note, we've already removed the duplicate rows.
> Maybe you could try verifying a different index on the same table with
> "heapallindexed", too. Perhaps that would fail in a more interesting
> way.

The only other index seems fine:

[pryzbyj@database ~]$ psql --port 5678 ts -txc "SELECT bt_index_parent_check('sites_pkey'::regclass::oid, heapallindexed=>True)"
bt_index_parent_check | 

[pryzbyj@database ~]$ psql --port 5678 ts -txc "SELECT bt_index_check('sites_pkey'::regclass::oid, heapallindexed=>True)"
bt_index_check | 

> You're using LVM snapshots -- I hope that you're aware that they're
> not guaranteed to be consistent across logical volumes. There are a
> few different ways that they could cause corruption like this if you
> weren't careful. (In general, I wouldn't recommend using LVM snapshots
> as any kind of backup solution.)

Right, I asked a coworker to create the snapshot before trying to fix the
immediate problem, as a forensic measure.  We have postgres on just one LVM LV.

Justin



Re: [HACKERS] unique index violation after pg_upgrade to PG10

From
Justin Pryzby
Date:
On Tue, Oct 24, 2017 at 02:57:47PM -0700, Peter Geoghegan wrote:
> On Tue, Oct 24, 2017 at 1:11 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:
> > ..which I gather just verifies that the index is corrupt, not sure if there's
> > anything else to do with it?  Note, we've already removed the duplicate rows.
> 
> Yes, the index itself is definitely corrupt -- this failed before the
> new "heapallindexed" check even started. Though it looks like that
> would have failed too, if you got that far, since the index points to
> a row that does not contain the same data. (I only know this because
> you dumped the heap tuple and the index tuple.)

I think you must have compared these:

On Tue, Oct 24, 2017 at 03:11:44PM -0500, Justin Pryzby wrote:
> ts=# SELECT * FROM bt_page_items(get_raw_page('sites_idx', 1));
> 
> itemoffset | 48
> ctid       | (1,37)
> itemlen    | 32
> nulls      | f
> vars       | t
> data       | 1b 43 52 43 4c 4d 54 2d 43 45 4d 53 30 0b 31 31 31 31 00 00 00 00 00 00
...
> itemoffset | 37
> ctid       | (0,97)
> itemlen    | 24
> nulls      | f
> vars       | t
> data       | 1b 43 52 43 4c 4d 54 2d 43 45 4d 53 30 03 00 00

..but note those are both items in sites_idx (48 and 37, which I seem to have
pasted out of order)..  I included both because I'm not confident I know what
the "index tid=(1,37)" referred to, but I gather it means item at offset=37
(and not item with ctid=(1,37).)

| [pryzbyj@database amcheck]$ psql --port 5678 ts -c "SELECT bt_index_check('sites_idx'::regclass::oid, heapallindexed=>True)"
| ERROR:  high key invariant violated for index "sites_idx"
| DETAIL:  Index tid=(1,37) points to heap tid=(0,97) page lsn=0/0.

ts=# SELECT * FROM page_header(get_raw_page('sites_idx', 1));
 lsn | checksum | flags | lower | upper | special | pagesize | version | prune_xid 
-----+----------+-------+-------+-------+---------+----------+---------+-----------
 0/0 |        0 |     0 |   872 |  1696 |    8176 |     8192 |       4 |         0
 

Here is its heap page:

ts=# SELECT * FROM heap_page_items(get_raw_page('sites', 0)) WHERE lp=97;
 lp | lp_off | lp_flags | lp_len | t_xmin |  t_xmax  | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff |      t_bits      | t_oid |                   t_data                   
----+--------+----------+--------+--------+----------+----------+--------+-------------+------------+--------+------------------+-------+--------------------------------------------
 97 |    968 |        1 |     52 |  21269 | 33567444 |        0 | (3,27) |        8204 |       2307 |     32 | 1110100000010000 |       | \x700000001b4352434c4d542d43454d5330030303

Which I see ends with 0303 vs 0000..

t_infomask=2307=2048+256+3 =>
#define HEAP_HASNULL                    0x0001  /* has null attribute(s) */
#define HEAP_HASVARWIDTH                0x0002  /* has variable-width attribute(s) */
#define HEAP_XMIN_COMMITTED             0x0100  /* t_xmin committed */
#define HEAP_XMAX_INVALID               0x0800  /* t_xmax invalid/aborted */

t_infomask2=8204 => 8192+12 =>
#define HEAP_KEYS_UPDATED               0x2000  /* tuple was updated and key cols modified, or tuple deleted */
(the remaining 12 is the attribute count, i.e. the HEAP_NATTS_MASK bits)
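
(The same decoding can be done in SQL; a sketch, with mask values taken from htup_details.h:)

SELECT lp, t_xmin, t_xmax,
       (t_infomask  & x'0001'::int) <> 0 AS hasnull,
       (t_infomask  & x'0100'::int) <> 0 AS xmin_committed,
       (t_infomask  & x'0200'::int) <> 0 AS xmin_invalid,
       (t_infomask  & x'0400'::int) <> 0 AS xmax_committed,
       (t_infomask  & x'0800'::int) <> 0 AS xmax_invalid,
       (t_infomask2 & x'2000'::int) <> 0 AS keys_updated,
       t_infomask2 & x'07FF'::int        AS natts
FROM heap_page_items(get_raw_page('sites', 0))
WHERE lp = 97;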

Maybe this is relevant ?
ts=# SELECT * FROM heap_page_items(get_raw_page('sites', 3)) WHERE lp=27;
 lp | lp_off | lp_flags | lp_len | t_xmin | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid | t_data 
----+--------+----------+--------+--------+--------+----------+--------+-------------+------------+--------+--------+-------+--------
 27 |      0 |        0 |      0 |        |        |          |        |             |            |        |        |       | 
 

Justin



Re: [HACKERS] unique index violation after pg_upgrade to PG10

From
Peter Geoghegan
Date:
On Tue, Oct 24, 2017 at 10:20 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:
> I think you must have compared these:

Yes, I did. My mistake.

> On Tue, Oct 24, 2017 at 03:11:44PM -0500, Justin Pryzby wrote:
>> ts=# SELECT * FROM bt_page_items(get_raw_page('sites_idx', 1));
>>
>> itemoffset | 48
>> ctid       | (1,37)
>> itemlen    | 32
>> nulls      | f
>> vars       | t
>> data       | 1b 43 52 43 4c 4d 54 2d 43 45 4d 53 30 0b 31 31 31 31 00 00 00 00 00 00
> ...
>> itemoffset | 37
>> ctid       | (0,97)
>> itemlen    | 24
>> nulls      | f
>> vars       | t
>> data       | 1b 43 52 43 4c 4d 54 2d 43 45 4d 53 30 03 00 00
>
> ..but note those are both items in sites_idx (48 and 37, which I seem to have
> pasted out of order)..  I included both because I'm not confident I know what
> the "index tid=(1,37)" referred to, but I gather it means item at offset=37
> (and not item with ctid=(1,37).)
>
> | [pryzbyj@database amcheck]$ psql --port 5678 ts -c "SELECT bt_index_check('sites_idx'::regclass::oid, heapallindexed=>True)"
> | ERROR:  high key invariant violated for index "sites_idx"
> | DETAIL:  Index tid=(1,37) points to heap tid=(0,97) page lsn=0/0.

This means that the item at (1,37) in the index is not <= the high
key, which is located at (1,1). (The high key comes first, despite
being an upper bound rather than a lower bound, per pageinspect docs.)
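
(So the two tuples to compare are the high key at offset 1 and the flagged item at offset 37, e.g.:)

SELECT itemoffset, ctid, itemlen, nulls, vars, data
FROM bt_page_items(get_raw_page('sites_idx', 1))
WHERE itemoffset IN (1, 37);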

I find it suspicious that the page lsn is 0 here, btw.

> ts=# SELECT * FROM page_header(get_raw_page('sites_idx', 1));
>  lsn | checksum | flags | lower | upper | special | pagesize | version | prune_xid
> -----+----------+-------+-------+-------+---------+----------+---------+-----------
>  0/0 |        0 |     0 |   872 |  1696 |    8176 |     8192 |       4 |         0
>
> Here is its heap page:
>
> ts=# SELECT * FROM heap_page_items(get_raw_page('sites', 0)) WHERE lp=97;
>  lp | lp_off | lp_flags | lp_len | t_xmin |  t_xmax  | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff |      t_bits      | t_oid |                   t_data
> ----+--------+----------+--------+--------+----------+----------+--------+-------------+------------+--------+------------------+-------+--------------------------------------------
>  97 |    968 |        1 |     52 |  21269 | 33567444 |        0 | (3,27) |        8204 |       2307 |     32 | 1110100000010000 |       | \x700000001b4352434c4d542d43454d5330030303
>
> Which I see ends with 0303 vs 0000..

Looks like I was accidentally right, then -- the heap and index do differ.

You might try this tool I published recently, to get a better sense of
details like this:

https://github.com/petergeoghegan/pg_hexedit

(Don't do so with a data directory that you cannot afford to corrupt
again, though!)

> Maybe this is relevant ?
> ts=# SELECT * FROM heap_page_items(get_raw_page('sites', 3)) WHERE lp=27;
>  lp | lp_off | lp_flags | lp_len | t_xmin | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid | t_data
> ----+--------+----------+--------+--------+--------+----------+--------+-------------+------------+--------+--------+-------+--------
>  27 |      0 |        0 |      0 |        |        |          |        |             |            |        |        |       |

This looks like an LP_DEAD item.
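
(Line pointer state can be read straight off lp_flags: 0 = LP_UNUSED, 1 = LP_NORMAL, 2 = LP_REDIRECT, 3 = LP_DEAD. For example:)

SELECT lp, lp_flags,
       CASE lp_flags
            WHEN 0 THEN 'LP_UNUSED'
            WHEN 1 THEN 'LP_NORMAL'
            WHEN 2 THEN 'LP_REDIRECT'
            WHEN 3 THEN 'LP_DEAD'
       END AS lp_state
FROM heap_page_items(get_raw_page('sites', 3))
WHERE lp = 27;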

-- 
Peter Geoghegan

