Discussion: BUG #14295: Hot standby crash during tsvector rebuild
The following bug has been logged on the website:

Bug reference:      14295
Logged by:          Spencer Thomason
Email address:      spencer@whiteskycommunications.com
PostgreSQL version: 9.4.8
Operating system:   Solaris 11.3 SPARC
Description:

Hello,
We have a fax server application that performs OCR and uses full text
search. We have a nightly cron job that purges the stored text for old
records to save space. The table contains approximately 60K rows; the job
deletes the text from around 100-150 records each night and then updates a
tsvector column. This database is replicated to 4 hot standby slaves, 2 in
the same data center and 2 in a remote location. We have observed a few
crashes of all the hot standby servers when this cron job runs. All
replicas crash at the same time with the log entries below. The master
appears unaffected and continues to function normally. Unfortunately I'm
unable to reproduce at will, but I observed this with 9.5.3; we then
downgraded to 9.4.8 and the crash just happened again.

Logs from one of the replicas are below:
2016-08-26 06:01:50 UTC FATAL:  unexpected GIN leaf action: 0
2016-08-26 06:01:50 UTC CONTEXT:  xlog redo Insert item, node:
1663/16387/33108 blkno: 6622 isdata: T isleaf: T 3 segments: 2 (add 0 items)
0 unknown action 0 ???
2016-08-26 06:01:50 UTC LOG:  startup process (PID 19593) exited with exit
code 1
2016-08-26 06:01:50 UTC LOG:  terminating any other active server processes
2016-08-26 06:01:50 UTC WARNING:  terminating connection because of crash
of another server process

The table schema is:
    Column    |           Type           | Modifiers
--------------+--------------------------+-----------
 id           | uuid                     | not null
 fax_did_id   | character varying(64)    | not null
 file_path    | character varying(255)   |
 time_stamp   | timestamp with time zone | not null
 remote_id    | character varying(60)    |
 caller_id    | character varying(40)    |
 num_pages    | integer                  |
 ocr_text     | text                     |
 search_index | tsvector                 |
 account_id   | integer                  |
Indexes:
    "fax_rxfax_pkey" PRIMARY KEY, btree (id)
    "fax_rxfax_account_id" btree (account_id)
    "fax_rxfax_fax_did_id" btree (fax_did_id)
    "fax_rxfax_fax_did_id_like" btree (fax_did_id varchar_pattern_ops)
    "fax_rxfax_search_index" gin (search_index)
Foreign-key constraints:
    "fax_rxfax_account_id_fkey" FOREIGN KEY (account_id) REFERENCES
accounts(id) DEFERRABLE INITIALLY DEFERRED
    "fax_rxfax_fax_did_id_fkey" FOREIGN KEY (fax_did_id) REFERENCES
dids(number) DEFERRABLE INITIALLY DEFERRED

Please let me know if there is any other info I can provide.

Thanks,
Spencer
spencer@whiteskycommunications.com writes:
> Logs from one of the replicas are below:
> 2016-08-26 06:01:50 UTC FATAL:  unexpected GIN leaf action: 0
> 2016-08-26 06:01:50 UTC CONTEXT:  xlog redo Insert item, node:
> 1663/16387/33108 blkno: 6622 isdata: T isleaf: T 3 segments: 2 (add 0 items)
> 0 unknown action 0 ???

Hmm, we have seen a couple of reports of that recently but have not been
able to track it down. Can you provide more details about what you're
doing that triggers it? Maybe even a self-contained test case? It
doesn't have to be one that fails every time, as long as it'll fail
occasionally.

			regards, tom lane
Hi Tom,
I'm working on labbing this up and hopefully I can replicate it outside of
our production environment.

We have a text column that contains a fair amount of text (e.g. generated
from a fax of maybe 10-25 pages) and then a tsvector column of that text
with a gin index. To improve performance, we delete text on the old
records nightly.

This appears to be related to the number of records updated and the size
of the update to the tsvector column. Hopefully I can provide more details
and a test case soon.

Thanks,
Spencer

> On Aug 26, 2016, at 4:05 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> spencer@whiteskycommunications.com writes:
>> Logs from one of the replicas are below:
>> 2016-08-26 06:01:50 UTC FATAL:  unexpected GIN leaf action: 0
>> 2016-08-26 06:01:50 UTC CONTEXT:  xlog redo Insert item, node:
>> 1663/16387/33108 blkno: 6622 isdata: T isleaf: T 3 segments: 2 (add 0 items)
>> 0 unknown action 0 ???
>
> Hmm, we have seen a couple of reports of that recently but have not been
> able to track it down. Can you provide more details about what you're
> doing that triggers it? Maybe even a self-contained test case? It
> doesn't have to be one that fails every time, as long as it'll fail
> occasionally.
>
> 			regards, tom lane
Spencer Thomason <spencer@whiteskycommunications.com> writes:
> I have a python script which simulates our production environment and so
> far I have been able to replicate the failure on demand. What is the
> best way to make this available?

If it's not too large, posting it to this list would be great (we like
to have archived documentation about bugs). If it is big, or you would
rather not make it public, you can send it to me off-list.

			regards, tom lane
Hi Tom,
I have a python script which simulates our production environment and so
far I have been able to replicate the failure on demand. What is the best
way to make this available?

Also, I can make this sandbox environment available for remote access if
that would help.

Thanks!
Spencer

> On Aug 29, 2016, at 10:32 AM, Spencer Thomason <spencer@whiteskycommunications.com> wrote:
>
> Hi Tom,
> I'm working on labbing this up and hopefully I can replicate it outside
> of our production environment.
>
> We have a text column that contains a fair amount of text (e.g.
> generated from a fax of maybe 10-25 pages) and then a tsvector column of
> that text with a gin index. To improve performance, we delete text on
> the old records nightly.
>
> This appears to be related to the number of records updated and the size
> of the update to the tsvector column. Hopefully I can provide more
> details and a test case soon.
>
> Thanks,
> Spencer
>
>> On Aug 26, 2016, at 4:05 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>
>> spencer@whiteskycommunications.com writes:
>>> Logs from one of the replicas are below:
>>> 2016-08-26 06:01:50 UTC FATAL:  unexpected GIN leaf action: 0
>>> 2016-08-26 06:01:50 UTC CONTEXT:  xlog redo Insert item, node:
>>> 1663/16387/33108 blkno: 6622 isdata: T isleaf: T 3 segments: 2 (add 0 items)
>>> 0 unknown action 0 ???
>>
>> Hmm, we have seen a couple of reports of that recently but have not been
>> able to track it down. Can you provide more details about what you're
>> doing that triggers it? Maybe even a self-contained test case? It
>> doesn't have to be one that fails every time, as long as it'll fail
>> occasionally.
>>
>> 			regards, tom lane
Please see the attachments. I have included the master and slave configs
and python scripts that trigger this on our systems.

The steps to reproduce this are:
- create a new database
- set up streaming replication to 2 or more slaves
- run the populate_db.py script
- if the slaves do not crash, run the purge_db.py script followed by the
  populate_db.py script again

I didn't see failures immediately, but within 15-20 mins of inserting
records the slaves crashed. Also, I should note that this is on a SPARC T2
at 1.4GHz, so the single thread performance might be a factor as well.
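For readers without the attachments, the populate/purge cycle described above can be sketched in miniature. This is an illustrative approximation of the reported workload, not the attached populate_db.py/purge_db.py scripts; the table and column names follow the schema from the original report, while the 90-day cutoff and the empty-tsvector rebuild are assumptions for illustration:

```python
import random
import string

def random_ocr_text(pages=15, words_per_page=400, seed=None):
    """Generate filler text roughly comparable to OCR output of a
    10-25 page fax (the report mentions large per-row text)."""
    rng = random.Random(seed)
    words = (''.join(rng.choices(string.ascii_lowercase, k=rng.randint(3, 10)))
             for _ in range(pages * words_per_page))
    return ' '.join(words)

# Populate: insert rows carrying both the raw text and its tsvector,
# which feeds the GIN index fax_rxfax_search_index.
INSERT_SQL = """
    INSERT INTO fax_rxfax (id, fax_did_id, time_stamp, ocr_text, search_index)
    VALUES (%s, %s, now(), %s, to_tsvector('english', %s))
"""

# Purge: clear the stored text and rewrite the tsvector for old rows --
# the nightly job that preceded the standby crashes. The cutoff interval
# is hypothetical; the report only says "old records".
PURGE_SQL = """
    UPDATE fax_rxfax
    SET ocr_text = NULL, search_index = to_tsvector('english', '')
    WHERE time_stamp < now() - interval '90 days'
"""
```

Running these statements against a replicated primary (via any driver, e.g. psycopg2 parameter binding as the `%s` placeholders suggest) reproduces the churn pattern: bulk tsvector inserts followed by bulk tsvector updates.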
Attachments
Spencer Thomason <spencer@whiteskycommunications.com> writes:
> I have included the master and slave configs and python scripts that
> trigger this on our systems.

Thanks for sending this. Unfortunately I've had zero success reproducing
the problem so far. I had about run out of ideas as to what might explain
why it fails for you and not me, when I noticed this:

> Also, I should note that this is on a SPARC T2 at 1.4GHz so the single
> thread performance might be a factor as well.

Am I right in thinking that is a 32-bit machine? That might have
something to do with it (notably because of different maxalign).
Is the production machine you originally saw the problem on also 32-bit?

While I'm asking questions, could you send along the output of pg_config
for the build you're using?

			regards, tom lane
I wrote:
> Spencer Thomason <spencer@whiteskycommunications.com> writes:
>> Also, I should note that this is on a SPARC T2 at 1.4GHz so the single
>> thread performance might be a factor as well.

> Am I right in thinking that is a 32-bit machine? That might have
> something to do with it (notably because of different maxalign).

Awhile later it occurred to me that SPARCs are generally big-endian,
which led me to try your example on an old HPPA box, and kaboom!
I've now traced it to this bit in gindatapage.c:

	int			nmodifieditems;
	...
	memcpy(walbufend, &seginfo->nmodifieditems, sizeof(uint16));

which of course works fine on little-endian hardware and not at all on
big-endian. There might be more bugs (takes a while to run your example
on that old dinosaur :-() but this one is sufficient to explain the known
symptoms.

			regards, tom lane
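The diagnosis above can be illustrated without C or big-endian hardware: copying only the first two bytes of a four-byte integer preserves its value only when the low-order bytes come first in memory. A small sketch using Python's struct module (an illustration of the byte-order hazard, not PostgreSQL code):

```python
import struct

def first_two_bytes_as_uint16(value, byte_order):
    """Mimic memcpy(dst, &src, sizeof(uint16)) where src is a 4-byte int:
    take the first two bytes of the int's in-memory representation and
    reinterpret them as a uint16 in the same byte order."""
    raw = struct.pack(byte_order + 'i', value)  # the int as laid out in memory
    return struct.unpack(byte_order + 'H', raw[:2])[0]

# Little-endian (x86): the low-order bytes come first, so the truncated
# copy happens to preserve any value that fits in 16 bits.
print(first_two_bytes_as_uint16(3, '<'))

# Big-endian (SPARC, HPPA): the first two bytes are the high-order half,
# so any small count collapses to 0 and the WAL record is misassembled.
print(first_two_bytes_as_uint16(3, '>'))
```

On the little-endian layout this prints 3; on the big-endian layout it prints 0, which is how a nonzero segment count can be written into the WAL record as garbage that the standby then fails to replay.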
Hi Tom,
This is a 64-bit machine and I'm happy to provide remote access if that
helps. Requested info below. Thanks for the follow up!

$ uname -a
SunOS pg-test-1.wtsky.net 5.11 11.3 sun4v sparc SUNW,T5240 Solaris

$ pg_config
BINDIR = /usr/postgresql/9.4/bin
DOCDIR = /usr/share/doc/postgresql-doc-9.4
HTMLDIR = /usr/share/doc/postgresql-doc-9.4
INCLUDEDIR = /usr/include/postgresql
PKGINCLUDEDIR = /usr/include/postgresql
INCLUDEDIR-SERVER = /usr/include/postgresql/server
LIBDIR = /usr/lib/sparcv9
PKGLIBDIR = /usr/lib/sparcv9/postgresql
LOCALEDIR = /usr/share/locale
MANDIR = /usr/share/postgresql/9.4/man
SHAREDIR = /usr/share/postgresql/9.4
SYSCONFDIR = /etc/postgresql/9.4
PGXS = /usr/lib/sparcv9/postgresql/pgxs/src/makefiles/pgxs.mk
CONFIGURE = 'CC=/opt/solarisstudio12.4/bin/cc' 'CFLAGS=-m64 -xO5 -fast -fsimple=0 -xalias_level=any' 'LDFLAGS=-m64' '--with-libedit-preferred' '--with-pam' '--with-openssl' '--with-libxml' '--with-libxslt' '--mandir=/usr/share/postgresql/9.4/man' '--docdir=/usr/share/doc/postgresql-doc-9.4' '--sysconfdir=/etc/postgresql/9.4' '--datarootdir=/usr/share/' '--datadir=/usr/share/postgresql/9.4' '--bindir=/usr/postgresql/9.4/bin' '--libdir=/usr/lib/sparcv9/' '--libexecdir=/usr/lib/postgresql/' '--includedir=/usr/include/postgresql/' '--enable-dtrace' 'DTRACEFLAGS=-64' '--enable-nls' '--enable-integer-datetimes' '--enable-thread-safety' '--disable-rpath' '--with-uuid=e2fs' '--with-pgport=5432' '--with-system-tzdata=/usr/share/lib/zoneinfo'
CC = /opt/solarisstudio12.4/bin/cc -Xa
CPPFLAGS = -I/usr/include/libxml2
CFLAGS = -m64 -xO5 -fast -fsimple=0 -xalias_level=any
CFLAGS_SL = -KPIC
LDFLAGS = -L../../../src/common -m64 -Wl,--as-needed
LDFLAGS_EX =
LDFLAGS_SL =
LIBS = -lpgcommon -lpgport -lxslt -lxml2 -lpam -lssl -lcrypto -lz -ledit -lnsl -lsocket -lm
VERSION = PostgreSQL 9.4.8

> On Sep 2, 2016, at 6:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Spencer Thomason <spencer@whiteskycommunications.com> writes:
>> I have included the master and slave configs and python scripts that
>> trigger this on our systems.
>
> Thanks for sending this. Unfortunately I've had zero success reproducing
> the problem so far. I had about run out of ideas as to what might
> explain why it fails for you and not me, when I noticed this:
>
>> Also, I should note that this is on a SPARC T2 at 1.4GHz so the single
>> thread performance might be a factor as well.
>
> Am I right in thinking that is a 32-bit machine? That might have
> something to do with it (notably because of different maxalign).
> Is the production machine you originally saw the problem on also 32-bit?
>
> While I'm asking questions, could you send along the output of pg_config
> for the build you're using?
>
> 			regards, tom lane
I wrote:
> I've now traced it to this bit in gindatapage.c:

The attached patch (against 9.4) should be sufficient to fix this problem.
Perhaps you can do some testing there while I'm doing the same.

			regards, tom lane

diff --git a/src/backend/access/gin/gindatapage.c b/src/backend/access/gin/gindatapage.c
index 2090209..77725ac 100644
--- a/src/backend/access/gin/gindatapage.c
+++ b/src/backend/access/gin/gindatapage.c
@@ -86,7 +86,7 @@ typedef struct
 	char		action;
 
 	ItemPointerData *modifieditems;
-	int			nmodifieditems;
+	uint16		nmodifieditems;
 
 	/*
 	 * The following fields represent the items in this segment. If 'items' is
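Why the one-line type change suffices: once the field really is a uint16, copying sizeof(uint16) bytes copies the whole object, and a whole-object copy round-trips on any byte order. A sketch of that invariant in Python's struct notation (an illustration of the reasoning, not the PostgreSQL code):

```python
import struct

def copy_whole_uint16(value, byte_order):
    """Mimic memcpy of sizeof(uint16) bytes out of an actual uint16:
    both bytes of the object are copied, so no half is lost."""
    raw = struct.pack(byte_order + 'H', value)   # 2-byte object in memory
    return struct.unpack(byte_order + 'H', raw)[0]

# The copied value survives on little- and big-endian layouts alike.
for n in (0, 3, 255, 65535):
    assert copy_whole_uint16(n, '<') == n
    assert copy_whole_uint16(n, '>') == n
```

This is the difference from the buggy version: there the source object was a 4-byte int, so the 2-byte memcpy grabbed whichever half the platform's byte order put first.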
Hi Tom,
I've been testing this for some time and it appears to resolve the issue.
Thanks for the quick fix!

Best regards,
Spencer

> On Sep 3, 2016, at 8:49 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> I wrote:
>> I've now traced it to this bit in gindatapage.c:
>
> The attached patch (against 9.4) should be sufficient to fix this
> problem. Perhaps you can do some testing there while I'm doing the same.
>
> 			regards, tom lane
>
> diff --git a/src/backend/access/gin/gindatapage.c b/src/backend/access/gin/gindatapage.c
> index 2090209..77725ac 100644
> --- a/src/backend/access/gin/gindatapage.c
> +++ b/src/backend/access/gin/gindatapage.c
> @@ -86,7 +86,7 @@ typedef struct
> 	char		action;
> 
> 	ItemPointerData *modifieditems;
> -	int			nmodifieditems;
> +	uint16		nmodifieditems;
> 
> 	/*
> 	 * The following fields represent the items in this segment. If 'items' is