Discussion: App very unresponsive while performing simple update
Hi,
Is Postgres supposed to be able to handle concurrent requests while doing large updates?
This morning I was executing the following simple update statement that would affect 220,000 rows in my product table:
update product set is_hungry = 'true' where date_modified > current_date - 10;
But the application that accesses the product table for reading became very unresponsive while the update was happening.
Is it just a matter of slow I/O? The CPU usage seemed very low (less than 5%) and iostat showed less than 1 MB / sec throughput.
I was doing the update in psql.
Are there any settings that I could tweak that would help with this sort of thing?
Thanks,
____________________________________________________________________
Brendan Duddridge | CTO | 403-277-5591 x24 | brendan@clickspace.com
ClickSpace Interactive Inc.
Suite L100, 239 - 10th Ave. SE
Calgary, AB T2G 0V9
http://www.clickspace.com
____________________________________________________________________
Further to my issue, the update never did finish. I received the following message in psql:
ssprod=# update product set is_hungry = 'true' where date_modified > current_date - 10;
ERROR: deadlock detected
DETAIL: Process 18778 waits for ShareLock on transaction 711698780; blocked by process 15784.
Process 15784 waits for ShareLock on transaction 711697098; blocked by process 18778.
This is the second time I've tried to run this query without success.
Would changing the isolation level to serializable in my psql session help with this?
Thanks,
____________________________________________________________________
Brendan Duddridge | CTO | 403-277-5591 x24 | brendan@clickspace.com
ClickSpace Interactive Inc.
Suite L100, 239 - 10th Ave. SE
Calgary, AB T2G 0V9
http://www.clickspace.com
Brendan Duddridge <brendan@clickspace.com> writes:

> Further to my issue, the update never did finish. I received the
> following message in psql:
>
> ssprod=# update product set is_hungry = 'true' where date_modified > current_date - 10;
> ERROR:  deadlock detected
> DETAIL:  Process 18778 waits for ShareLock on transaction 711698780; blocked by process 15784.
> Process 15784 waits for ShareLock on transaction 711697098; blocked by process 18778.

What queries are those two processes executing? And what foreign keys do
you have on the product table or elsewhere referring to the product
table? And what indexes do you have on those columns?

I think this indicates you have foreign keys causing the deadlock. One
process is waiting until an update elsewhere finishes before modifying a
record that other update refers to via a foreign key. But that other
process is waiting similarly for the first one.

Do you have any foreign keys in other tables referring to the product
table? Do you have indexes on those other tables? The update needs to
check those other tables to make sure there are no references to the
records you're updating. If there's no index it has to do a sequential
scan.

To get a deadlock I think you would need another update running
somewhere though.

--
greg
Greg Stark <gsstark@mit.edu> writes:
> What queries are those two processes executing? And what foreign keys
> do you have on the product table or elsewhere referring to the product
> table? And what indexes do you have on those columns?

And what PG version is this?  Alvaro fixed the
foreign-keys-take-exclusive-locks problem in 8.1 ...

			regards, tom lane
Tom Lane <tgl@sss.pgh.pa.us> writes:
> Greg Stark <gsstark@mit.edu> writes:
>> What queries are those two processes executing? And what foreign keys
>> do you have on the product table or elsewhere referring to the
>> product table? And what indexes do you have on those columns?
>
> And what PG version is this?  Alvaro fixed the
> foreign-keys-take-exclusive-locks problem in 8.1 ...

Except I don't think this is taking an exclusive lock at all. The
original post had the deadlock detection fire on a SharedLock. I think
the other process is also an update and is holding an exclusive lock
while also trying to acquire a SharedLock for a foreign key column.

--
greg
Hi,

Thanks for your replies. We are using PostgreSQL 8.1.3 on OS X Server.

We do have foreign keys on other tables that reference the product
table. Also, there will be updates going on at the same time as this
update. When anyone clicks on a product details link, we issue an
update statement to increment the click_count on the product. e.g.

update product set click_count = click_count + 1;

There are 1.2 million rows in this table and my update will affect
200,000 of them.

We do have indexes on all foreign keys that reference the product
table.

Here's what our product table looks like:

                              Table "public.product"
            Column            |            Type             | Modifiers
------------------------------+-----------------------------+-----------
 click_count                  | integer                     |
 date_created                 | timestamp without time zone | not null
 date_modified                | timestamp without time zone |
 date_of_last_keyphrase_match | timestamp without time zone |
 ean                          | character varying(32)       |
 gtin                         | character varying(32)       |
 home_category_id             | integer                     |
 is_active                    | character varying(5)        |
 is_featured                  | character varying(5)        |
 is_hungry                    | character varying(5)        |
 isbn                         | character varying(32)       |
 manufacturer_id              | integer                     |
 media_for_clipboard_id       | integer                     |
 media_for_detail_id          | integer                     |
 media_for_thumbnail_id       | integer                     |
 mpn                          | character varying(512)      |
 product_id                   | integer                     | not null
 status_code                  | character varying(32)       |
 unsps_code                   | bigint                      |
 upc                          | character varying(32)       |
 riding_id                    | integer                     |
 name_en                      | character varying(512)      |
 name_fr                      | character varying(512)      |
 short_description_en         | character varying(2048)     |
 short_description_fr         | character varying(2048)     |
 long_description_en          | text                        |
 long_description_fr          | text                        |
Indexes:
    "product_pk" PRIMARY KEY, btree (product_id)
    "product__active_status_idx" btree (is_active, status_code)
    "product__additional_0__idx" btree (riding_id)
    "product__date_created_idx" btree (date_created)
    "product__date_modified_idx" btree (date_modified)
    "product__date_of_last_keyphrase_match_idx" btree (date_of_last_keyphrase_match)
    "product__home_category_id_fk_idx" btree (home_category_id)
    "product__hungry_idx" btree (is_hungry)
    "product__lower_name_en_idx" btree (lower(name_en::text))
    "product__lower_name_fr_idx" btree (lower(name_fr::text))
    "product__manufacturer_id_fk_idx" btree (manufacturer_id)
    "product__manufacturer_id_mpn_idx" btree (manufacturer_id, mpn)
    "product__media_for_clipboard_id_fk_idx" btree (media_for_clipboard_id)
    "product__media_for_detail_id_fk_idx" btree (media_for_detail_id)
    "product__media_for_thumbnail_id_fk_idx" btree (media_for_thumbnail_id)
    "product__upc_idx" btree (upc)
    "product_additional_2__idx" btree (is_active, status_code) WHERE is_active::text = 'true'::text AND status_code::text = 'complete'::text
Foreign-key constraints:
    "product_homecategory_fk" FOREIGN KEY (home_category_id) REFERENCES category(category_id) DEFERRABLE INITIALLY DEFERRED
    "product_manufacturer_fk" FOREIGN KEY (manufacturer_id) REFERENCES manufacturer(manufacturer_id) DEFERRABLE INITIALLY DEFERRED
    "product_mediaforclipboard_fk" FOREIGN KEY (media_for_clipboard_id) REFERENCES media(media_id) DEFERRABLE INITIALLY DEFERRED
    "product_mediafordetail_fk" FOREIGN KEY (media_for_detail_id) REFERENCES media(media_id) DEFERRABLE INITIALLY DEFERRED
    "product_mediaforthumbnail_fk" FOREIGN KEY (media_for_thumbnail_id) REFERENCES media(media_id) DEFERRABLE INITIALLY DEFERRED

____________________________________________________________________
Brendan Duddridge | CTO | 403-277-5591 x24 | brendan@clickspace.com

ClickSpace Interactive Inc.
Suite L100, 239 - 10th Ave. SE
Calgary, AB T2G 0V9

http://www.clickspace.com

On May 28, 2006, at 10:04 AM, Tom Lane wrote:

> Greg Stark <gsstark@mit.edu> writes:
>> What queries are those two processes executing? And what foreign
>> keys do you have on the product table or elsewhere referring to the
>> product table? And what indexes do you have on those columns?
>
> And what PG version is this?  Alvaro fixed the
> foreign-keys-take-exclusive-locks problem in 8.1 ...
>
> 			regards, tom lane
Greg Stark <gsstark@mit.edu> writes:
> Tom Lane <tgl@sss.pgh.pa.us> writes:
>> And what PG version is this?  Alvaro fixed the
>> foreign-keys-take-exclusive-locks problem in 8.1 ...
>
> Except I don't think this is taking an exclusive lock at all. The
> original post had the deadlock detection fire on a SharedLock.

Yeah, but it was a ShareLock on a transaction ID, which is the trace of
something doing XactLockTableWait, which is only done if we're blocking
on a locked or updated-but-uncommitted row. Since Brendan says he's
using 8.1, the FK theory is out, and I think what this probably is is a
garden-variety deadlock on tuple updates, ie, two concurrent
transactions tried to update the same tuples in different orders.

			regards, tom lane
Brendan Duddridge <brendan@clickspace.com> writes:

> We do have foreign keys on other tables that reference the product
> table. Also, there will be updates going on at the same time as this
> update. When anyone clicks on a product details link, we issue an
> update statement to increment the click_count on the product. e.g.
> update product set click_count = click_count + 1;

You should realize this will produce a lot of garbage records and mean
you'll have to be running vacuum very frequently. You might consider
instead of updating the main table inserting into a separate clickstream
table. That trades off not getting instantaneous live totals with
isolating the maintenance headache in a single place. That table will
grow large but you can prune it at your leisure without impacting query
performance on your main tables.

> There are 1.2 million rows in this table and my update will affect
> 200,000 of them.
>
> We do have indexes on all foreign keys that reference the product
> table.

Well I suppose you had an update running concurrently against one of
CATEGORY, MANUFACTURER, or MEDIA. Do any of those tables have a
reference back to the product table? Is it possible to have a record
with a reference back to the same record that refers to it?

I think you're seeing the problem because these foreign keys are all
initially deferred. That means you can update both tables and then
can't commit either one because it needs to obtain a shared lock on the
other record which is already locked for the update. I'm not certain
that making them not deferred would actually eliminate the deadlock. It
might just make it less likely.

The deferred foreign key checks may also be related to the performance
complaints. In my experience they're quite fast but I wonder what
happens when you do a large batch update and then need to perform a
whole slew of deferred foreign key checks.

More likely you were blocking on some lock. Until that other query
holding that lock tries to commit Postgres won't actually detect a
deadlock, it'll just sit waiting until the lock becomes available.

Also, you have a lot of indexes here. That alone will make updates
pretty slow.

--
greg
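[A minimal sketch of the clickstream approach Greg describes. The table and column names here are illustrative assumptions, not taken from Brendan's schema; adapt to taste.]

```sql
-- Append-only click log: inserts never create dead tuples in product.
CREATE TABLE product_click (
    product_id integer   NOT NULL,
    clicked_at timestamp NOT NULL DEFAULT now()
);

-- On each click, instead of UPDATE product SET click_count = ...:
INSERT INTO product_click (product_id) VALUES (42);

-- Periodic rollup: fold the accumulated clicks into product.click_count,
-- then empty the log so it only ever holds uncounted clicks.
BEGIN;
UPDATE product p
   SET click_count = coalesce(p.click_count, 0) + c.n
  FROM (SELECT product_id, count(*) AS n
          FROM product_click
         GROUP BY product_id) c
 WHERE p.product_id = c.product_id;
TRUNCATE product_click;
COMMIT;
```

Note that TRUNCATE inside the rollup transaction is only safe if nothing inserts into product_click concurrently (it takes an exclusive lock on the log table); with concurrent writers you would instead DELETE only the rows you just aggregated.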
Tom Lane <tgl@sss.pgh.pa.us> writes:
> Greg Stark <gsstark@mit.edu> writes:
>> Except I don't think this is taking an exclusive lock at all. The
>> original post had the deadlock detection fire on a SharedLock.
>
> Yeah, but it was a ShareLock on a transaction ID, which is the trace
> of something doing XactLockTableWait, which is only done if we're
> blocking on a locked or updated-but-uncommitted row.

Oops, didn't see this before I sent my last message. Brendan, in case
it's not clear, in case of a conflict between my explanation and Tom's
listen to Tom. :)

--
greg
On Sun, May 28, 2006 at 07:20:59PM -0400, Greg Stark wrote:
> Brendan Duddridge <brendan@clickspace.com> writes:
>> We do have foreign keys on other tables that reference the product
>> table. Also, there will be updates going on at the same time as this
>> update. When anyone clicks on a product details link, we issue an
>> update statement to increment the click_count on the product. e.g.
>> update product set click_count = click_count + 1;
>
> You should realize this will produce a lot of garbage records and mean
> you'll have to be running vacuum very frequently. You might consider
> instead of updating the main table inserting into a separate
> clickstream table. That trades off not getting instantaneous live
> totals with isolating the maintenance headache in a single place. That
> table will grow large but you can prune it at your leisure without
> impacting query performance on your main tables.

Actually, you can still get instant results, you just have to hit two
tables to do it.

> More likely you were blocking on some lock. Until that other query
> holding that lock tries to commit Postgres won't actually detect a
> deadlock, it'll just sit waiting until the lock becomes available.

Wow, are you sure that's how it works? I would think it would be able
to detect deadlocks as soon as both processes are waiting on each
other's locks.

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
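[Jim's "hit two tables" idea, sketched under one assumption: an append-only click log (here called product_click_log, a hypothetical name) that a periodic rollup folds into product.click_count and then empties, so the log holds only not-yet-counted clicks. The live total is then the rolled-up count plus the log's remainder.]

```sql
-- Live click totals without updating product on every click:
-- rolled-up count (product.click_count) + clicks not yet rolled up.
SELECT p.product_id,
       coalesce(p.click_count, 0) + coalesce(r.n, 0) AS live_clicks
  FROM product p
  LEFT JOIN (SELECT product_id, count(*) AS n
               FROM product_click_log
              GROUP BY product_id) r
    ON r.product_id = p.product_id;
```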
>> You should realize this will produce a lot of garbage records and
>> mean you'll have to be running vacuum very frequently. You might
>> consider instead of updating the main table inserting into a
>> separate clickstream table. That trades off not getting
>> instantaneous live totals with isolating the maintenance headache
>> in a single place. That table will grow large but you can prune it
>> at your leisure without impacting query performance on your main
>> tables.

We actually already have a table for this purpose: product_click_history

> Actually, you can still get instant results, you just have to hit two
> tables to do it.

Well, not really for our situation. We use the click_count on product
to sort our product listings by popularity. Joining with our
product_click_history to get live counts would be very slow. Some
categories have many tens of thousands of products. Any joins outside
our category_product table tend to be very slow.

We'll probably have to write a process to update the click_count from
querying our product_click_history table.

____________________________________________________________________
Brendan Duddridge | CTO | 403-277-5591 x24 | brendan@clickspace.com

ClickSpace Interactive Inc.
Suite L100, 239 - 10th Ave. SE
Calgary, AB T2G 0V9

http://www.clickspace.com
On Wednesday 31 May 2006 02:29, Brendan Duddridge wrote:
> We'll probably have to write a process to update the click_count from
> querying our product_click_history table.

How about an insert trigger on product_click_history which updates
click_count every say 10000 transactions or so?

jan

--
--------------------------------------------------------------
Jan de Visser                     jdevisser@digitalfairway.com

Baruk Khazad! Khazad ai-menu!
--------------------------------------------------------------
On Wed, May 31, 2006 at 01:23:07 -0500,
  "Jim C. Nasby" <jnasby@pervasive.com> wrote:
> On Sun, May 28, 2006 at 07:20:59PM -0400, Greg Stark wrote:
>> Brendan Duddridge <brendan@clickspace.com> writes:
>> More likely you were blocking on some lock. Until that other query
>> holding that lock tries to commit Postgres won't actually detect a
>> deadlock, it'll just sit waiting until the lock becomes available.
>
> Wow, are you sure that's how it works? I would think it would be able
> to detect deadlocks as soon as both processes are waiting on each
> other's locks.

I don't see how it could wait for a commit. If a command is blocked
waiting for a lock, how are you going to get a commit (you might get a
rollback if the query is aborted)?
"Jim C. Nasby" <jnasby@pervasive.com> writes:

> On Sun, May 28, 2006 at 07:20:59PM -0400, Greg Stark wrote:
>> You should realize this will produce a lot of garbage records and
>> mean you'll have to be running vacuum very frequently. You might
>> consider instead of updating the main table inserting into a
>> separate clickstream table.
>
> Actually, you can still get instant results, you just have to hit two
> tables to do it.

But that defeats the purpose of moving this traffic out to the
clickstream table. The whole point is to avoid generating garbage
records in your main table that you're doing a lot of real-time
queries against.

I would probably keep the clickstream table, then once a day or perhaps
more often perform an aggregate query against it to generate a summary
table (and then vacuum full or cluster it since it's half garbage).
Then join from the main product table to the summary table to sort by
popularity.

If you need results that are more up-to-date than 24 hours and/or can't
stand the downtime of the daily vacuum full on the summary table it
becomes a lot harder.

>> More likely you were blocking on some lock. Until that other query
>> holding that lock tries to commit Postgres won't actually detect a
>> deadlock, it'll just sit waiting until the lock becomes available.
>
> Wow, are you sure that's how it works? I would think it would be able
> to detect deadlocks as soon as both processes are waiting on each
> other's locks.

I didn't mean to describe the general situation, just what I suspected
was happening in this case. The user had a large batch update that was
performing poorly. I suspect it may have been performing poorly because
it was spending time waiting to acquire an exclusive lock. There would
be no deadlock yet, just very slow updates.

However the other client updating the other table has deferred foreign
key constraints back to the table the big update is acquiring all these
exclusive locks on. Locks for deferred constraints aren't taken until
they're checked. So the actual deadlock doesn't occur until the commit
occurs.

In any case Tom said I was misunderstanding the deadlock message he
posted. The kind of situation I'm talking about would look something
like this (two concurrent psql sessions):

    -- session 1                     -- session 2
    stark=> begin;
    BEGIN
                                     stark=> begin;
                                     BEGIN
    stark=> update t1 set a = 0;
    UPDATE 1
    stark=> update t1 set a = 1;
    UPDATE 1
                                     stark=> update t2 set b = 0;
                                     UPDATE 1
                                     stark=> update t2 set b = 2;
                                     UPDATE 1
    stark=> commit;
                                     stark=> commit;
                                     ERROR:  deadlock detected
                                     DETAIL:  Process 16531 waits for ShareLock on transaction 245131; blocked by process 16566.
                                     Process 16566 waits for ShareLock on transaction 245132; blocked by process 16531.
                                     CONTEXT:  SQL statement "SELECT 1 FROM ONLY "public"."t1" x WHERE "a" = $1 FOR SHARE OF x"
    COMMIT

stark=> \d t1
        Table "public.t1"
 Column |  Type   | Modifiers
--------+---------+-----------
 a      | integer | not null
 b      | integer |
Indexes:
    "t1_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "fk" FOREIGN KEY (b) REFERENCES t2(b) DEFERRABLE INITIALLY DEFERRED

stark=> \d t2
        Table "public.t2"
 Column |  Type   | Modifiers
--------+---------+-----------
 a      | integer |
 b      | integer | not null
Indexes:
    "t2_pkey" PRIMARY KEY, btree (b)
Foreign-key constraints:
    "fk" FOREIGN KEY (a) REFERENCES t1(a) DEFERRABLE INITIALLY DEFERRED
Hi Jan,

That sounds like a great idea! How would you control the update to
occur only every 10,000 transactions?

Is there a trigger setting for that somewhere?

Thanks,

____________________________________________________________________
Brendan Duddridge | CTO | 403-277-5591 x24 | brendan@clickspace.com

ClickSpace Interactive Inc.
Suite L100, 239 - 10th Ave. SE
Calgary, AB T2G 0V9

http://www.clickspace.com

On May 31, 2006, at 6:34 AM, Jan de Visser wrote:

> How about an insert trigger on product_click_history which updates
> click_count every say 10000 transactions or so?
On Wednesday 31 May 2006 13:34, Brendan Duddridge wrote:
> That sounds like a great idea! How would you control the update to
> occur only every 10,000 transactions?
>
> Is there a trigger setting for that somewhere?

I was thinking something like

    IF count(*) % 10000 = 0 then
        ... do stuff ...
    end if

Problem may be that that may be a bit expensive; maybe better to have a
sequence and use the sequence value. Or something like that.

Also, maybe you should do the actual update of click_count not in the
trigger itself, but have the trigger do a NOTIFY and have another
process do a LISTEN. Depends how long the update takes.

jan

--
--------------------------------------------------------------
Jan de Visser                     jdevisser@digitalfairway.com

Baruk Khazad! Khazad ai-menu!
--------------------------------------------------------------
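[A sketch of Jan's sequence-based variant, using the product_click_history and click_count names from the thread; the sequence, function, and trigger names are invented for illustration. The modulo test on a sequence is cheap and concurrency-safe, unlike count(*).]

```sql
CREATE SEQUENCE click_history_seq;

CREATE OR REPLACE FUNCTION maybe_rollup_clicks() RETURNS trigger AS $$
BEGIN
    -- Every 10,000th insert (across all sessions), roll the history
    -- up into product.click_count.
    IF nextval('click_history_seq') % 10000 = 0 THEN
        -- Alternatively, keep the trigger cheap as Jan suggests:
        -- NOTIFY click_rollup;  and let a LISTENing worker do this.
        UPDATE product p
           SET click_count = h.n
          FROM (SELECT product_id, count(*) AS n
                  FROM product_click_history
                 GROUP BY product_id) h
         WHERE p.product_id = h.product_id
           AND p.click_count IS DISTINCT FROM h.n;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER product_click_history_rollup
    AFTER INSERT ON product_click_history
    FOR EACH ROW EXECUTE PROCEDURE maybe_rollup_clicks();
```

Doing the rollup inside the trigger means one unlucky click pays for the whole batch update, which is why the NOTIFY/LISTEN split may be preferable.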
On Wed, May 31, 2006 at 11:24:05AM -0400, Greg Stark wrote:
> stark=> begin;
> BEGIN
> stark=> begin;
> BEGIN
> stark=> update t1 set a = 0;
> UPDATE 1
> stark=> update t1 set a = 1;
> UPDATE 1
>
> stark=> update t2 set b = 0;
> UPDATE 1
> stark=> update t2 set b = 2;
> UPDATE 1
>
> stark=> commit;
> stark=> commit;
> ERROR:  deadlock detected
> DETAIL:  Process 16531 waits for ShareLock on transaction 245131; blocked by process 16566.
> Process 16566 waits for ShareLock on transaction 245132; blocked by process 16531.
> CONTEXT:  SQL statement "SELECT 1 FROM ONLY "public"."t1" x WHERE "a" = $1 FOR SHARE OF x"
>
> COMMIT

I tried duplicating this but couldn't. What's the data in the tables?

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
"Jim C. Nasby" <jnasby@pervasive.com> writes:
> I tried duplicating this but couldn't. What's the data in the tables?

Sorry, I had intended to include the definition and data:

stark=> create table t1 (a integer primary key, b integer);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "t1_pkey" for table "t1"
CREATE TABLE
stark=> create table t2 (a integer, b integer primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "t2_pkey" for table "t2"
CREATE TABLE
stark=> insert into t1 values (1,2);
INSERT 0 1
stark=> insert into t2 values (1,2);
INSERT 0 1
stark=> alter table t1 add constraint fk foreign key (b) references t2 deferrable initially deferred;
ALTER TABLE
stark=> alter table t2 add constraint fk foreign key (a) references t1 deferrable initially deferred;
ALTER TABLE

stark=> \d t1
        Table "public.t1"
 Column |  Type   | Modifiers
--------+---------+-----------
 a      | integer | not null
 b      | integer |
Indexes:
    "t1_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "fk" FOREIGN KEY (b) REFERENCES t2(b) DEFERRABLE INITIALLY DEFERRED

stark=> \d t2
        Table "public.t2"
 Column |  Type   | Modifiers
--------+---------+-----------
 a      | integer |
 b      | integer | not null
Indexes:
    "t2_pkey" PRIMARY KEY, btree (b)
Foreign-key constraints:
    "fk" FOREIGN KEY (a) REFERENCES t1(a) DEFERRABLE INITIALLY DEFERRED

stark=> select * from t1;
 a | b
---+---
 1 | 2
(1 row)

stark=> select * from t2;
 a | b
---+---
 1 | 2
(1 row)

--
greg