Discussion: deleting a foreign key that has no references
Hello,

I'm using 8.1.8, and I have a situation where a record in one table is only meaningful when it is referenced via foreign key by one or more records in any one of several tables.

So, really what I want is when one of the referring records is deleted, to have a trigger check whether it was the last one to use that foreign key, and if so, to delete that other record, too.

My first implementation of this functionality was a trigger function that executed a COUNT(*) on each of the tables that could hold a reference. That became far too slow for the number of records in these tables.

Then, I tried setting ON DELETE RESTRICT or NO ACTION on the foreign key constraint and catching the exception thrown when a deletion attempt is made on the record. However, it seems that this PL/pgSQL snippet fails to catch such an error:

    BEGIN
        EXCEPTION
            WHEN RAISE_EXCEPTION THEN
                RETURN NULL;
            WHEN OTHERS THEN
                RETURN NULL;
    END;

But, really, I just want to be able to test how many references there are to a key. Is there some way to do that?

Thank you,
Glen Mabey
On 3/19/07, Glen W. Mabey <Glen.Mabey@swri.org> wrote:
> But, really, I just want to be able to test to see how many references
> there are to a key. Is there some way to do that?

Write a trigger which does that.

depesz

-- 
http://www.depesz.com/ - nowy, lepszy depesz
On Mon, Mar 19, 2007 at 04:51:57PM +0100, hubert depesz lubaczewski wrote:
> On 3/19/07, Glen W. Mabey <Glen.Mabey@swri.org> wrote:
> > But, really, I just want to be able to test to see how many
> > references there are to a key. Is there some way to do that?
>
> write a triggers which do that.

I understand that a trigger should be written, and I have already implemented two such triggers, as described above.

What I'm hoping to find out is whether there is some way to directly find out, using a SELECT query, how many references there are to a key.

Glen
On Mon, 19 Mar 2007, Glen W. Mabey wrote:
> Then, I tried to setting ON DELETE RESTRICT or NO ACTION on the foreign
> key constraint, and then trying to catch the exception thrown when a
> deletion attempt is made on the record. However, it seems that this
> PL/pgsql snippet fails to catch such an error:
>
>     BEGIN EXCEPTION
>         WHEN RAISE_EXCEPTION THEN
>             RETURN NULL;
>         WHEN OTHERS THEN
>             RETURN NULL;
>     END;

Was that the actual function you used, or just a shortened version? A function like that, with a delete on the referenced table in the body for the appropriate key, appeared to behave reasonably on my 8.2 system with an immediate constraint, but I didn't do very much testing. One issue is that to test the insert of a row into the referenced table, you'd probably need to defer a check that the row is referenced, in order to have time to insert the referencing rows.

> But, really, I just want to be able to test to see how many references
> there are to a key. Is there some way to do that?

Currently, no, apart from selecting on the referencing tables.
"Glen W. Mabey" <Glen.Mabey@swri.org> writes: > What I'm hoping to find out is whether there is some way to directly > find out how many (using a SELECT query) references there are to a key. There's no hidden shortcut for that, no. regards, tom lane
Glen W. Mabey wrote:
> What I'm hoping to find out is whether there is some way to directly
> find out how many (using a SELECT query) references there are to a key.
This query will return the list of foreign keys which refer to primary keys:

    SELECT g AS "DB",
           pn.nspname AS "PK_schema", pc.relname AS "PK_table", pa.attname AS "PK_column",
           n.nspname AS "FK_schema", c.relname AS "FK_table", a.attname AS "FK_column",
           b.n AS "FK_column_number",
           f.conname AS "FK_name", pr.conname AS "PK_name"
    FROM current_database() g,
         pg_catalog.pg_attribute a,
         pg_catalog.pg_attribute pa,
         pg_catalog.pg_class c,
         pg_catalog.pg_class pc,
         pg_catalog.pg_namespace n,
         pg_catalog.pg_namespace pn,
         pg_catalog.pg_constraint f
             LEFT JOIN pg_catalog.pg_constraint pr
                    ON (f.conrelid = pr.conrelid AND pr.contype = 'p'),
         (SELECT * FROM generate_series(1, current_setting('max_index_keys')::int, 1)) b(n)
    WHERE n.oid = c.relnamespace
      AND pn.oid = pc.relnamespace
      AND pc.oid = f.confrelid
      AND c.oid = f.conrelid
      AND pa.attrelid = f.confrelid
      AND a.attrelid = f.conrelid
      AND pa.attnum = f.confkey[b.n]
      AND a.attnum = f.conkey[b.n]
      AND f.contype = 'f'
      AND f.conkey[b.n] <> 0
      AND has_schema_privilege(n.oid, 'USAGE'::text);

Add conditions on pr.conname and you will get what you need.
On 3/19/07, Glen W. Mabey <Glen.Mabey@swri.org> wrote:
> > write a triggers which do that.
> I understand that a trigger should be written, and I have already
> implemented two such triggers, as described above.

no, i think i didn't make myself clear. let's use this situation: we have tables:

    create table x (id serial primary key, some_text text);
    create table y (id serial primary key, x_id int4 not null references x (id), some_field text);

where table x is your table in which you want to make some deletes, and table y is some table that has a foreign key to it.

now, you add to table x a field:

    alter table x add column refcount int4 not null default 0;

and then we add a trigger:

    CREATE OR REPLACE FUNCTION some_trg() RETURNS TRIGGER AS $BODY$
    DECLARE
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE x SET refcount = refcount + 1 WHERE id = NEW.x_id;
        ELSIF TG_OP = 'UPDATE' THEN
            IF NEW.x_id <> OLD.x_id THEN
                UPDATE x SET refcount = refcount + 1 WHERE id = NEW.x_id;
                UPDATE x SET refcount = refcount - 1 WHERE id = OLD.x_id;
            END IF;
        ELSIF TG_OP = 'DELETE' THEN
            UPDATE x SET refcount = refcount - 1 WHERE id = OLD.x_id;
        END IF;
        RETURN NULL;
    END;
    $BODY$ LANGUAGE 'plpgsql';

    CREATE TRIGGER some_trg AFTER INSERT OR UPDATE OR DELETE ON y
        FOR EACH ROW EXECUTE PROCEDURE some_trg();

then you have to populate the refcount field with the current values, but this is easily doable, and as far as i know you are already doing it in your code.

so - the trigger keeps the refcount up to date. it is quite lightweight, so it shouldn't be a problem. and what's more important - the size of the table the trigger is on doesn't matter.

simple, and working.

depesz

-- 
http://www.depesz.com/ - nowy, lepszy depesz
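depesz's reference-counting scheme can be exercised outside PostgreSQL as well. The following is a minimal sketch of the same idea using SQLite triggers driven from Python; the table and column names follow depesz's example, but SQLite's trigger syntax differs from PL/pgSQL (one trigger per event, no TG_OP), so this is an illustration of the bookkeeping, not the exact code above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE x (id INTEGER PRIMARY KEY, some_text TEXT,
                refcount INTEGER NOT NULL DEFAULT 0);
CREATE TABLE y (id INTEGER PRIMARY KEY,
                x_id INTEGER NOT NULL REFERENCES x (id),
                some_field TEXT);

-- keep x.refcount in sync with the number of referencing rows in y
CREATE TRIGGER y_ins AFTER INSERT ON y BEGIN
    UPDATE x SET refcount = refcount + 1 WHERE id = NEW.x_id;
END;
CREATE TRIGGER y_del AFTER DELETE ON y BEGIN
    UPDATE x SET refcount = refcount - 1 WHERE id = OLD.x_id;
END;
CREATE TRIGGER y_upd AFTER UPDATE OF x_id ON y
WHEN NEW.x_id <> OLD.x_id BEGIN
    UPDATE x SET refcount = refcount + 1 WHERE id = NEW.x_id;
    UPDATE x SET refcount = refcount - 1 WHERE id = OLD.x_id;
END;
""")

conn.execute("INSERT INTO x (id, some_text) VALUES (1, 'target')")
conn.execute("INSERT INTO y (x_id, some_field) VALUES (1, 'a')")
conn.execute("INSERT INTO y (x_id, some_field) VALUES (1, 'b')")
after_inserts = conn.execute("SELECT refcount FROM x WHERE id = 1").fetchone()[0]

conn.execute("DELETE FROM y WHERE some_field = 'a'")
after_delete = conn.execute("SELECT refcount FROM x WHERE id = 1").fetchone()[0]

print(after_inserts, after_delete)  # 2 1
```

With refcount maintained this way, "is this the last reference?" becomes a single indexed lookup on x, regardless of how large y grows.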
On Mon, Mar 19, 2007 at 09:46:22AM -0700, Stephan Szabo wrote:
> Was that the actual function you used or just a shortened version? A
> function like that with a delete of the referenced table in the body for
> the appropriate key appeared to have reasonable behavior on my 8.2 system
> with an immediate constraint, but I didn't do very much testing.

Okay, it turns out that I simply had not implemented the exception handler correctly. Here's what worked:

    BEGIN
        DELETE FROM "Cuts" WHERE "Cuts".id = OLD.cut_id;
    EXCEPTION
        WHEN OTHERS THEN
            NULL;
    END;
    RETURN NULL;

andyk: Thank you for the SELECT string you contributed.
Unfortunately, I could not understand what it was doing -- it was way over my head WRT psql proficiency. So, I don't know whether it would have worked. At any rate, thank you all for your suggestions. Testing for an error seems to be the simplest and easiest way to accomplish what I need to do, and it seems to be fairly fast, too. Best Regards, Glen Mabey
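The approach Glen settled on — attempt the DELETE and treat a foreign-key violation as "still referenced" — translates to most SQL databases. Here is a portable sketch in Python with SQLite; the "Cuts" table and cut_id column come from Glen's snippet, while the referencing table name `segment` is made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite FK enforcement is off by default
conn.executescript("""
CREATE TABLE "Cuts" (id INTEGER PRIMARY KEY);
CREATE TABLE segment (id INTEGER PRIMARY KEY,
                      cut_id INTEGER NOT NULL REFERENCES "Cuts"(id));
INSERT INTO "Cuts" VALUES (1), (2);
INSERT INTO segment (cut_id) VALUES (1);
""")

def delete_cut_if_unreferenced(cut_id):
    """Try the delete; a foreign-key violation means it is still referenced."""
    try:
        conn.execute('DELETE FROM "Cuts" WHERE id = ?', (cut_id,))
        return True
    except sqlite3.IntegrityError:
        return False

cut1_deleted = delete_cut_if_unreferenced(1)  # still referenced by segment
cut2_deleted = delete_cut_if_unreferenced(2)  # no references, row goes away
print(cut1_deleted, cut2_deleted)  # False True
```

As in the thread, the cost of the failed delete is borne by the constraint check itself, which uses the FK indexes, rather than by a COUNT(*) over every referencing table.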
On Mon, 2007-03-19 at 11:12 -0500, Glen W. Mabey wrote:
> What I'm hoping to find out is whether there is some way to directly
> find out how many (using a SELECT query) references there are to a
> key.

In the easy case, when your schema doesn't change often, you can just hard-code a query of the FK tables and add up the row counts. I bet something like

    SELECT count(*) FROM (
        SELECT * FROM FKtable1
        UNION ALL
        SELECT * FROM FKtable2
        ...
    )

will work (and I'm guessing that the UNION ALL will optimize well). Obviously, you want indexes on the FKs.

The harder and more general case is to build such a query dynamically from pg_depends. A good start would be to write a function that returns an SQL query like the above to count the referents of PKtable(PKcolumn). If you can declare this function stable or immutable (I'm not sure of this), then it might not be too painful to generate the query within the trigger itself. Otherwise, you might have to store/update these queries in a separate table after every DDL change.

See the pg_depends documentation at
http://www.postgresql.org/docs/8.2/interactive/catalog-pg-depend.html

-Reece

-- 
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0
./universe -G 6.672e-11 -e 1.602e-19 -protonmass 1.673e-27 -uspres bush
kernel warning: universe consuming too many resources. Killing.
universe killed due to catastrophic leadership. Try -uspres carter.
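Reece's hard-coded variant can be made concrete. The sketch below, using Python and SQLite for portability, counts references to one key across two hypothetical referencing tables (fk1, fk2); pushing the key into each UNION ALL branch lets the per-table FK indexes do the work:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pk  (id INTEGER PRIMARY KEY);
CREATE TABLE fk1 (id INTEGER PRIMARY KEY, pk_id INTEGER REFERENCES pk(id));
CREATE TABLE fk2 (id INTEGER PRIMARY KEY, pk_id INTEGER REFERENCES pk(id));
-- indexes on the FK columns, as Reece recommends
CREATE INDEX fk1_pk_id ON fk1 (pk_id);
CREATE INDEX fk2_pk_id ON fk2 (pk_id);
INSERT INTO pk VALUES (1), (2);
INSERT INTO fk1 (pk_id) VALUES (1), (1);
INSERT INTO fk2 (pk_id) VALUES (1);
""")

# total number of references to key 1, summed over both FK tables
n = conn.execute("""
    SELECT count(*) FROM (
        SELECT pk_id FROM fk1 WHERE pk_id = :k
        UNION ALL
        SELECT pk_id FROM fk2 WHERE pk_id = :k
    ) AS refs
""", {"k": 1}).fetchone()[0]
print(n)  # 3
```

Selecting only the key column in each branch keeps the subqueries index-only-friendly; a count of zero would mean the key is safe to delete.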
On Mon, 2007-03-19 at 13:03 -0700, Reece Hart wrote:
> The harder and more general case is to build such a query dynamically
> from pg_depends...
> See the pg_depends documentation at ...

Apologies. I intended to write pg_constraint and the documentation at
http://www.postgresql.org/docs/8.2/interactive/catalog-pg-constraint.html

-Reece

-- 
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0
Discussion: Insert fail: could not open relation with OID 3221204992

Hi all,

I am inserting 783159 records, but the insert is failing after 634314 records. I am getting an "ERROR: could not open relation with OID 3221204992" message. I am using:

1- PostgreSQL: 8.2.3
2- OS: Red Hat Enterprise Linux AS release 3
3- Logfile output:

    ERROR: XX000: could not open relation with OID 3221204992
    LOCATION: relation_open, heapam.c:700
    STATEMENT: INSERT INTO parcel (OBJECTID, shape) VALUES ( $1, $2 )

This is happening only on Linux; on Windows (pg 8.2.1) I can insert all the data. I have gone through all the hints on the list for this thread and tried most of them, like:

- reindex pg_class
- set enable_indexscan = off;
- increase shared_buffers / temp_buffers
- SELECT oid, relname, relkind FROM pg_catalog.pg_class WHERE oid <= 3221204992 ORDER BY oid DESC LIMIT 6;

There is no temporary table, and no table is being dropped or created. No record is returned for

    SELECT oid, * FROM pg_class WHERE oid = 3221204992;

Wondering if somebody has some other hints to resolve this problem?

Thanks.
Ale Raza.
araza@esri.com writes:
> I am inserting 783159 records but the insert is failing after 634314
> records. I am getting "ERROR: could not open relation with OID
> 3221204992" message.

Please provide more detail, like what the table schema is, what indexes and foreign keys it has, and exactly what the insert process is (eg do you have all these inserts wrapped in a BEGIN?). Also, what is the client-side software? Is the OID mentioned in the complaint the same every time you try it?

			regards, tom lane
Tom,

It's a C API client, and the inserts are not wrapped in a BEGIN/END.

Here is the schema info:

    sde93=# \d parcel
        Table "sde.parcel"
      Column  |    Type     | Modifiers
    ----------+-------------+-----------
     objectid | integer     | not null
     shape    | st_geometry |

st_geometry is our own implementation of a geometry type. In loading mode it has no index. Initially, when the table is created, a GiST index is created for shape and a btree for objectid, but both are dropped for loading.

The client is reading data from an ESRI personal GeoDatabase and inserting into the parcel table.

The OID is the same every time.

Thanks.
Ale.
araza@esri.com writes:
> Here is the schema info:
> ...
> shape | st_geometry |
> st_geometry is our own implementation for geometry type.

To be blunt, I'd suggest looking there first. Can you duplicate the failure when loading into a table with no custom datatype?

			regards, tom lane
Tom,

No problem without the shape column. I can load all the data.

    # SELECT oid, relname, relkind FROM pg_catalog.pg_class
      WHERE oid <= 3221204992 ORDER BY oid DESC LIMIT 6;
      oid  |       relname        | relkind
    -------+----------------------+---------
     78036 | pg_toast_78032_index | i
     78034 | pg_toast_78032       | t
     78032 | parcel               | r
     78031 | pg_toast_78027_index | i
     78029 | pg_toast_78027       | t
     78027 | parcel_t             | r
    (6 rows)

Table parcel_t: without shape. Table parcel: with shape.

My concerns are:
- Why is it an issue on Linux but not on Windows? In both cases it's a Windows client.
- Why is the shape column making the insert fail?

Ale.
"Ale Raza" <araza@esri.com> writes: > No problem without shape column. I can load all data. What I thought :-( > My concern is: > - Why it's an issue on Linux not on windows? In both cases it's a window > client. Platform-dependent bug in your code, likely. Without seeing the code it's impossible to speculate much further, but I'd look first for places that scribble on memory not allocated to you (perhaps due to a miscalculation of the size needed for a dynamically-allocated object). regards, tom lane
Tom,

The _Recv function is not receiving the data when the client sends a bytea of size 211k. Here is the stack:

    Breakpoint 3, #####_Recv (fcinfo=0xbfffa3a0) at binary.c:138
    138  StringInfo src_buf = (StringInfo) PG_DETOAST_DATUM(PG_GETARG_DATUM(0));
    (gdb) p *src_buf
    Cannot access memory at address 0x0
    (gdb) n
    0x0819c4f8 in PostgresMain ()
    (gdb) bt
    #0  0x0819c4f8 in PostgresMain ()
    #1  0x0817a4d9 in BackendRun ()
    #2  0x08179e03 in BackendStartup ()
    #3  0x0817825f in ServerLoop ()
    #4  0x0817785b in PostmasterMain ()
    #5  0x0813fd1c in main ()
    (gdb) c
    Continuing.

Am I missing some parameters, or is it a bug?

Thanks.
Ale.
"Ale Raza" <araza@esri.com> writes: > Breakpoint 3, #####_Recv (fcinfo=0xbfffa3a0) at binary.c:138 > 138 StringInfo src_buf = (StringInfo) > PG_DETOAST_DATUM(PG_GETARG_DATUM(0)); > (gdb) p *src_buf > Cannot access memory at address 0x0 At the point where you've stopped, src_buf hasn't been assigned to yet. regards, tom lane
" ....src_buf hasn't been assigned to yet. ..." May be copy/paste issue. It has been assigned. Here is the complete stack for bytea size 7480 and 211758.The later fail. Pass (bytea size = 7480): Breakpoint 1, ST_Geometry_Recv (fcinfo=0xbfffcb90) at binary.c:138 138 StringInfo src_buf = (StringInfo) PG_DETOAST_DATUM(PG_GETARG_DATUM(0)); (gdb) p *src_buf $2 = {data = 0xff0c8d8b <Address 0xff0c8d8b out of bounds>, len = -1031143425, maxlen = -13595335, cursor = 158662655} (gdb) n 139 SE_ST_GEOMETRY *result = NULL; (gdb) p *src_buf $3 = {data = 0x84ce6b2 "º\022\b\003", len = 7480, maxlen = 7481, cursor = 0} (gdb) bt #0 ST_Geometry_Recv (fcinfo=0xbfffcb90) at binary.c:139 #1 0x08209ebd in ReceiveFunctionCall () #2 0x08209fde in OidReceiveFunctionCall () #3 0x0819a5a0 in exec_bind_message () #4 0x0819c6e6 in PostgresMain () #5 0x0817a4d9 in BackendRun () #6 0x08179e03 in BackendStartup () #7 0x0817825f in ServerLoop () #8 0x0817785b in PostmasterMain () #9 0x0813fd1c in main () (gdb) c Continuing. Fail (bytea size = 211758): Breakpoint 1, ST_Geometry_Recv (fcinfo=0xbfffcb90) at binary.c:138 138 StringInfo src_buf = (StringInfo) PG_DETOAST_DATUM(PG_GETARG_DATUM(0)); (gdb) p *src_buf $4 = {data = 0xff0c8d8b <Address 0xff0c8d8b out of bounds>, len = -1031143425, maxlen = -13595335, cursor = 158662655} (gdb) n 0x0819c4f8 in PostgresMain () (gdb) p *src_buf No symbol "src_buf" in current context. (gdb) bt #0 0x0819c4f8 in PostgresMain () #1 0x0817a4d9 in BackendRun () #2 0x08179e03 in BackendStartup () #3 0x0817825f in ServerLoop () #4 0x0817785b in PostmasterMain () #5 0x0813fd1c in main () (gdb) The pointer moves to PostgresMain (), as soon as I move to next. Ale. 
"Ale Raza" <araza@esri.com> writes: > Breakpoint 1, ST_Geometry_Recv (fcinfo=0xbfffcb90) at binary.c:138 > 138 StringInfo src_buf = (StringInfo) PG_DETOAST_DATUM(PG_GETARG_DATUM(0)); > (gdb) p *src_buf > $4 = {data = 0xff0c8d8b <Address 0xff0c8d8b out of bounds>, len = -1031143425, maxlen = -13595335, cursor = 158662655} > (gdb) n > 0x0819c4f8 in PostgresMain () > (gdb) p *src_buf > No symbol "src_buf" in current context. Try recompiling with a lower optimization level (maybe even -O0) so you can debug. gdb sometimes gets confused by optimized code... regards, tom lane
Same results with the -O0 option.

    Breakpoint 1, ST_Geometry_Recv (fcinfo=0xbfffd4d0) at binary.c:138
    138  in binary.c
    (gdb) bt
    #0  ST_Geometry_Recv (fcinfo=0xbfffd4d0) at binary.c:138
    #1  0x08209ebd in ReceiveFunctionCall ()
    #2  0x08209fde in OidReceiveFunctionCall ()
    #3  0x0819a5a0 in exec_bind_message ()
    #4  0x0819c6e6 in PostgresMain ()
    #5  0x0817a4d9 in BackendRun ()
    #6  0x08179e03 in BackendStartup ()
    #7  0x0817825f in ServerLoop ()
    #8  0x0817785b in PostmasterMain ()
    #9  0x0813fd1c in main ()
    (gdb) p *src_buf
    $3 = {data = 0xff0c8d8b <Address 0xff0c8d8b out of bounds>, len = -1031143425,
          maxlen = -13595335, cursor = 158662655}
    (gdb) n
    0x0819c4f8 in PostgresMain ()
    (gdb) p *src_buf
    No symbol "src_buf" in current context.
    (gdb) bt
    #0  0x0819c4f8 in PostgresMain ()
    #1  0x0817a4d9 in BackendRun ()
    #2  0x08179e03 in BackendStartup ()
    #3  0x0817825f in ServerLoop ()
    #4  0x0817785b in PostmasterMain ()
    #5  0x0813fd1c in main ()
    (gdb) c

Ale.
"Ale Raza" <araza@esri.com> writes: > Same results with -O0 option. Hmmm ... I guess I should have stopped to read the code a little closer: >> Breakpoint 1, ST_Geometry_Recv (fcinfo=0xbfffcb90) at binary.c:138 >> 138 StringInfo src_buf = (StringInfo) > PG_DETOAST_DATUM(PG_GETARG_DATUM(0)); Where did you get that from? Receive functions are supposed to do this: StringInfo buf = (StringInfo) PG_GETARG_POINTER(0); StringInfos aren't a toastable datatype. The loss of control probably represents the toast code throwing an error ... regards, tom lane
Discussion: COPY dropping rows on very large imports

Just bringing back to life a message I sent last July.

The problem I was having was that when importing very large data sets, COPY seemed to drop some data. I built a script to use INSERTs instead: same problem. My server runs 8.1.3 on Linux. Several people investigated; Reece Hart was unable to reproduce the problem using my same data file, and Tom Lane suggested a buggy client installation dropping info, or hardware problems in the server.

I've come back to this problem recently, and have found a couple of interesting things. I'm using a 418MB data file. wc -l gives me 6,802,367 lines.

On the server (Linux running Postgres 2.1.3): 4GB RAM, and the disk Postgres lives on is a network drive. After copying the data file to the server and running psql locally:
- Import try 1: count(*) gives 6,801,664
- Import try 2: count(*) gives 6,802,241
- Every import gave a different count

The log file doesn't say anything other than

    LOG: checkpoints are occurring too frequently (27 seconds apart)

but that should not be a problem, right?

I also tried it in my local-disk Windows installation, which runs Postgres 8.1.0:
- Import try 1: count(*) gives 6,824,366
- All imports since then give, correctly, count(*) = 6,802,367

This time, I FTP'd the file to my Mac laptop, which is running 8.2.3. The import gave me the correct count every time.

Remembering the suggestion of hardware problems, I made an empty install of 8.2.3 on my server. On it, the data file was fully imported every single time. I decided to copy all my data to 8.2.3 to test whether the bug depends on a) the size of the database (30GB of data), or b) some problem with the data or some stored procedure. I've tried the import, and again, it's correct every single time.

During the import of the database data into 8.2.3, I got a mostly clean run, except for the 3 error messages listed below, which suggest a buggy kernel.
A bunch of questions:

1) Has anybody seen this type of behavior in 8.1.*? Is there a known bug that might explain the problems in both the Linux server and the Windows box?
2) Those errors on the database import are troubling, but they affected only 3 tables. Wouldn't a buggy kernel give more trouble? Is it possible that this was a bug in 8.1.3's pg_dump? Also, since the database is so big, the output of pg_dump is split'd and bzip2'd, so there's room for error there too.
3) On Friday I'm going to upgrade the production database from 8.1.3 to 8.2.3. Any caveats or words of advice?

Thank you,
Jaime

    ERROR: unexpected data beyond EOF in block 23662 of relation "portfolio"
    HINT: This has been seen to occur with buggy kernels; consider updating your system.
    CONTEXT: COPY portfolio, line 3426949: "210395 1974263 1 723"
    STATEMENT: COPY portfolio (deal_id, security_id, amount, portfolio_version) FROM stdin;

    ERROR: unexpected data beyond EOF in block 4028 of relation "coverage_test_val"
    HINT: This has been seen to occur with buggy kernels; consider updating your system.
    CONTEXT: COPY coverage_test_val, line 338035: "340676 588 2006-08-25 STEPUP_TRIGGER2 6 0.0 0.0 \ 7"
    STATEMENT: COPY coverage_test_val (coverage_test_val_id, deal_id, observation_date,
        coverage_test_name, coverage_test_priority, coverage_test_value,
        coverage_test_trigger, coverage_test_type_id) FROM stdin;

    ERROR: unexpected data beyond EOF in block 4049 of relation "deal_current_val"
    HINT: This has been seen to occur with buggy kernels; consider updating your system.
    CONTEXT: COPY deal_current_val, line 511050: "612884 7008 2005-09-21 23 1.957871"
    STATEMENT: COPY deal_current_val (deal_current_val_id, deal_id, observation_date,
        type_id, deal_current_val) FROM stdin;

***********************************************************************
Bear Stearns is not responsible for any recommendation, solicitation, offer or agreement or any information about any transaction, customer account or account activity contained in this communication. Bear Stearns does not provide tax, legal or accounting advice. You should consult your own tax, legal and accounting advisors before engaging in any transaction. In order for Bear Stearns to comply with Internal Revenue Service Circular 230 (if applicable), you are notified that any discussion of U.S. federal tax issues contained or referred to herein is not intended or written to be used, and cannot be used, for the purpose of: (A) avoiding penalties that may be imposed under the Internal Revenue Code; nor (B) promoting, marketing or recommending to another party any transaction or matter addressed herein.
***********************************************************************
Correction: my sever is running 8.1.3 Jaime Silvela wrote: > Just bringing back to life a message I sent last July. > > The problem I was having was that when importing very large data sets, > COPY seemed to drop some data. I built a script to use INSERTs, and > same problem. My server runs 8.1.3 on Linux. Several people > investigated, Reece Hart was unable to reproduce the problem using my > same data file, and Tom Lane suggested a buggy client installation > dropping info, or hardware problems in the server. > > I've come back to this problem recently, and have found a couple of > interesting things. > I'm using a 418MB data file. wc -l gives me 6,802,367 lines > > On the server (Linux running Postgres 2.1.3). 4GB RAM, and the disk > Postgres lives in is a network drive. > After copying the data file to the server and running psql locally. > - Import try 1: count(*) gives 6,801,664 > - Import try 2: count(*) gives 6,802,241 > - Every import gave a different count > The log file doesn't say anything other than > LOG: checkpoints are occurring too frequently (27 seconds apart), but > that's should not be a problem, right? > > I also tried it in my local-disk Windows installation, which runs > Postgres 8.1.0 > - Import try 1: count(*) gives 6,824,366 > - All imports since then give, correctly, count(*) = 6,802,367 > > This time, I FTP'd the file to my mac laptop, which is running 8.2.3. > The import gave me the correct count every time. > Remembering the suggestion of hardware problems, I made an empty > install of 8.2.3 on my server. On it, the data file was fully imported > every single time. > I decided to copy all my data to 8.2.3 to test dependency of the bug > on a) size of the database (30GB of data) or b) some problem with data > or some stored procedure. I've tried the import, and again, it's > correct every single time. 
> During the import of the database data to 8.2.3, I got a mostly clean
> run, except for the 3 error messages listed below, which suggest a
> buggy kernel.
>
> A bunch of questions:
> 1) Has anybody seen this type of behavior in 8.1.*? Is there a known
> bug that might explain the problems on both the Linux server and the
> Windows box?
> 2) Those errors on the database import are troubling, but they affected
> only 3 tables. Wouldn't a buggy kernel give more trouble? Is it
> possible that this was a bug in 8.1.3's pg_dump? Also, since the
> database is so big, the output of pg_dump is split and bzip2'd, so
> there's room for error there too.
> 3) On Friday I'm going to upgrade the production database from 8.1.3
> to 8.2.3. Any caveats or words of advice?
>
> Thank you,
> Jaime
>
> ERROR: unexpected data beyond EOF in block 23662 of relation "portfolio"
> HINT: This has been seen to occur with buggy kernels; consider
> updating your system.
> CONTEXT: COPY portfolio, line 3426949: "210395 1974263 1 723"
> STATEMENT: COPY portfolio (deal_id, security_id, amount,
> portfolio_version) FROM stdin;
> ERROR: unexpected data beyond EOF in block 4028 of relation
> "coverage_test_val"
> HINT: This has been seen to occur with buggy kernels; consider
> updating your system.
> CONTEXT: COPY coverage_test_val, line 338035: "340676 588
> 2006-08-25 STEPUP_TRIGGER2 6 0.0 0.0 7"
> STATEMENT: COPY coverage_test_val (coverage_test_val_id, deal_id,
> observation_date, coverage_test_name, coverage_test_priority,
> coverage_test_value, coverage_test_trigger,
> coverage_test_type_id) FROM stdin;
> ERROR: unexpected data beyond EOF in block 4049 of relation
> "deal_current_val"
> HINT: This has been seen to occur with buggy kernels; consider
> updating your system.
> CONTEXT: COPY deal_current_val, line 511050: "612884 7008
> 2005-09-21 23 1.957871"
> STATEMENT: COPY deal_current_val (deal_current_val_id, deal_id,
> observation_date, type_id, deal_current_val) FROM stdin;
" ...StringInfos aren't a toastable datatype....." Ok, thanks. This was the issue. Now, I can load all data. Ale. -----Original Message----- From: Tom Lane [mailto:tgl@sss.pgh.pa.us] Sent: Monday, March 26, 2007 6:26 PM To: Ale Raza Cc: postgresql-general Subject: Re: [GENERAL] Insert fail: could not open relation with OID 3221204992 "Ale Raza" <araza@esri.com> writes: > Same results with -O0 option. Hmmm ... I guess I should have stopped to read the code a little closer: >> Breakpoint 1, ST_Geometry_Recv (fcinfo=0xbfffcb90) at binary.c:138 >> 138 StringInfo src_buf = (StringInfo) > PG_DETOAST_DATUM(PG_GETARG_DATUM(0)); Where did you get that from? Receive functions are supposed to do this: StringInfo buf = (StringInfo) PG_GETARG_POINTER(0); StringInfos aren't a toastable datatype. The loss of control probably represents the toast code throwing an error ... regards, tom lane
Hi All,

I am looking for the kind code for STATISTIC_KIND GEOMETRY to calculate the selectivity/stats for the geometry type (st_geometry). According to PostgreSQL (pg_statistic.h):

 * The present allocation of "kind" codes is:
 *
 *	1-99:		reserved for assignment by the core PostgreSQL project
 *			(values in this range will be documented in this file)
 *	100-199:	reserved for assignment by the PostGIS project
 *			(values to be documented in PostGIS documentation)
 *	200-9999:	reserved for future public assignments
 *
 * For private use you may choose a "kind" code at random in the range
 * 10000-30000.  However, for code that is to be widely disseminated it is
 * better to obtain a publicly defined "kind" code by request from the
 * PostgreSQL Global Development Group.
 */

Wondering where I can find the "kind" codes for this new st_geometry type?

Thanks.
Ale Raza
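[Editor's note: as a concrete illustration of the allocation comment quoted above, an extension that has not requested a public assignment can simply pick a code in the private range. The macro name and value below are hypothetical, not an actual PostGIS or ESRI assignment.]

```c
/*
 * Private "kind" codes must fall in the 10000-30000 range that
 * pg_statistic.h reserves for private use (hypothetical example).
 */
#define STATISTIC_KIND_MY_GEOMETRY_HISTOGRAM	21007
```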
araza@esri.com writes:
> I am looking for the kind code for STATISTIC_KIND GEOMETRY to calculate
> the selectivity/stats for geometry type (st_geometry).

Presumably, the PostGIS people would be the ones to ask.

			regards, tom lane
Tom,

It is mentioned in the postgresql-8.2.3\src\include\catalog\pg_statistic.h file that the values between 100-199 are reserved for assignment by the PostGIS project. Is PostgreSQL reserving these values? Do I need to reserve values like PostGIS is doing?

Ale.

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Wednesday, April 25, 2007 11:27 AM
To: Ale Raza
Cc: postgresql-general
Subject: Re: [GENERAL] Where to find kind code for STATISTIC_KIND GEOMETRY?

araza@esri.com writes:
> I am looking for the kind code for STATISTIC_KIND GEOMETRY to calculate
> the selectivity/stats for geometry type (st_geometry).

Presumably, the PostGIS people would be the ones to ask.

			regards, tom lane
On Wed, 2007-04-25 at 17:09 -0700, araza@esri.com wrote:
> It is mentioned in the postgresql-8.2.3\src\include\catalog\pg_statistic.h
> file that the values between 100-199 are reserved for assignment by the
> PostGIS project. Is PostgreSQL reserving these values? Do I need to
> reserve values like PostGIS is doing?

You just need to ask. We're keen to help everyone integrate. Code-related issues are usually discussed on pgsql-hackers.

It's possible that there's a requirements overlap between ESRI and PostGIS, so it would be better if you could discuss it between you, so we don't have to reserve a second range of numbers for identical purposes. I do understand there may be some licensing issues there.

-- 
Simon Riggs
EnterpriseDB   http://www.enterprisedb.com
DEBUG_GEOMETRY_STATS is specific to the PostGIS package. Have you tried their discussion group at postgis-subscribe@yahoogroups.com?

M--

----- Original Message ----- 
From: <araza@esri.com>
To: "postgresql-general" <pgsql-general@postgresql.org>
Sent: Wednesday, April 25, 2007 2:02 PM
Subject: [GENERAL] Where to find kind code for STATISTIC_KIND GEOMETRY?