problem creating index in 6,5,3
From | Karl DeBisschop
---|---
Subject | problem creating index in 6,5,3
Date | |
Msg-id | 199912171741.MAA22926@skillet.infoplease.com
Responses | Re: [BUGS] problem creating index in 6,5,3 (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-bugs
If PostgreSQL failed to compile on your computer or you found a bug that is likely to be specific to one platform then please fill out this form and e-mail it to pgsql-ports@postgresql.org.

To report any other bug, fill out the form below and e-mail it to pgsql-bugs@postgresql.org.

If you not only found the problem but solved it and generated a patch then e-mail it to pgsql-patches@postgresql.org instead. Please use the command "diff -c" to generate the patch.

You may also enter a bug report at http://www.postgresql.org/ instead of e-mailing this form.

============================================================================
                        POSTGRESQL BUG REPORT TEMPLATE
============================================================================

Your name               : Karl DeBisschop
Your email address      : kdebisschop@alert.infoplease.com

System Configuration
---------------------
  Architecture (example: Intel Pentium)         : Intel Pentium III 450MHz x2 SMP

  Operating System (example: Linux 2.0.26 ELF)  : Linux version 2.2.7-1.23smp (root@jiangsup.var.com) (gcc version egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)) #1 SMP Thu Jul 8 15:23:01 PDT 1999

  PostgreSQL version (example: PostgreSQL-6.5.2): PostgreSQL-6.5.3 (from postgresql-6.5.3-1.i386.rpm distributed by www.PostgreSQL.org)

  Compiler used (example: gcc 2.8.0)            :

Please enter a FULL description of your problem:
------------------------------------------------

Cannot create index.
Database consists of:

+------------------+----------------------------------+----------+
| webadmin         | abusers                          | table    |
| kdebisschop      | daily                            | table    |
| webadmin         | monthly                          | table    |
| postgres         | q11999                           | table    |
| kdebisschop      | q21999                           | table    |
| postgres         | q41998                           | table    |
| webadmin         | users_into_db_temp               | table    |
+------------------+----------------------------------+----------+

Table = daily (also q11999, q21999, q41998, users_into_db_temp, abusers)
+----------------------------------+----------------------------------+-------+
| Field                            | Type                             | Length|
+----------------------------------+----------------------------------+-------+
| n                                | int4                             |     4 |
| date                             | date                             |     4 |
| cookie                           | char()                           |     1 |
| id                               | text                             |   var |
+----------------------------------+----------------------------------+-------+

Table = monthly
+----------------------------------+----------------------------------+-------+
| Field                            | Type                             | Length|
+----------------------------------+----------------------------------+-------+
| month                            | date                             |     4 |
| totalusers                       | int4                             |     4 |
| newusers                         | int4                             |     4 |
+----------------------------------+----------------------------------+-------+

Table 'daily' has 10,122,805 tuples and consumes 872,333,312 bytes.

(One part of trying to resolve this has been to move data into the q[1-4](199n) tables - daily was formerly split into two files - but this has not seemed to help.)

The problem manifests itself as:

webusers=> CREATE INDEX zdaily_id ON daily (id);
pqReadData() -- backend closed the channel unexpectedly.
        This probably means the backend terminated abnormally
        before or while processing the request.
We have lost the connection to the backend, so further processing is impossible. Terminating.

It runs for probably a little less than an hour, then dies. I have set debug to level 3 (I was unable to find documentation on what levels are available) and no message was printed to the log regarding the cause of the failure.
(That is, I grepped the log for 'daily' and 'webusers' - the only occurrence was connecting and issuing the query - no mention after that.)

The disk has plenty of space:

$ df -k .
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/sda7              6123224   3312116   2495032  57% /disk/1

==> 2,554,912,768 bytes available

During index generation, it looks like another 1,101,922,304 bytes of temp sort file space is consumed, which still leaves over 1.5GB free.

I tried to take snapshots of the index space as it cycled through each pass, in the hope that I could find out a little about what was going on when it died. This last snapshot is from just before the backend terminated:

[postgres@sterno data]$ df -k base/webusers; ls -l base/webusers/pg_sorttemp21771.* base/webusers/z*
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/sda7              6123224   4187180   1619968  72% /disk/1
-rw-------   1 postgres postgres         0 Dec 17 12:29 base/webusers/pg_sorttemp21771.0
-rw-------   1 postgres postgres  78675968 Dec 17 12:27 base/webusers/pg_sorttemp21771.1
-rw-------   1 postgres postgres         0 Dec 17 12:29 base/webusers/pg_sorttemp21771.10
-rw-------   1 postgres postgres  28639232 Dec 17 12:29 base/webusers/pg_sorttemp21771.11
-rw-------   1 postgres postgres         0 Dec 17 12:29 base/webusers/pg_sorttemp21771.12
-rw-------   1 postgres postgres         0 Dec 17 12:26 base/webusers/pg_sorttemp21771.13
-rw-------   1 postgres postgres         0 Dec 17 12:29 base/webusers/pg_sorttemp21771.2
-rw-------   1 postgres postgres  78675968 Dec 17 12:27 base/webusers/pg_sorttemp21771.3
-rw-------   1 postgres postgres         0 Dec 17 12:29 base/webusers/pg_sorttemp21771.4
-rw-------   1 postgres postgres  78708736 Dec 17 12:28 base/webusers/pg_sorttemp21771.5
-rw-------   1 postgres postgres         0 Dec 17 12:29 base/webusers/pg_sorttemp21771.6
-rw-------   1 postgres postgres  78675968 Dec 17 12:29 base/webusers/pg_sorttemp21771.7
-rw-------   1 postgres postgres         0 Dec 17 12:29 base/webusers/pg_sorttemp21771.8
-rw-------   1 postgres postgres  78708736 Dec 17 12:29 base/webusers/pg_sorttemp21771.9
-rw-------   1 postgres postgres 472956928 Dec 17 12:33 base/webusers/zdaily_id

This did work for us using 6.5.1 compiled in house. Our problems started with the 6.5.3 RPM.

Other than taking the dump of this data and copying it into a new database, I have found no way to reproduce this problem. This is the largest single table we have in production. It does happen with a fresh copy into a totally new database, so I think the problem is in the DBMS.

Please describe a way to repeat the problem.   Please try to provide a
concise reproducible example, if at all possible:
----------------------------------------------------------------------

If you know how this problem might be fixed, list the solution below:
---------------------------------------------------------------------
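For anyone wanting to repeat the snapshot procedure described above, a minimal sketch follows. The helper name is my own and the `pg_sorttemp<pid>.<pass>` naming and `base/<dbname>` location are taken from the listing in the report; adjust the path for your installation.

```shell
#!/bin/sh
# Hypothetical helper (not part of the original report): snapshot free
# disk space and the per-pass sort temp files during an index build.
snapshot_sort_space() {
    dbdir="$1"
    # Overall filesystem usage for the database directory, in 1K blocks.
    df -k "$dbdir"
    # Size of each merge pass's temp file, if any exist yet.
    ls -l "$dbdir"/pg_sorttemp*.* 2>/dev/null || echo "no sort temp files"
}

# Example: snapshot_sort_space /usr/local/pgsql/data/base/webusers
```

Run it periodically (e.g. from a loop with `sleep`) while the CREATE INDEX is in progress to catch the state just before a crash.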