FW: pg_clog error
From | terry@greatgulfhomes.com |
---|---|
Subject | FW: pg_clog error |
Date | |
Msg-id | 002b01c233d8$94ece1e0$2766f30a@development.greatgulfhomes.com |
List | pgsql-general |
Just a note to anyone who can shed light on the issue: I reran the script and it finished without errors (other than one complaining that the index I had deleted was not present; that was expected, because none of the commands after the original error ran while the backend was restarting, and the CREATE INDEX was one of them).

Any ideas, anyone? BTW, I have many gigs of HD space left on the partition where the data dir resides.

Terry Fielder
Network Engineer
Great Gulf Homes / Ashton Woods Homes
terry@greatgulfhomes.com

-----Original Message-----
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-owner@postgresql.org] On Behalf Of terry@greatgulfhomes.com
Sent: Thursday, July 25, 2002 8:16 AM
To: Postgres (E-mail)
Subject: [GENERAL] pg_clog error

Every night I pull data from a legacy system. Last night, for the first time, I got the error message:

FATAL 2:  open of /usr/local/pgsql/data/pg_clog/0081 failed: No such file or directory
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
connection to server was lost

In the script, imports into 2 other tables, both quite large (>300k tuples), completed successfully. I looked in the directory in question: the directory is there, but the file 0081 is not, just 0000. This of course causes the rest of the actions in the script to fail while the backend is restarting.

The script has never had this problem before that I have noticed, and I confirmed that it had not happened on previous nights. Does anyone know what causes this? Do I need to increase the number of file handles somewhere?
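As a starting point for the "file handles" question, a minimal diagnostic sketch: list what segment files actually exist under pg_clog, and show the open-file limit for the shell that launches the postmaster. The `PGCLOG_DIR` path is taken from the error message above and may differ on your system; this is a check to gather facts, not a fix.

```shell
#!/bin/sh
# Path assumed from the error message; adjust for your installation.
PGCLOG_DIR=/usr/local/pgsql/data/pg_clog

# List the clog segment files the server expects (0000, 0001, ...).
ls -l "$PGCLOG_DIR" 2>/dev/null || echo "pg_clog directory not found on this machine"

# Per-process open-file limit for this shell; if the postmaster is started
# from a shell with a low value here, running out of descriptors is one
# way file-open failures can surface.
ulimit -n
```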
Detailed snippet of the occurrence is below:

<snip>
psql -c 'CREATE INDEX "customer_extra_budget_rm_idx" on "customer_extra_budget" using btree ( "division_id" "bpchar_ops", "elevation" "bpchar_ops", "model_id" "bpchar_ops", "option_code" "bpchar_ops", "project_id" "bpchar_ops", "room_id" "bpchar_ops" );' -d devtest2
CREATE
psql -c "DROP INDEX customer_extra_costs_idx; DROP INDEX customer_extra_costs_ct_idx" -d devtest2
DROP
psql -c "delete from customer_extra_costs where division_id ='GGH';" -d devtest2
FATAL 2:  open of /usr/local/pgsql/data/pg_clog/0081 failed: No such file or directory
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
connection to server was lost
</snip>

Thanks

Terry Fielder
Network Engineer
Great Gulf Homes / Ashton Woods Homes
terry@greatgulfhomes.com