Getting "could not read block" error when creating an index on a function.

From: Demitri Muna
Subject: Getting "could not read block" error when creating an index on a function.
Date:
Msg-id: E88B1929-C94D-45C1-B11F-0B48B39D64B7@demitri.com
Replies: Re: Getting "could not read block" error when creating an index on a function.
List: pgsql-general
Hello,

I’m getting stuck on a problem I was hoping I could get some help with. I’m trying to create an index on the result of a function that touches two tables, like this, and get the following error:

CREATE INDEX my_idx ON mytable (first_time(id));
ERROR:  could not read block 0 in file "base/16386/42868": read only 0 of 8192 bytes
CONTEXT:  SQL function "first_time" during startup
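For context, the function being indexed has roughly this shape (a simplified, hypothetical sketch — the table and column names here are placeholders, not the actual definition):

```sql
-- Hypothetical sketch only: an IMMUTABLE SQL function that reads from
-- two (placeholder) tables to compute the value to be indexed.
CREATE FUNCTION first_time(integer) RETURNS timestamptz AS $$
    SELECT min(o.obs_time)
    FROM observations o
    JOIN exposures e ON e.obs_id = o.id
    WHERE o.target_id = $1
$$ LANGUAGE SQL IMMUTABLE;
```

Note that a function used in an index expression must be marked IMMUTABLE, even though one that reads other tables cannot truly be immutable.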

Every time I run this, the last number in the file path increments by one, e.g.

ERROR:  could not read block 0 in file "base/16386/42869": read only 0 of 8192 bytes
ERROR:  could not read block 0 in file "base/16386/42870": read only 0 of 8192 bytes
ERROR:  could not read block 0 in file "base/16386/42871": read only 0 of 8192 bytes

The database is sitting on two tablespaces (pg_default and ‘data2’). When I try to use the second, I get:

CREATE INDEX my_idx ON mytable (first_time(id)) TABLESPACE data2;
ERROR:  could not read block 0 in file "pg_tblspc/17007/PG_13_202007201/16386/42870": read only 0 of 8192 bytes
CONTEXT:  SQL function "first_time" during startup

with the last number similarly incrementing upon repeated attempts.

Relevant details:

* PostgreSQL version 13.1 running on Ubuntu 20.04.1 on an AWS instance using 2 x 8TB EBS storage.
* The database is ~15TB in size.
* I am not worried about data loss; the database can be considered read-only and I have all of the files needed to recreate any table.
* I am hoping not to recreate the whole database from scratch, since doing so and creating the required indices will take more than a week.
* I used these settings while importing the files to improve the import speed, since I was not worried about data loss (all turned back on after the import):

autovacuum = off
synchronous_commit=off
fsync = off
full_page_writes = off

* I will not do the above again. :)
* The postmaster crashed at least twice during the import due to running out of disk space.
* Creating any number of new indices on bare columns is no problem.
* I DROPped and recreated the functions with no change.
* This statement returns no results (but maybe I am looking at the wrong thing):

select n.nspname AS schema, c.relname AS relation from pg_class c inner join pg_namespace n on (c.relnamespace = n.oid) where c.relfilenode = '16386';
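One thing worth noting about the path "base/16386/42868": the first number (16386) is the database's OID and the last number (42868) is the relation's filenode, so it may be the filenode rather than 16386 that should be looked up. There is also a built-in function that maps a filenode back to a relation directly, e.g. (0 here means "the database's default tablespace"):

```sql
-- Returns the relation using that filenode in the default tablespace,
-- or NULL if no relation currently uses it.
SELECT pg_filenode_relation(0, 42868);
```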

From reading about this error (missing block files), it suggests I have some database corruption, which is fine as I can easily delete anything problematic and recreate it. I’ve deleted the indices related to the function and recreated them, but the same error remains. Accessing the related tables seems OK, but with that much data I can’t guarantee that; I don’t get any errors.

Any help would be appreciated!

Cheers,
Demitri



