Discussion: Problem creating a database

Problem creating a database

From:
Joshua White
Date:
Hi all,

I'm hoping someone can point me in the right direction. I've got a PostgreSQL 10 server instance on CentOS 6, which I set up and manage. I have full admin rights on this machine, so I can access logs, etc.

Recently I attempted to create a new database in this cluster. The command succeeds, but when I try to connect to the new database, I get a "could not open file" error:

psql: FATAL:  could not open file "base/618720/2610": No such file or directory

It has been some time since I set up the database, so I don't know how long ago this became an issue. I can't seem to find any other instances of this problem online either. The logs are not helpful - even on the highest debug setting, I only see the "connection authorized" then the fatal "could not open file" error.

The data directory is on a separate disk array from the OS. I recently checked it and there are no disk errors.

Any thoughts or ideas would be much appreciated.

Kind Regards,
Joshua

Re: Problem creating a database

From:
Laurenz Albe
Date:
Joshua White wrote:
> Recently I attempted to create a new database in this cluster. The command succeeds,
> but when I try to connect to the new database, I get a "could not open file" error:
> 
> psql: FATAL:  could not open file "base/618720/2610": No such file or directory

Looks like the file backing the "pg_index" table is gone.

Can you check if the file exists in the data directory or not?
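
A quick way to check from the OS, assuming the stock PGDG data directory layout on CentOS (adjust the path to your actual data directory):

   ls -l /var/lib/pgsql/10/data/base/618720/2610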

It's hard to determine what happened, but something has been
eating your data.  As it is, your best option would be to
drop the database and recreate it from a backup.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: Problem creating a database

From:
Ben Madin
Date:
Do you have adequate disk space left on your array?

cheers

Ben


On 15 October 2018 at 17:46, Joshua White <joshua.white@monash.edu> wrote:
> Recently I attempted to create a new database in this cluster. The command succeeds,
> but when I try to connect to the new database, I get a "could not open file" error:
> 
> psql: FATAL:  could not open file "base/618720/2610": No such file or directory




--
Dr Ben Madin
BVMS MVPHMgmt PhD MANZCVS GAICD
Managing Director
Skype: benmadin
Address:
5 Shuffrey Street
Fremantle, WA 6160
Australia

Re: Problem creating a database

From:
Joshua White
Date:
Thanks for the suggestion - plenty of disk space left (several hundred gigabytes free).

Kind Regards,
Joshua White

On Tue, 16 Oct 2018 at 15:03, Ben Madin <ben@ausvet.com.au> wrote:
> Do you have adequate disk space left on your array?

Re: Problem creating a database

From:
Joshua White
Date:
Thanks for the tip. I've checked and the on-disk file behind "pg_index" still exists.

I have existing databases in this cluster that I'd prefer not to drop and recreate if possible.

I've tried dropping and recreating the new database I want to use, but each time I get the same type of error.

Kind Regards,
Joshua White

On Mon, 15 Oct 2018 at 21:13, Laurenz Albe <laurenz.albe@cybertec.at> wrote:
> Looks like the file backing the "pg_index" table is gone.
> 
> Can you check if the file exists in the data directory or not?
> 
> It's hard to determine what happened, but something has been
> eating your data.  As it is, your best option would be to
> drop the database and recreate it from a backup.

Re: Problem creating a database

From:
Laurenz Albe
Date:
Please don't top post.

Joshua White wrote:
> On Mon, 15 Oct 2018 at 21:13, Laurenz Albe <laurenz.albe@cybertec.at> wrote:
> > Looks like the file backing the "pg_index" table is gone.
> > 
> > Can you check if the file exists in the data directory or not?
>
> Thanks for the tip. I've checked and the on-disk file behind "pg_index" still exists.
>
> I have existing databases in this cluster that I'd prefer not to drop and recreate if possible.
>
> I've tried dropping and recreating the new database I want to use, but each time I get the same type of error.

"pg_index" initially uses file 2610.
That may of course change if you rewrite the table.

Try the following as OS user "postgres":
   oid2name -d <your database> -f 2610
Then you can see which table is associated with that file.
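
If oid2name is not at hand, a rough equivalent from psql is the sketch below; the first argument is the tablespace OID, with 0 meaning the database's default tablespace:

   SELECT pg_filenode_relation(0, 2610);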

Anyway, your database seems to be quite wrecked, and you'd probably need
an expert to save what can be saved.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: Problem creating a database

From:
Colin Coles
Date:

On 15/10/2018 10:46, Joshua White wrote:
> psql: FATAL:  could not open file "base/618720/2610": No such file or directory
>
> The data directory is on a separate disk array from the OS. I recently checked
> it and there are no disk errors.

Hi Joshua,
  I have a distant recollection of a similar problem on CentOS, solved 
by raising the mount point further up the fs hierarchy. i.e. I think I 
ended up having to have the disk partition mounted on /var/lib before it 
would work properly, after initially trying it on /var/lib/pgsql/data. 
If you're using 10 your path will probably be different, but I think it 
boiled down to the ownership of the underlying mount point.
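
A quick way to inspect that, with example paths for a stock PGDG layout (a sketch, not a diagnosis; both directories should normally be owned by the postgres user):

   ls -ld /var/lib/pgsql /var/lib/pgsql/10/data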

Cheers,
Colin.


Re: Problem creating a database

From:
Joshua White
Date:
On Tue, 16 Oct 2018 at 18:58, Laurenz Albe <laurenz.albe@cybertec.at> wrote:
> "pg_index" initially uses file 2610.
> That may of course change if you rewrite the table.
> 
> Try the following as OS user "postgres":
>    oid2name -d <your database> -f 2610
> Then you can see which table is associated with that file.
> 
> Anyway, your database seems to be quite wrecked, and you'd probably need
> an expert to save what can be saved.


Sorry about the top-posting - default behaviour of my email client.

./oid2name -d postgres -f 2610 returns no tables.

Four of my six custom databases within the cluster return "pg_index" as the table. The other two return nothing. 

I guess this is fast becoming a situation where I'd be better off to start over and restore the entire cluster from backups. Not ideal, but might have to be done. At least I can take the opportunity to replace CentOS 6 with 7 if I do that.

Kind Regards,
Joshua.

Re: Problem creating a database

From:
Joshua White
Date:
On Tue, 16 Oct 2018 at 19:13, Colin Coles <mailinglists@beachcentral.co.uk> wrote:
> I have a distant recollection of a similar problem on CentOS, solved
> by raising the mount point further up the fs hierarchy. i.e. I think I
> ended up having to have the disk partition mounted on /var/lib before it
> would work properly, after initially trying it on /var/lib/pgsql/data.
> If you're using 10 your path will probably be different, but I think it
> boiled down to the ownership of the underlying mount point.

Hi Colin,

Thanks for the tip - I'll have a look into the mount point ownership as well.

Kind Regards,
Joshua. 

Re: Problem creating a database

From:
Joshua White
Date:
> > > Recently I attempted to create a new database in this cluster. The command succeeds,
> > > but when I try to connect to the new database, I get a "could not open file" error:
> > >
> > > psql: FATAL:  could not open file "base/618720/2610": No such file or directory
> > >
> >
> > Looks like the file backing the "pg_index" table is gone.
> >
> > Can you check if the file exists in the data directory or not?
>
> Thanks for the tip. I've checked and the on-disk file behind "pg_index" still exists.
>
> I have existing databases in this cluster that I'd prefer not to drop and recreate if possible.
>
> I've tried dropping and recreating the new database I want to use, but each time I get the same type of error.

"pg_index" initially uses file 2610.

Anyway, your database seems to be quite wrecked, and you'd probably need
an expert to save what can be saved.


So you're right about it being seriously corrupted somehow. All my custom databases seem to work fine. I have data checksums and amcheck enabled and haven't found any issues using the query below.

SELECT bt_index_check(index => c.oid), c.relname, c.relpages
FROM pg_index i
JOIN pg_opclass op ON i.indclass[0] = op.oid
JOIN pg_am am ON op.opcmethod = am.oid
JOIN pg_class c ON i.indexrelid = c.oid
JOIN pg_namespace n ON c.relnamespace = n.oid
WHERE am.amname = 'btree'
-- Don't check temp tables, which may be from another session:
AND c.relpersistence != 't'
AND i.indisready AND i.indisvalid
ORDER BY c.relpages DESC;
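
For reference, amcheck is a contrib extension and has to be installed in each database before bt_index_check is available:

CREATE EXTENSION IF NOT EXISTS amcheck;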

However, when I went to connect to template1, it failed:

# \connect template1
FATAL:  could not open file "base/1/2610": No such file or directory
Previous connection kept

If template1 is corrupt, that would explain why I am unable to create new databases.
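
For what it's worth, CREATE DATABASE can clone template0 instead of the damaged template1, which would sidestep the immediate error without repairing anything (assuming template0 itself is intact; "mynewdb" is a placeholder name):

   CREATE DATABASE mynewdb TEMPLATE template0;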

Guess it's time to rebuild.

Re: Problem creating a database

From:
Laurenz Albe
Date:
Joshua White wrote:
> > > > > > psql: FATAL:  could not open file "base/618720/2610": No such file or directory
> > > 
> > > "pg_index" initially uses file 2610.
> > > 
> > > Anyway, your database seems to be quite wrecked, and you'd probably need
> > > an expert to save what can be saved.
> > 
> So you're right about it being seriously corrupted somehow. All my custom databases seem to work fine.
> I have data checksums and amcheck enabled and haven't found any issues.
> 
> However, when I went to connect to template1, it failed:
> 
> # \connect template1
> FATAL:  could not open file "base/1/2610": No such file or directory
> Previous connection kept
> 
> If template1 is corrupt, that would explain why I am unable to create new databases.

Right.  I think that dumping what you can and importing it in a new cluster
is the way to go.
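
A minimal sketch of that route, run as OS user "postgres" (the file name is a placeholder; if the broken template1 blocks pg_dumpall, per-database pg_dump of the healthy databases is the fallback):

   pg_dumpall --clean --file=/tmp/cluster_dump.sql
   # after initdb of a fresh data directory and starting the new cluster:
   psql -f /tmp/cluster_dump.sql postgres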

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



GIN Index for low cardinality

From:
Ravi Krishna
Date:

In a blog post it is mentioned:

"GIN, the most know non-default index type perhaps, has been actually around for ages (full-text search) and in short is perfect for indexing columns where there are lot of repeating values – think all kinds of statuses or good old Mr/Mrs/Miss. GIN only stores every unique column value only once as for the default B-tree you’ll have e.g. 1 millon leaf nodes with the integer “1” in it."


Does it mean that GIN is a very good choice for low cardinality columns?

Re: GIN Index for low cardinality

From:
Олег Самойлов
Date:

On 17 Oct 2018, at 13:46, Ravi Krishna <srkrishna1@aol.com> wrote:
> Does it mean that GIN is a very good choice for low cardinality columns?

Not necessarily. There are other indexes which also don't keep the column value in every leaf. Hash, for instance.

Re: GIN Index for low cardinality

From:
Ravi Krishna
Date:
>> Does it mean that GIN is a very good choice for low cardinality columns?
> Not necessarily. There are other indexes which also don't keep the column value in every leaf. Hash, for instance.

Well, I asked about GIN's usefulness for low cardinality. Good to know that Hash can also be considered.
BTW, until recently wasn't Hash unsafe for production, since it was not WAL-logged?



Re: GIN Index for low cardinality

From:
Jeff Janes
Date:
On Wed, Oct 17, 2018 at 6:47 AM Ravi Krishna <srkrishna1@aol.com> wrote:
> Does it mean that GIN is a very good choice for low cardinality columns?

For extremely low cardinality, like Mr./Mrs./Miss, I doubt any index would be very useful.  For less extreme cases, say one million different values present around 50 times each, yes, it can be useful for keeping the index size down.  It will not help when rows need to be delivered in sorted order, for example to fulfill an ORDER BY or a merge join.  Think carefully about what setting you want for fastupdate, and, if it is on, what value to use for gin_pending_list_limit.
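
A sketch of what that looks like in practice; note that a GIN index on a plain scalar column needs the btree_gin contrib extension, and that fastupdate is set per index (table and column names here are hypothetical):

   CREATE EXTENSION IF NOT EXISTS btree_gin;
   CREATE INDEX people_status_gin ON people USING gin (status)
       WITH (fastupdate = off);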

Cheers,

Jeff

Re: GIN Index for low cardinality

From:
Jeff Janes
Date:
On Thu, Oct 25, 2018 at 9:36 AM Олег Самойлов <splarv@ya.ru> wrote:
> Not necessarily. There are other indexes which also don't keep the column value in every leaf. Hash, for instance.

For smallish values (which low cardinality columns tend to be), the per-tuple overhead and the pointer itself are probably much larger than the value, so hash won't save you much, if any, space.  The GIN index removes not just the value, but the per-tuple overhead, and also compresses the posting list to further save space.

Here is a real-world example from one of my databases where each value is about 17 characters long, and is present about 20 times:

gin: 411 MB
btree: 2167 MB
hash: 2159 MB
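
For reference, per-index sizes like these can be compared with pg_relation_size; 'people' below stands in for the real table name:

   SELECT indexrelid::regclass AS index_name,
          pg_size_pretty(pg_relation_size(indexrelid)) AS size
   FROM pg_index
   WHERE indrelid = 'people'::regclass;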

Cheers,

Jeff

Re: GIN Index for low cardinality

From:
Ozz Nixon
Date:
Jeff,
   Great info! Regarding your example on Mr./Mrs./Miss, etc.: is there a good rule of thumb that if the data is under "x" KB, an index is overhead rather than help? I am not worried about space, more interested in performance.

RE: GIN Index for low cardinality

From:
Igor Neyman
Date:

Ozz Nixon <ozznixon@gmail.com> wrote:
> Great info! Regarding your example on Mr./Mrs./Miss, etc.: is there a good rule of thumb that if the data is under "x" KB, an index is overhead rather than help? I am not worried about space, more interested in performance.

I think partial/conditional indexes: …. WHERE greeting = '…' – may help.
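
A hedged sketch of that idea, with hypothetical names: index some other column only for rows matching one greeting, so each partial index stays small and is used when the query's WHERE clause matches.

   CREATE INDEX people_mr_lastname ON people (last_name)
       WHERE greeting = 'Mr';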

Regards,

Igor Neyman

Re: GIN Index for low cardinality

From:
Jeff Janes
Date:
On Fri, Oct 26, 2018 at 12:27 PM Jeff Janes <jeff.janes@gmail.com> wrote:
> Here is a real-world example from one of my databases where each value is about 17 characters long, and is present about 20 times:
> 
> gin: 411 MB
> btree: 2167 MB
> hash: 2159 MB

For what it is worth, that was 9.6 with freshly rebuilt indexes.

On 11.0 with freshly rebuilt indexes, the hash index does get a bit smaller:

gin: 411 MB
btree: 2167 MB
hash: 1864 MB

Cheers,

Jeff

Re: GIN Index for low cardinality

From:
Олег Самойлов
Date:
Eh, I checked, and you are right. Something is terribly wrong with the hash index in PostgreSQL. But there is another hash index, gin (jsonb_path_ops), or perhaps more correctly a gin+hash index. It looks like it is the best for this purpose.
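
A sketch with hypothetical names; note that a gin (jsonb_path_ops) index supports only containment (@>) queries, not key-existence operators:

   CREATE INDEX docs_body_path_gin ON docs USING gin (body jsonb_path_ops);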

On 26 Oct 2018, at 19:27, Jeff Janes <jeff.janes@gmail.com> wrote:
> Here is a real-world example from one of my databases where each value is about 17 characters long, and is present about 20 times:
> 
> gin: 411 MB
> btree: 2167 MB
> hash: 2159 MB