Re: BUG #17266: could not truncate file "base/126370/130666" to 0 blocks: Permission denied

From: Kyotaro Horiguchi
Subject: Re: BUG #17266: could not truncate file "base/126370/130666" to 0 blocks: Permission denied
Date:
Msg-id: 20211105.150708.133689831588998978.horikyota.ntt@gmail.com
In reply to: BUG #17266: could not truncate file "base/126370/130666" to 0 blocks: Permission denied  (PG Bug reporting form <noreply@postgresql.org>)
List: pgsql-bugs
At Wed, 03 Nov 2021 06:57:00 +0000, PG Bug reporting form <noreply@postgresql.org> wrote in 
> The following bug has been logged on the website:
> 
> Bug reference:      17266
> Logged by:          Adam Santiaji
> Email address:      adam@drose-consulting.com
> PostgreSQL version: 12.0
> Operating system:   Windows Server 2016 Standard
> Description:        
> 
> In the PostgreSQL log I found this error from 2021-11-01:
> 2021-11-01 08:55:47.110 +07 [6860] ERROR:  could not truncate file
> "base/126370/130666" to 0 blocks: Permission denied
> 2021-11-01 08:55:47.110 +07 [6860] CONTEXT:  automatic vacuum of table
> "ykkap.public.imw_t_thread"
> 2021-11-01 08:58:47.124 +07 [9604] ERROR:  could not truncate file
> "base/126370/127251" to 0 blocks: Permission denied
> 2021-11-01 08:58:47.124 +07 [9604] CONTEXT:  automatic vacuum of table
> "ykkap.public.im_async_task_info"
> 
> and when I tried to back up the database on 2021-11-02, it showed this error:
> 2021-11-02 21:21:36.970 +07 [10252] ERROR:  missing chunk number 0 for toast
> value 250117 in pg_toast_127251
> 
> What does "Permission denied" mean? Is it caused by file access permissions
> in Windows?
> How can I avoid this error in the future?

Most likely the truncation was competing with antivirus software for
access to the files.  If so, this is not a bug but a known (unwanted)
interaction, and excluding the database directory in your antivirus
settings should prevent it; that exclusion is highly recommended in
any case.
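
For example, if the server uses Windows Defender, an exclusion can be
added with PowerShell like this (just a sketch; adjust the path to
your actual data directory, and use the equivalent setting if you run
different antivirus software):

  Add-MpPreference -ExclusionPath "C:\Program Files\PostgreSQL\12\data"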

The second error seems to be a consequence of the first
failure. Namely, the failed truncation left behind a broken TOAST
pointer in the table im_async_task_info (127251), pointing to a TOAST
tuple in pg_toast_127251 that should have been successfully
removed. In short, the database is corrupted.
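
You can confirm which table the damaged TOAST relation belongs to
with a query like this (a sketch; run it in the ykkap database):

  SELECT oid::regclass
  FROM pg_class
  WHERE reltoastrelid = 'pg_toast.pg_toast_127251'::regclass;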

If the table has no significant data, or no rows have been inserted
into it since the first failure, dropping the table and re-creating
it would work.  If it is sufficient to restore the database to its
state just before the first error occurred, point-in-time recovery to
that moment will also work.
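
On PostgreSQL 12 that means restoring a base backup taken before the
failure and setting a recovery target, roughly like this (a sketch
assuming WAL archiving was in place; the archive path and timestamp
are placeholders):

  # postgresql.conf on the restored base backup (Windows example)
  restore_command = 'copy "C:\\wal_archive\\%f" "%p"'
  recovery_target_time = '2021-11-01 08:50:00 +07'

Then create an empty file named recovery.signal in the data directory
and start the server.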

Otherwise, you may need to find the rows that cause the error, copy
the remaining rows into a new table, check the consistency of the new
table's data, and then replace the old table with it, as sketched
below.  Foreign-key constraints or other metadata may help you locate
the bad rows.  There might be a nicer way, but I don't know of one.
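
As a rough illustration of the row-by-row search (assuming
im_async_task_info has a primary key column named id; adjust to the
real schema):

  DO $$
  DECLARE
    r RECORD;
  BEGIN
    FOR r IN SELECT id FROM public.im_async_task_info LOOP
      BEGIN
        -- casting the whole row to text forces detoasting of every column
        PERFORM t::text FROM public.im_async_task_info t WHERE t.id = r.id;
      EXCEPTION WHEN OTHERS THEN
        RAISE NOTICE 'damaged row: id = %', r.id;
      END;
    END LOOP;
  END
  $$;

Rows not reported by this can then be copied into the new table with
INSERT INTO ... SELECT.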

In all approaches other than recovery, it is recommended to dump all
the data (with pg_dump, not pg_basebackup) and then restore it into a
newly created database.
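
For example (database and file names are placeholders):

  pg_dump -Fc -d ykkap -f ykkap.dump
  createdb ykkap_new
  pg_restore -d ykkap_new ykkap.dump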

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center


