Delete huge Table under XFS

From: Joao Junior
Subject: Delete huge Table under XFS
Date:
Msg-id: CABnPa_h2f5342mF6yM9pAR1=nV24hKwiBAXhmiS9zOEKmvUQAg@mail.gmail.com
Responses: Re: Delete huge Table under XFS (Andreas Kretschmer <andreas@a-kretschmer.de>)
List: pgsql-performance
Hi,
I am running PostgreSQL 9.6 on an XFS filesystem, kernel Linux 2.6.32.

I have a table that is not being used anymore and I want to drop it.
The table is huge, around 800 GB, and it has some indexes on it.

When I execute the DROP TABLE command it runs very slowly, and I realised that the problem is the filesystem.
It seems that XFS doesn't handle unlinking big files well; there has been some discussion about this on various lists.

I have to find a way to delete the table in chunks.

My first attempt was:

Iterate from the tail of the table until the beginning.
Delete some blocks of the table.
Run vacuum on it.
Iterate again...

The plan is to delete some number of blocks at the end of the table, in chunks of some size, and vacuum after each chunk, waiting for vacuum to shrink the table.
It seems to work; the table has been shrinking, but each vacuum takes a huge amount of time, I suppose because of the indexes. There is another point: the indexes are still huge and will stay that way.
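The loop above can be sketched as SQL generated per chunk, addressing rows by their physical heap block via ctid (table and sizes here are hypothetical placeholders; VACUUM only truncates the relation when the trailing blocks are entirely empty, which is what deleting from the tail is trying to ensure):

```python
def tail_delete_statements(table, total_blocks, chunk_blocks):
    """Yield (DELETE, VACUUM) statement pairs that trim `table` from its
    tail in chunks of `chunk_blocks` heap blocks. Rows are selected by
    ctid, i.e. by physical (block, offset) position, so each DELETE
    empties the last `chunk_blocks` blocks still remaining."""
    start = total_blocks
    while start > 0:
        lo = max(0, start - chunk_blocks)
        yield (
            f"DELETE FROM {table} WHERE ctid >= '({lo},0)'::tid;",
            f"VACUUM {table};",
        )
        start = lo

# Example: a 100-block table trimmed 40 blocks at a time.
stmts = list(tail_delete_statements("big_table", 100, 40))
```

The total block count can be read from pg_class.relpages (or pg_relation_size divided by the 8 kB block size); running the pairs one at a time keeps each VACUUM's work bounded, though as noted above the index cleanup cost per VACUUM pass is what dominates.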

I am thinking of another way of doing this.
I can get the relfilenode of the table; that way I can find the files that belong to the table and simply delete batches of files in a way that doesn't put too much load on the disk.
Do the same for the indexes.
Once I have deleted all the table's files and the indexes' files, I could simply execute DROP TABLE and the entries would be deleted from the catalog.
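For reference, PostgreSQL stores each relation as a series of 1 GB segment files under the database directory, so an 800 GB table is roughly 800 files that could be removed one at a time. A small sketch of the path layout (the OIDs below are made-up examples; note that touching these files while the server considers the relation live is unsupported and unsafe, so this would at minimum need the table to be guaranteed unused):

```python
def segment_paths(db_oid, relfilenode, relation_bytes, segment_bytes=1 << 30):
    """Return the on-disk segment files PostgreSQL uses for a relation:
    base/<db_oid>/<relfilenode> for the first 1 GB, then .1, .2, ...
    for each additional segment (paths relative to the data directory)."""
    nsegs = max(1, -(-relation_bytes // segment_bytes))  # ceiling division
    base = f"base/{db_oid}/{relfilenode}"
    return [base] + [f"{base}.{i}" for i in range(1, nsegs)]

# Example: a 3 GB relation occupies three segment files.
paths = segment_paths(16384, 24576, 3 * (1 << 30))
```

The actual path can be obtained with pg_relation_filepath('tablename') rather than assembled by hand, and each index has its own relfilenode and segment series.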

I would appreciate any kind of comments.
Thanks!



