Re: Adding REPACK [concurrently]
| From | Álvaro Herrera |
|---|---|
| Subject | Re: Adding REPACK [concurrently] |
| Date | |
| Msg-id | 202512041802.gzsr3v644a5l@alvherre.pgsql |
| In reply to | Re: Adding REPACK [concurrently] (Marcos Pegoraro <marcos@f10.com.br>) |
| List | pgsql-hackers |
Hello,

On 2025-Dec-04, Marcos Pegoraro wrote:

> On Thu, Dec 4, 2025 at 12:43, Álvaro Herrera <alvherre@alvh.no-ip.org>
> wrote:
>
> > So if you're trying to do this, the number of problematic pages must
> > be large.
>
> Not necessarily. I have some tables where I like to use CLUSTER every
> 2 or 3 months, to reorganize the data based on an index and
> consequently load fewer pages with each call. These tables don't have
> more than 2 or 3% dead records, but they are quite disorganized from
> the point of view of that index, since the inserted and updated
> records don't follow the order I determined.

I don't understand what this has to do with what David was proposing.

I mean, you're right: if all you want is to CLUSTER, you may not have an
enormous number of pages to get rid of. But how can you use the
technique he proposes to deal with reordering tuples? If you just move
the tuples from the end of the table to where some random hole has
appeared, you haven't clustered the table at all.

--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
"People get annoyed when you try to debug them." (Larry Wall)
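As a minimal sketch of the periodic-CLUSTER maintenance pattern Marcos describes above: the table and index names (`orders`, `orders_created_at_idx`) are hypothetical, not from the thread.

```sql
-- Minimal sketch of the workflow described above: periodically rewrite a
-- table in index order. Names (orders, orders_created_at_idx) are made up.

-- First run: record which index defines the desired physical order,
-- then rewrite the table in that order.
CLUSTER orders USING orders_created_at_idx;

-- Subsequent runs (e.g. every 2-3 months) reuse the remembered index.
CLUSTER orders;

-- Note: CLUSTER rewrites the whole table under an ACCESS EXCLUSIVE lock,
-- blocking reads and writes for the duration of the rewrite.
```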