Re: Why could different data in a table be processed with different performance?

From: Vladimir Ryabtsev
Subject: Re: Why could different data in a table be processed with different performance?
Date:
Msg-id: CAMqTPqmX_GxkMDTcJgG2_bsy+zk3JYVrzSNdvPBgyg+zABtCVw@mail.gmail.com
In reply to: Re: Why could different data in a table be processed with different performance? (Fabio Pardi <f.pardi@portavita.eu>)
List: pgsql-performance
FYI, posting an intermediate update on the issue.

I disabled index scans to preserve the existing physical order and copied part of the "slow" range into another table (3M rows, 2.2 GB table + 17 GB TOAST). I was able to reproduce the slow reads from this copy. Then I ran CLUSTER on the copy using the PK, and everything improved significantly: overall time became about 6 times faster, with disk read speed (reported by iotop) of 30-60 MB/s.
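The reproduction steps above can be sketched roughly as follows (table names and the id range are hypothetical placeholders, not the real ones from my setup):

```sql
-- Force sequential access so rows are read in physical (heap) order
SET enable_indexscan = off;
SET enable_bitmapscan = off;

-- Copy part of the "slow" range into a separate table
CREATE TABLE big_table_copy AS
  SELECT * FROM big_table
  WHERE id BETWEEN 500000000 AND 503000000;  -- assumed "slow" range

-- CLUSTER needs an index to order by; add the PK, then rewrite
-- the table (and its TOAST data) in PK order
ALTER TABLE big_table_copy ADD PRIMARY KEY (id);
CLUSTER big_table_copy USING big_table_copy_pkey;
ANALYZE big_table_copy;
```

CLUSTER rewrites the whole table, so the slow reads disappearing after it is consistent with the problem being physical row placement rather than the data itself.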

I think we can take bad physical data distribution as the main hypothesis for the issue. However, I was not able to launch seekwatcher (it does not work out of the box on Ubuntu, and I failed to rebuild it), so I could not directly confirm the excessive seeking.

I still don't have enough disk space to fix the original table; I am waiting for the admin/devops team to provide it.

Once I have the space, my plan is to partition the original table and CLUSTER every partition on the primary key.
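A rough sketch of that plan (again with hypothetical names and boundaries; the generated index names are an assumption):

```sql
-- Range-partition a new table by the PK column
CREATE TABLE big_table_part (LIKE big_table INCLUDING ALL)
  PARTITION BY RANGE (id);

CREATE TABLE big_table_p0 PARTITION OF big_table_part
  FOR VALUES FROM (0) TO (100000000);
-- ... further partitions covering the rest of the range ...

-- Move the data, then rewrite each partition in PK order
INSERT INTO big_table_part SELECT * FROM big_table;
CLUSTER big_table_p0 USING big_table_p0_pkey;  -- repeat for each partition
ANALYZE big_table_part;
```

Clustering per partition keeps each rewrite small enough to fit in the available free space, instead of needing a full extra copy of the 19+ GB table at once.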

Best regards,
Vlad
