Re: How to do faster DML

From: Greg Sabino Mullane
Subject: Re: How to do faster DML
Date:
Msg-id: CAKAnmmLcTS7zpeZ3m7mkJxGsu+f+Tv9bp5KV3T52Q+c8DMTm+A@mail.gmail.com
In response to: Re: How to do faster DML  (veem v <veema0000@gmail.com>)
Responses: Re: How to do faster DML
Re: How to do faster DML
List: pgsql-general
On Mon, Feb 12, 2024 at 1:50 AM veem v <veema0000@gmail.com> wrote:
So we were thinking that adding many columns to a table should be fine in Postgres (here we have a use case in which the total number of columns may go to ~500+). But then, accessing columns toward the end of a row is going to take more time than accessing a column at the start of the row. As you mentioned, accessing the 100th column may take 4 to 5 times longer than accessing the first column. So, is it advisable here to take a similar approach of breaking the table into two if the total number of columns for a table reaches a certain number/threshold?

I'm not sure what Peter was testing exactly to get those 4-5x figures, but I presume that is column access time, which would not translate directly into a 4-5x effect on your total query time. As far as breaking the table in two, I would not recommend that as it adds complexity and introduces other problems. On the other hand, 500 columns is pretty extreme, so maybe things rarely accessed or searched for could go in another table. Really hard to say without knowing more about your data access patterns. Hopefully, we never see a "SELECT *" for a table like that! :)
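For what it's worth, the "rarely accessed columns in another table" idea might look something like this sketch (all table and column names here are made up for illustration; nothing about the actual schema is known):

```sql
-- Hot, frequently queried columns stay in the main table.
CREATE TABLE readings (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now(),
    status     text NOT NULL
    -- ... other frequently accessed columns ...
);

-- Rarely accessed columns move to a side table sharing the same key.
CREATE TABLE readings_extra (
    id        bigint PRIMARY KEY REFERENCES readings (id),
    raw_notes text
    -- ... other rarely accessed columns ...
);

-- The occasional query that needs everything joins on the shared key:
SELECT r.status, e.raw_notes
FROM readings r
JOIN readings_extra e USING (id)
WHERE r.id = 42;
```

The one-to-one foreign key keeps the two halves in sync, at the cost of an extra join (and extra insert/delete bookkeeping) whenever the cold columns are needed.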

Cheers,
Greg
