Re: Fast insert, but slow join and updates for table with 4 billion rows

From Scott Marlowe
Subject Re: Fast insert, but slow join and updates for table with 4 billion rows
Date
Msg-id CAOR=d=2oWn4RQ_CxyD19-P+HeMT8=dnW8GaonkTZKWt7mbCvtQ@mail.gmail.com
In reply to Re: Fast insert, but slow join and updates for table with 4 billion rows  (Lars Aksel Opsahl <Lars.Opsahl@nibio.no>)
Responses Re: Fast insert, but slow join and updates for table with 4 billion rows
List pgsql-performance
On Mon, Oct 24, 2016 at 2:07 PM, Lars Aksel Opsahl <Lars.Opsahl@nibio.no> wrote:
> Hi
>
> Yes, this makes both the update and the selects much faster. We are now down to 3000 ms for the select, but then I get a problem with another SQL where I only use epoch in the query.
>
> SELECT count(o.*) FROM  met_vaer_wisline.nora_bc25_observation o WHERE o.epoch = 1288440000;
>  count
> -------
>  97831
> (1 row)
> Time: 92763.389 ms
>
> To get the SQL above to work fast it seems like we also need a single index on the epoch column; this means two indexes on the same column, and that eats memory when we have more than 4 billion rows.
>
> Is there any way to avoid having two indexes on the epoch column?

You could try reversing the order. Basically, whatever column comes
first in a two-column index is the one Postgres can use much like a
single-column index. If not, then you're probably stuck with two
indexes.
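
A minimal sketch of what that looks like, assuming the other indexed
column is something like point_uid (just a placeholder name, since it
isn't shown in this excerpt):

  -- Composite index with epoch as the leading column; the planner can
  -- use it for predicates on epoch alone, so a separate single-column
  -- index on epoch should not be needed.
  CREATE INDEX nora_bc25_observation_epoch_point_idx
      ON met_vaer_wisline.nora_bc25_observation (epoch, point_uid);

  -- This query can then use the leading column of the composite index:
  SELECT count(o.*)
  FROM met_vaer_wisline.nora_bc25_observation o
  WHERE o.epoch = 1288440000;

Note that queries filtering only on point_uid (the trailing column)
would then lose the benefit, which is the trade-off described above.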

