Re: Fast insert, but slow join and updates for table with 4 billion rows

From	Lars Aksel Opsahl
Subject	Re: Fast insert, but slow join and updates for table with 4 billion rows
Date
Msg-id	8b96ff68632e468dbab70d4f8533acbb@nibio.no
Whole thread	Raw
In response to	Re: Fast insert, but slow join and updates for table with 4 billion rows  (Scott Marlowe <scott.marlowe@gmail.com>)
List	pgsql-performance
Hi

Yes, that helps. I have now tested this with the first column.

This basically means that only the first column of a multi-column index can be used in a single-column query.

EXPLAIN ANALYZE SELECT count(o.*) FROM met_vaer_wisline.nora_bc25_observation o WHERE o.point_uid_ref = 15 ;
                                                                               QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=45540.97..45540.98 rows=1 width=42) (actual time=24.715..24.715 rows=1 loops=1)
   ->  Bitmap Heap Scan on nora_bc25_observation o  (cost=477.66..45427.40 rows=45430 width=42) (actual time=6.436..19.006 rows=43832 loops=1)
         Recheck Cond: (point_uid_ref = 15)
         ->  Bitmap Index Scan on idx_met_vaer_wisline_nora_bc25_observation_test  (cost=0.00..466.30 rows=45430 width=0) (actual time=6.320..6.320 rows=43832 loops=1)
               Index Cond: (point_uid_ref = 15)
 Total runtime: 24.767 ms
(6 rows)
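
To make the leading-column rule concrete, here is a sketch of the situation described in the thread (the exact column order of the existing two-column index is an assumption based on the discussion):

```sql
-- Assumed two-column index, with point_uid_ref as the leading column:
CREATE INDEX idx_met_vaer_wisline_nora_bc25_observation_test
    ON met_vaer_wisline.nora_bc25_observation (point_uid_ref, epoch);

-- Uses the index: the condition is on the leading column.
SELECT count(o.*) FROM met_vaer_wisline.nora_bc25_observation o
WHERE o.point_uid_ref = 15;

-- Typically cannot use the index efficiently: the condition skips
-- the leading column, so the planner falls back to a slow scan.
SELECT count(o.*) FROM met_vaer_wisline.nora_bc25_observation o
WHERE o.epoch = 1288440000;
```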


Thanks

Lars

________________________________________
From: Scott Marlowe <scott.marlowe@gmail.com>
Sent: 24 October 2016 22:23
To: Lars Aksel Opsahl
Cc: Tom Lane; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Fast insert, but slow join and updates for table with 4 billion rows

On Mon, Oct 24, 2016 at 2:07 PM, Lars Aksel Opsahl <Lars.Opsahl@nibio.no> wrote:
> Hi
>
> Yes, this makes both the update and both selects much faster. We are now down to 3000 ms for the select, but then I get a problem with another SQL statement where I only use epoch in the query.
>
> SELECT count(o.*) FROM  met_vaer_wisline.nora_bc25_observation o WHERE o.epoch = 1288440000;
>  count
> -------
>  97831
> (1 row)
> Time: 92763.389 ms
>
> To get the SQL above to run fast, it seems we also need a single-column index on the epoch column. This means two indexes on the same column, which eats memory when we have more than 4 billion rows.
>
> Is there any way to avoid two indexes on the epoch column?

You could try reversing the order. Basically, whichever column comes
first in a two-column index is easier (or only) possible for Postgres
to use like a single-column index. If not, then you're probably stuck
with two indexes.
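
A sketch of that suggestion, assuming epoch-only queries are the common case (the index and table names are taken from earlier in the thread; the original column order is an assumption):

```sql
-- Rebuild the two-column index with epoch as the leading column:
DROP INDEX IF EXISTS met_vaer_wisline.idx_met_vaer_wisline_nora_bc25_observation_test;
CREATE INDEX idx_met_vaer_wisline_nora_bc25_observation_test
    ON met_vaer_wisline.nora_bc25_observation (epoch, point_uid_ref);

-- Now a query on epoch alone can use the index,
-- and a query on both columns still can.
-- A query on point_uid_ref alone would again need its own index.
```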
