Re: DATA corruption after promoting slave to master

From: Shaun Thomas
Subject: Re: DATA corruption after promoting slave to master
Date:
Msg-id: 0683F5F5A5C7FE419A752A034B4A0B9797D9962C@sswchi5pmbx2.peak6.net
In response to: DATA corruption after promoting slave to master  (Karthik Iyer <karthik.i@directi.com>)
Responses: Re: DATA corruption after promoting slave to master  (Kirit Parmar <kirit.p@directi.com>)
List: pgsql-general
Hi Kirit,

It looks like your actual problem is here:

>  Index Scan using t1_orderid_creationtime_idx on t1
>  (cost=0.43..1181104.36 rows=9879754 width=158)
>  (actual time=0.021..60830.724 rows=2416614 loops=1)

This index scan estimates 9.8M rows, and had to touch 2.4M. The issue is that your LIMIT clause makes the planner
overly optimistic. The worst-case cost estimate for this part of the query is about 1.2M, which is much higher than the
seqscan variation you posted. The planner must think it can get the rows without incurring the full cost, otherwise I
can't see how the 1.2M cost estimate wasn't rolled into the total estimate.

Unfortunately, behavior like this is pretty common when using LIMIT clauses. Sometimes the planner thinks it can get
results much faster than it actually can, and it ends up reading a much larger portion of the data than it assumed
would be necessary.
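You can usually see the effect in isolation by comparing the plans with and without the LIMIT. A minimal sketch; the column names here are only guesses based on the index name t1_orderid_creationtime_idx, so substitute your actual columns:

```sql
-- Without LIMIT: the planner must account for the full scan cost.
EXPLAIN ANALYZE
SELECT * FROM t1
 ORDER BY orderid, creationtime;

-- With LIMIT: the planner scales the cost down, assuming it can stop
-- early. That assumption backfires when the qualifying rows are rare
-- or clustered at the "wrong" end of the index, and the scan ends up
-- reading far more of the index than the estimate implies.
EXPLAIN ANALYZE
SELECT * FROM t1
 ORDER BY orderid, creationtime
 LIMIT 100;
```

If the LIMIT plan's startup/total cost is a small fraction of the full scan's, that's the optimism at work.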

Just out of curiosity, can you tell me what your default_statistics_target is?
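For reference, you can check the setting with SHOW, and raise it per-column rather than globally if only one estimate is off. A sketch; the column name orderid is an assumption taken from the index name:

```sql
-- Current setting (the server default is 100 on modern versions).
SHOW default_statistics_target;

-- Raise the sample size for just the column driving the bad estimate,
-- then re-gather statistics so the planner sees the new histogram.
ALTER TABLE t1 ALTER COLUMN orderid SET STATISTICS 1000;
ANALYZE t1;
```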

In the pgsql-general list, by date sent:

Previous
From: Rémy-Christophe Schermesser
Date:
Message: Re: Performance problem on 2 PG versions on same query
Next
From: Edoardo Panfili
Date:
Message: psql connection via localhost or 127.0.0.1