Re: Problem Designing Index

From: Bill Moran
Subject: Re: Problem Designing Index
Date:
Msg-id: 20070813083623.14c112cc.wmoran@potentialtech.com
In reply to: Problem Designing Index  (Alan J Batsford <AJBatsford@uss.com>)
Responses: Re: Problem Designing Index
List: pgsql-general
In response to Alan J Batsford <AJBatsford@uss.com>:
>
> Hello,
>
> I'm doing some select statements on my table that look like:
> SELECT * FROM table WHERE prod_num = '1234567' AND transaction_timestamp >
> '2007-07-18 21:29:57' OR prod_num > '1234567' ORDER BY prod_num ASC,
> transaction_timestamp ASC LIMIT 1;
>
> I've added two indices, one for prod_num and another for
> transaction_timestamp. This table has 151,000 rows and the above
> statement returns in less than a millisecond. If I change the above
> statement from '>' to '<', it takes 8 seconds to complete. Prod_num
> '1234567' is towards the end of the 151k rows. If I use a prod_num
> like '0000123' towards the front, the problem is reversed between
> '>' and '<'.
>
> I tried adding a third index that uses both prod_num and
> transaction_timestamp. The average performance at each end of the data
> for both '>' and '<' improved, but the problem wasn't resolved. Selects
> at the end of the data with '>' conditions (like the original statement)
> then degrade and take 500 ms to finish, which is unacceptable for the
> application.
>
> I ran ANALYZE on the table with no effect.
>
> Is it possible to design an index that can account for all the scenarios?
> Thanks for any help you can provide.
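
For reference, the combined index described in the quoted message was
presumably created along these lines (the table and column names are
copied from the quoted query, and the index name is made up):

    -- "table" stands in for the real table name, as in the quoted query
    CREATE INDEX prod_ts_idx
        ON "table" (prod_num, transaction_timestamp);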

While it's difficult to be sure, I'm guessing you have either a hardware
problem or a tuning problem -- but I don't think your indexes are the problem.

Keep in mind that once PostgreSQL has determined which rows to return, it
has to actually read all those rows off disk and send them to the client
application.  In my opinion, 8 seconds to read in over 100,000 rows isn't
unreasonable (especially if those rows are wide).

If 8 seconds is unacceptable, then you're liable to need hardware to fix
it: more RAM to cache those rows, faster disks, or both.
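
If it does turn out to be a tuning issue, it would also help to know how
much memory PostgreSQL is allowed to use for caching and sorting. These
are standard PostgreSQL parameters, and checking them is a reasonable
first step:

    -- current cache- and sort-related settings
    SHOW shared_buffers;
    SHOW effective_cache_size;
    SHOW work_mem;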

However, this is just speculation.  You didn't provide EXPLAIN ANALYZE
output, the table schema, hardware details, or configuration information,
so it's entirely possible that something else is wrong.  I'm just making
an educated guess.
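
For example, the plan for the slow variant could be captured with
something like the following and posted to the list (the query is the one
from the quoted message, with only the operators flipped as described):

    EXPLAIN ANALYZE
    SELECT * FROM "table"
    WHERE prod_num = '1234567'
      AND transaction_timestamp < '2007-07-18 21:29:57'
       OR prod_num < '1234567'
    ORDER BY prod_num ASC, transaction_timestamp ASC
    LIMIT 1;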

--
Bill Moran
http://www.potentialtech.com
