Re: Performance question 83 GB Table 150 million rows, distinct select
| From | Claudio Freire |
|---|---|
| Subject | Re: Performance question 83 GB Table 150 million rows, distinct select |
| Date | |
| Msg-id | CAGTBQpaAEApuVB6Z7BcCVs1O-t2L3ZSLbCPyqG5LL__sjUoukA@mail.gmail.com |
| In reply to | Re: Performance question 83 GB Table 150 million rows, distinct select (Aidan Van Dyk <aidan@highrise.ca>) |
| List | pgsql-performance |
On Thu, Nov 17, 2011 at 11:17 AM, Aidan Van Dyk <aidan@highrise.ca> wrote:

> But remember, you're doing all that in a single query. So your disk
> subsystem might even be able to perform even more *throughput* if it
> was given many more concurrent requests. A big raid10 is really good
> at handling multiple concurrent requests. But it's pretty much
> impossible to saturate a big raid array with only a single read
> stream.

The query uses a bitmap heap scan, which means it would benefit from a high effective_io_concurrency.

What's your effective_io_concurrency setting?

A good place to start is setting it to the number of spindles in your array, though I usually use 1.5x that number since it gives me a little more throughput.

You can also set it on a query-by-query basis, so you don't need to change the configuration. If you do change the configuration, a reload is enough to make PG pick it up, so it's an easy thing to try.
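The per-session approach described above can be sketched as follows. This is a minimal illustration, assuming a hypothetical array of 8 spindles and applying the suggested 1.5x factor; the right value depends on your hardware.

```sql
-- Hypothetical: array with 8 spindles, 1.5x factor => 12.
-- SET affects only the current session, so you can test a single
-- query without touching the server configuration.
SET effective_io_concurrency = 12;

-- To make it permanent instead, put this in postgresql.conf:
--   effective_io_concurrency = 12
-- and trigger a reload (no restart needed):
SELECT pg_reload_conf();
```

Setting it per session first lets you compare the bitmap heap scan's timing with and without the change before committing it to postgresql.conf.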