Re: Large number of tables slow insert
| From | Scott Marlowe |
|---|---|
| Subject | Re: Large number of tables slow insert |
| Date | |
| Msg-id | dcc563d10808260829s6d397b7egd364589c9c5b16b1@mail.gmail.com |
| In response to | Re: Large number of tables slow insert (Matthew Wakeling <matthew@flymine.org>) |
| List | pgsql-performance |
On Tue, Aug 26, 2008 at 6:50 AM, Matthew Wakeling <matthew@flymine.org> wrote:

> On Sat, 23 Aug 2008, Loic Petit wrote:
>>
>> I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large number
>> of sensors. In order to get good performance when querying by timestamp on
>> each sensor, I partitioned my measures table per sensor. Thus I create a
>> lot of tables.
>
> As far as I can see, you are having performance problems as a direct result
> of this design decision, so it may be wise to reconsider. If you have an
> index on both the sensor identifier and the timestamp, it should perform
> reasonably well. It would scale a lot better with thousands of sensors too.

Properly partitioned, I'd expect one big table to outperform 3,000 sparsely populated tables.
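The single-table alternative Matthew describes might be sketched as follows; the table and column names here are illustrative, not taken from the original thread:

```sql
-- One table holding readings for all sensors, instead of one table
-- per sensor (names are hypothetical).
CREATE TABLE measures (
    sensor_id  integer                   NOT NULL,
    ts         timestamp with time zone  NOT NULL,
    value      double precision
);

-- Composite index on (sensor_id, ts): supports querying one sensor's
-- readings by timestamp range, which is the access pattern described.
CREATE INDEX measures_sensor_ts_idx ON measures (sensor_id, ts);

-- Typical query this index serves: recent readings from one sensor.
-- SELECT * FROM measures
--  WHERE sensor_id = 42
--    AND ts >= now() - interval '1 day'
--  ORDER BY ts;
```

With the sensor identifier as the leading index column, a range scan touches only that sensor's rows, so performance does not degrade as the number of sensors grows.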