Table performance with millions of rows

From: Robert Blayzor
Subject: Table performance with millions of rows
Date:
Msg-id: 7DF18AB9-C4A4-4C28-957D-12C00FCB5F71@inoc.net
Responses: Re: Table performance with millions of rows (partitioning)  (Justin Pryzby <pryzby@telsasoft.com>)
List: pgsql-performance
Question on large tables…


When should one consider table partitioning vs. just stuffing 10 million rows into one table?
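For context, a minimal sketch of what declarative range partitioning (PostgreSQL 10+) could look like for a CDR table; the table and column names (cdr, call_time, etc.) are just assumptions for illustration:

  -- Hypothetical CDR table partitioned by call timestamp
  -- (PostgreSQL 10+ declarative partitioning).
  CREATE TABLE cdr (
      call_time  timestamptz NOT NULL,
      src        text,
      dst        text,
      duration   integer
  ) PARTITION BY RANGE (call_time);

  -- One child table per month, created ahead of time.
  CREATE TABLE cdr_2018_01 PARTITION OF cdr
      FOR VALUES FROM ('2018-01-01') TO ('2018-02-01');
  CREATE TABLE cdr_2018_02 PARTITION OF cdr
      FOR VALUES FROM ('2018-02-01') TO ('2018-03-01');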

I currently have CDRs that are inserted into a table at a rate of over 100,000 per day, so the table grows large quickly.


At some point I’ll want to prune these records out, so being able to just drop or truncate a child table in one shot makes
partitioning attractive.
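
With monthly child tables as sketched above, pruning a whole month would become a quick catalog operation rather than a row-by-row DELETE (same assumed names):

  -- Drop an expired month outright...
  DROP TABLE cdr_2018_01;

  -- ...or detach it first if the data should be archived elsewhere.
  ALTER TABLE cdr DETACH PARTITION cdr_2018_01;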


From a pure data-warehousing standpoint, what are the do's and don'ts of keeping such large tables?

Other notes…
- This table is never updated, only appended to (CDRs)
- Right now a daily SQL job deletes records older than X days, which is costly: it purges ~100k records at a time (roughly the sketch below)
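
Roughly what that nightly purge does today; the 90-day retention is only a stand-in for "X days", and call_time is the assumed timestamp column. Each run deletes ~100k rows and leaves dead tuples for VACUUM to reclaim later:

  DELETE FROM cdr
   WHERE call_time < now() - interval '90 days';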



--
inoc.net!rblayzor
XMPP: rblayzor.AT.inoc.net
PGP:  https://inoc.net/~rblayzor/