Queries against multi-million record tables.

From: Michael Miyabara-McCaskey
Subject: Queries against multi-million record tables.
Date:
Msg-id: 000b01c08b3c$9ca08cc0$c700a8c0@ncc1701e
List: pgsql-admin
Hello all,

I am in the midst of taking a development DB into production, but the
performance has not been very good so far.

The DB is a decision-based system that currently runs queries against tables
with up to 20 million records (3 GB table sizes), and at this point the DB is
about 25 GB in total. (Later down the road, up to 60 million records and a DB
of up to 150 GB are planned.)
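
[Editor's note: for reference, table and database sizes can be confirmed
directly from psql. A minimal sketch, assuming a modern PostgreSQL with the
built-in size functions pg_total_relation_size and pg_database_size;
big_table is a placeholder name:]

    -- On-disk size of one large table, including its indexes and TOAST data.
    SELECT pg_size_pretty(pg_total_relation_size('big_table'));

    -- Total on-disk size of the current database.
    SELECT pg_size_pretty(pg_database_size(current_database()));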

As I understand it, Oracle has a product called "parallel query" which splits
the queried table into 10 pieces, runs each piece across as many CPUs as
possible, and then puts the results back together again.
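
[Editor's note: PostgreSQL has no direct counterpart here, but the same idea
can be approximated by hand: split the big table into slices, scan each slice
from a separate connection (each backend is its own process, so the OS can
place them on different CPUs), and combine the partial results. A rough
sketch, where orders, order_date, and amount are hypothetical names:]

    -- Hypothetical manual partitioning: split one big table into range slices.
    CREATE TABLE orders_2000 AS
        SELECT * FROM orders WHERE order_date <  '2001-01-01';
    CREATE TABLE orders_2001 AS
        SELECT * FROM orders WHERE order_date >= '2001-01-01';

    -- A query over the full set becomes a union of smaller per-slice scans;
    -- each slice could also be scanned from its own connection and the
    -- partial counts summed client-side.
    SELECT sum(n) FROM (
        SELECT count(*) AS n FROM orders_2000 WHERE amount > 100
        UNION ALL
        SELECT count(*) AS n FROM orders_2001 WHERE amount > 100
    ) AS parts;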

So my question is this: based upon the messages I have read here, it appears
that PostgreSQL does not use multiple CPUs for a single query, but only hands
each new query off to a processor according to operating-system scheduling.

Therefore, what are some good ways to handle such large amounts of
information using PostgreSQL?
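
[Editor's note: one common first step suggested on this list is to verify
that indexes exist and that the planner actually uses them. A minimal sketch,
where orders and customer_id are placeholder names:]

    -- Hypothetical example: index a selective column, refresh planner
    -- statistics, and inspect the query plan for an index scan.
    CREATE INDEX orders_customer_id_idx ON orders (customer_id);
    VACUUM ANALYZE orders;
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;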

Michael Miyabara-McCaskey
Email: mykarz@miyabara.com
Web: http://www.miyabara.com/mykarz/
Mobile: +1 408 504 9014

