syslog.conf
From | Kwan Lai Sum |
---|---|
Subject | syslog.conf |
Date | |
Msg-id | E5DC0E26B2C2D41181F700508BAD31E726E469@SMTP0101 |
List | pgsql-novice |
Hello all,

How do I set up (configure) syslog.conf on Red Hat 7.0 so that I can capture the logs of another machine? Is there any additional work needed?

Thanks,
Sum

> -----Original Message-----
> From: Michael Miyabara-McCaskey [SMTP:mykarz@miyabara.com]
> Sent: Wednesday, January 31, 2001 12:17 PM
> To: pgsql-general@postgresql.org; pgsql-admin@postgresql.org;
> pgsql-novice@postgresql.org
> Subject: [ADMIN] Queries against multi-million record tables.
>
> Hello all,
>
> I am in the midst of taking a development DB into production, but the
> performance has not been very good so far.
>
> The DB is a decision-based system that currently has queries against
> tables with up to 20 million records (3GB table sizes), and at this point
> about a 25GB DB in total. (Later down the road, up to 60 million records
> and a DB of up to 150GB are planned.)
>
> As I understand it, Oracle has a product called "parallel query" which
> splits the queried table into 10 pieces, processes each one across as
> many CPUs as possible, then puts it all back together again.
>
> So my question is... based upon the messages I have read here, it does not
> appear that PostgreSQL makes use of multiple CPUs for a single query, but
> only hands the next query off to the next processor based upon operating
> system rules.
>
> Therefore, what are some good ways to handle such large amounts of
> information using PostgreSQL?
>
> Michael Miyabara-McCaskey
> Email: mykarz@miyabara.com
> Web: http://www.miyabara.com/mykarz/
> Mobile: +1 408 504 9014
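For the syslog.conf question at the top of this message, here is a minimal sketch of the usual sysklogd setup on Red Hat 7.0. It assumes the Red Hat box is the one collecting the logs; the host name `loghost` and the selector `*.info;mail.none;authpriv.none` are placeholders to adapt, and UDP port 514 must be open between the machines.

```
# --- On the Red Hat 7.0 machine that should receive remote logs ---
# syslogd must be started with -r to accept messages over UDP port 514.
# On Red Hat this is set in /etc/sysconfig/syslog:
SYSLOGD_OPTIONS="-r"
# then restart the daemon:
#   /etc/rc.d/init.d/syslog restart

# --- On each remote machine whose logs should be captured ---
# add a forwarding rule to /etc/syslog.conf ("loghost" stands for the
# receiving server's hostname or IP address; the selector is only an example):
*.info;mail.none;authpriv.none		@loghost
# and restart syslogd on that machine as well.
```

Note that older sysklogd versions require TAB characters, not spaces, between the selector and the action field in syslog.conf.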