Discussion: Re: [Slony1-general] Using slony with many schema's

Re: [Slony1-general] Using slony with many schema's

From: snacktime
Date:
First, thanks for all the feedback. After spending some more time
evaluating what we would gain by using Slony, I'm not sure it's worth
it. However, I thought I would get some more feedback before
finalizing that decision.

The primary reason for looking at replication was to move CPU-intensive
SELECT queries to a slave. However, by moving away from per-client
schemas, the report queries for all clients on the server become more
CPU-intensive, not just the ones for clients with large data sets. The
typical distribution is that 95% of our clients have fewer than 5,000
rows in any table, while the other 5% can have hundreds of thousands.
So by putting all the data into one schema, every report query now runs
against a million or more rows instead of just a few hundred or
thousand, and all clients will see a drop in query performance instead
of just the clients with large amounts of data.
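
(To make the trade-off concrete, here is a rough sketch; the schema,
table, and column names below are hypothetical, not taken from the
actual setup being described.)

-- Per-client schemas: a report only ever touches that client's rows.
SELECT date_trunc('month', created_at) AS month, sum(amount)
  FROM client_1234.transactions
 GROUP BY 1;

-- One shared schema: the same report filters a much larger table, and
-- without a suitable index it has to scan every client's rows.
SELECT date_trunc('month', created_at) AS month, sum(amount)
  FROM public.transactions
 WHERE client_id = 1234
 GROUP BY 1;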

Chris

Re: [Slony1-general] Using slony with many schema's

From: snacktime
Date:
Sorry, wrong list; this was meant for the Slony list...

Chris

Re: [Slony1-general] Using slony with many schema's

From: Vivek Khera
Date:
On Oct 11, 2006, at 2:55 PM, snacktime wrote:

> So by putting all the data into one schema, every report query now
> gets run against a million or more rows instead of just a few  hundred
> or thousand.  So all clients will see a drop in query performance
> instead of just the clients with large amounts of data.

Indexes on the customer_id field of the combined data tables help a
lot. That, and big hardware with lots of RAM. :-)

We store data for all our customers in the same tables. Some have
several hundred thousand of their own customers, and millions of
transactions from them; others have a few hundred. The responsiveness
of Postgres is still great.
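
(A rough sketch of the indexing approach described above; the table and
column names are hypothetical, not the actual schema.)

-- Indexing the per-customer key lets a report visit only that
-- customer's rows, however large the combined table grows.
CREATE INDEX transactions_customer_id_idx
    ON transactions (customer_id);

-- A typical per-customer report can then use the index instead of
-- scanning the whole table.
SELECT count(*), sum(amount)
  FROM transactions
 WHERE customer_id = 42;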

