Re: Are 50 million rows a problem for postgres ?
From: Bruno Wolff III
Subject: Re: Are 50 million rows a problem for postgres ?
Date:
Msg-id: 20030908132108.GD14906@wolff.to
In reply to: Re: Are 50 million rows a problem for postgres ? (Vasilis Ventirozos <vendi@cosmoline.com>)
List: pgsql-admin
On Mon, Sep 08, 2003 at 13:26:05 +0300,
  Vasilis Ventirozos <vendi@cosmoline.com> wrote:
> This is a simple statement that i run
>
> core_netfon=# EXPLAIN select spcode,count(*) from callticket group by spcode;
>                                      QUERY PLAN
> ---------------------------------------------------------------------------------------
>  Aggregate  (cost=2057275.91..2130712.22 rows=979151 width=4)
>    ->  Group  (cost=2057275.91..2106233.45 rows=9791508 width=4)
>          ->  Sort  (cost=2057275.91..2081754.68 rows=9791508 width=4)
>                Sort Key: spcode
>                ->  Seq Scan on callticket  (cost=0.00..424310.08 rows=9791508 width=4)
> (5 rows)

In addition to making the changes to the config file as suggested in other
responses, you may also want to do some testing with the 7.4 beta. Hash
aggregates will most likely speed this query up a lot (assuming there aren't
millions of unique spcodes). The production release of 7.4 will probably
happen in about a month.
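As a rough sketch of how you could check this on a 7.4 beta server (using the
same callticket table and spcode column from the quoted query; enable_hashagg
is a real planner setting and defaults to on in 7.4):

    -- Run on PostgreSQL 7.4 beta against the same data.
    SET enable_hashagg = on;   -- on by default in 7.4; shown here for clarity
    EXPLAIN ANALYZE
    SELECT spcode, count(*) FROM callticket GROUP BY spcode;
    -- If the planner uses hash aggregation, the plan shows a HashAggregate
    -- node instead of the Sort -> Group -> Aggregate chain above, avoiding
    -- the sort of all ~9.8 million rows.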