Re: GROUP BY on a large table -- an idea
| From | Martijn van Oosterhout |
|---|---|
| Subject | Re: GROUP BY on a large table -- an idea |
| Date | |
| Msg-id | 20061012095726.GC11723@svana.org |
| In reply to | GROUP BY on a large table -- an idea ("Dawid Kuroczko" <qnex42@gmail.com>) |
| List | pgsql-hackers |
On Thu, Oct 12, 2006 at 09:52:11AM +0200, Dawid Kuroczko wrote:
> Recently I've been playing with quite a big table (over 50mln rows),
> and did some SELECT ... sum(...) WHERE ... GROUP BY ... queries.
>
> The usual plan for these is to sort the entries according to the GROUP BY
> specification, then to run the aggregates one by one. If the data to be
> sorted is large enough, PostgreSQL has no other option than to spill
> to disk, which, well, isn't the fastest...

<snip>

That sounds an awful lot like the HashAggregate node type, which has existed since at least 7.4. It keeps a hashtable of "keys" with attached aggregate "states", so the input does not need to be sorted.

Hope this helps,

--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.
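For readers unfamiliar with the node type mentioned above, here is a minimal conceptual sketch of hash aggregation: a hashtable maps each GROUP BY key to a running aggregate state, so grouping never requires sorting the input. This is an illustration in Python, not PostgreSQL's actual C implementation, and the table/column names in the example are hypothetical.

```python
# Conceptual sketch of hash aggregation (the idea behind a HashAggregate node),
# not PostgreSQL's implementation: keep a hashtable of group keys -> running
# aggregate states, folding each input row into its group's state.
from collections import defaultdict

def hash_aggregate(rows, key_func, init_state, transition):
    """rows: iterable of tuples; key_func: extracts the GROUP BY key;
    init_state: initial aggregate state; transition: folds a row into a state."""
    states = defaultdict(lambda: init_state)  # hashtable of keys with attached states
    for row in rows:
        key = key_func(row)
        states[key] = transition(states[key], row)
    return dict(states)

# Hypothetical example: SELECT dept, sum(amount) FROM payments GROUP BY dept
rows = [("sales", 100), ("hr", 40), ("sales", 60)]
result = hash_aggregate(rows,
                        key_func=lambda r: r[0],
                        init_state=0,
                        transition=lambda state, r: state + r[1])
print(result)  # {'sales': 160, 'hr': 40}
```

The trade-off hinted at in the quoted message still applies: a hashtable like this must hold one state per distinct key in memory, so it helps most when the number of groups is much smaller than the number of input rows.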