Re: page compression
From | Simon Riggs
---|---
Subject | Re: page compression
Date |
Msg-id | 1294011362.2090.4214.camel@ebony
In reply to | page compression (Andy Colson <andy@squeakycode.net>)
Responses | Re: page compression
List | pgsql-hackers
On Tue, 2010-12-28 at 09:10 -0600, Andy Colson wrote:
> I know its been discussed before, and one big problem is license and
> patent problems.

Would like to see a design for that. There are a few different ways we
might want to do that, and I'm interested to see if it's possible to get
compressed pages to be indexable as well.

For example, if you compress 2 pages into 8Kb then you do one I/O and
out pops 2 buffers. That would work nicely with ring buffers.

Or you might try to have pages > 8Kb in one block, which would mean
decompressing every time you access the page. That wouldn't be much of a
problem if we were just seq scanning.

Or you might want to compress the whole table at once, so it can only be
read by seq scan. Efficient, but no indexes.

It would be interesting to explore pre-populating the compression
dictionary with some common patterns.

Anyway, interesting topic.

--
Simon Riggs http://www.2ndQuadrant.com/books/
PostgreSQL Development, 24x7 Support, Training and Services
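To make the first idea concrete (two 8Kb pages compressed into one block, so a single I/O inflates back into two buffers), here is a minimal sketch assuming zlib and an 8192-byte BLCKSZ. The names compress_page_pair and decompress_page_pair are hypothetical illustrations, not PostgreSQL buffer-manager code.

```c
/*
 * Hypothetical sketch: try to fit two adjacent 8Kb pages into one 8Kb
 * block with zlib.  If the pair compresses small enough, one physical
 * read can later be inflated back into two page buffers; otherwise the
 * pages would stay uncompressed.  Build with: gcc pair_compress.c -lz
 */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define PAGE_SIZE 8192          /* assumed BLCKSZ */

/* Returns the compressed size if both pages fit in out_size bytes, else 0. */
static size_t
compress_page_pair(const unsigned char *page_a, const unsigned char *page_b,
                   unsigned char *out, size_t out_size)
{
    unsigned char pair[2 * PAGE_SIZE];
    uLongf        dest_len = (uLongf) out_size;

    memcpy(pair, page_a, PAGE_SIZE);
    memcpy(pair + PAGE_SIZE, page_b, PAGE_SIZE);

    /* compress2 returns Z_BUF_ERROR if the result would exceed out_size. */
    if (compress2(out, &dest_len, pair, sizeof(pair),
                  Z_DEFAULT_COMPRESSION) != Z_OK)
        return 0;

    return (size_t) dest_len;
}

/* Inflate one compressed block back into two page buffers. */
static int
decompress_page_pair(const unsigned char *blk, size_t blk_len,
                     unsigned char *page_a, unsigned char *page_b)
{
    unsigned char pair[2 * PAGE_SIZE];
    uLongf        dest_len = sizeof(pair);

    if (uncompress(pair, &dest_len, blk, blk_len) != Z_OK ||
        dest_len != sizeof(pair))
        return -1;

    memcpy(page_a, pair, PAGE_SIZE);
    memcpy(page_b, pair + PAGE_SIZE, PAGE_SIZE);
    return 0;
}

int
main(void)
{
    unsigned char a[PAGE_SIZE] = {0}, b[PAGE_SIZE] = {0};
    unsigned char blk[PAGE_SIZE];
    size_t        n;

    /* All-zero pages compress trivially; real heap pages may not fit. */
    n = compress_page_pair(a, b, blk, sizeof(blk));
    if (n > 0 && decompress_page_pair(blk, n, a, b) == 0)
        printf("pair fits in %zu bytes; one read yields two buffers\n", n);
    else
        printf("pair does not fit; store pages uncompressed\n");

    return 0;
}
```

The dictionary idea mentioned above could, under the same zlib assumption, be explored with deflateSetDictionary(), seeding the compressor with byte patterns common to heap pages before compressing each pair.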