Re: [WIP] Effective storage of duplicates in B-tree index.
| From | Thom Brown |
|---|---|
| Subject | Re: [WIP] Effective storage of duplicates in B-tree index. |
| Date | |
| Msg-id | CAA-aLv610dx+4KyTCpKhvX+vSSJcO7i7BwhTz4jJCyEX2k1rwA@mail.gmail.com |
| In response to | Re: [WIP] Effective storage of duplicates in B-tree index. (Peter Geoghegan <pg@heroku.com>) |
| List | pgsql-hackers |
On 28 January 2016 at 17:09, Peter Geoghegan <pg@heroku.com> wrote:

> On Thu, Jan 28, 2016 at 9:03 AM, Thom Brown <thom@linux.com> wrote:
>> I'm surprised that efficiencies can't be realised beyond this point. Your results show a sweet spot at around 1000/10000000, with it getting slightly worse beyond that. I kind of expected a lot of efficiency where all the values are the same, but perhaps that's due to my lack of understanding regarding the way they're being stored.
>
> I think that you'd need an I/O bound workload to see significant
> benefits. That seems unsurprising. I believe that random I/O from
> index writes is a big problem for us.

I was thinking more from the point of view of the index size. An index containing 10 million duplicate values is around 40% of the size of an index with 10 million unique values.

Thom
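(A size comparison like the one described above could be reproduced along these lines; the table and index names here are illustrative, not from the original thread, and the ~40% ratio reported would only be expected with the patch under discussion applied.)

```sql
-- Hypothetical reproduction of the index-size comparison described above.
-- Table and index names are illustrative.
CREATE TABLE dup_test  (v integer);
CREATE TABLE uniq_test (v integer);

-- 10 million identical values vs. 10 million distinct values.
INSERT INTO dup_test  SELECT 1 FROM generate_series(1, 10000000);
INSERT INTO uniq_test SELECT g FROM generate_series(1, 10000000) g;

CREATE INDEX dup_test_v_idx  ON dup_test  (v);
CREATE INDEX uniq_test_v_idx ON uniq_test (v);

-- Compare on-disk index sizes.
SELECT pg_size_pretty(pg_relation_size('dup_test_v_idx'))  AS dup_index,
       pg_size_pretty(pg_relation_size('uniq_test_v_idx')) AS uniq_index;
```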