Discussion: Strange optimization - xmin,xmax compression :)


Strange optimization - xmin,xmax compression :)

From:
pasman pasmański
Date:
Hello.

I tested how xmin,xmax values are distributed across pages.
In my tables there are typically no more than 80 records
per page.

Maybe it's possible to compress the xmin & xmax values to
1 byte per record (plus a table of transactions per page)?
That would reduce the space when more than one record on a
page comes from the same transaction.
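The idea can be sketched as a toy model (plain Python, not PostgreSQL internals; the function and its layout are illustrative assumptions): each page keeps a small table of its distinct XIDs, and each tuple slot stores a 1-byte index into that table instead of a full 4-byte XID.

```python
def compress_page_xids(tuple_xids):
    """Dictionary-encode a page's XID values (toy model).

    Instead of a 4-byte XID per tuple, store a 1-byte index into a
    per-page table of distinct XIDs.  Only works while the page has
    seen at most 256 distinct transactions.
    """
    xid_table = []   # per-page table of distinct XIDs
    index = {}       # XID -> position in xid_table
    slots = []
    for xid in tuple_xids:
        if xid not in index:
            if len(xid_table) >= 256:
                raise ValueError("more than 256 distinct XIDs on page")
            index[xid] = len(xid_table)
            xid_table.append(xid)
        slots.append(index[xid])   # each index fits in one byte
    return xid_table, bytes(slots)

# 80 tuples on a page, inserted by just 3 transactions:
xids = [1001] * 40 + [1002] * 30 + [1003] * 10
table, slots = compress_page_xids(xids)
plain  = 4 * len(xids)                 # 320 bytes of raw XIDs
packed = 4 * len(table) + len(slots)   # 12 + 80 = 92 bytes
```

Decoding is just `table[slot]` per tuple; the saving grows with the number of tuples sharing a transaction, which matches the observation above.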


Testing query:

SELECT
  (string_to_array(ctid::text, ','))[1] AS page,
  count(*) AS records,
  array_upper(array_agg(DISTINCT xmin::text), 1) AS trans
FROM ONLY
  "Rejestr stacji do naprawy"
GROUP BY
  (string_to_array(ctid::text, ','))[1]
ORDER BY
  3 DESC;

------------
pasman

Re: Strange optimization - xmin,xmax compression :)

From:
Robert Haas
Date:
2010/12/6 pasman pasmański <pasman.p@gmail.com>:
> Hello.
>
> I tested how xmin,xmax values are distributed across pages.
> In my tables there are typically no more than 80 records
> per page.
>
> Maybe it's possible to compress the xmin & xmax values to
> 1 byte per record (plus a table of transactions per page)?
> That would reduce the space when more than one record on a
> page comes from the same transaction.

Not a bad idea, but not easy to implement, I think.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: Strange optimization - xmin,xmax compression :)

From:
Jim Nasby
Date:
On Dec 17, 2010, at 8:46 PM, Robert Haas wrote:
> 2010/12/6 pasman pasmański <pasman.p@gmail.com>:
>> Hello.
>>
>> I tested how xmin,xmax values are distributed across pages.
>> In my tables there are typically no more than 80 records
>> per page.
>>
>> Maybe it's possible to compress the xmin & xmax values to
>> 1 byte per record (plus a table of transactions per page)?
>> That would reduce the space when more than one record on a
>> page comes from the same transaction.
>
> Not a bad idea, but not easy to implement, I think.

Another option that would help even more for data warehousing would be storing the XIDs at the table level, because you'll typically have a very limited number of transactions per table.
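Some rough arithmetic shows why the table-level variant is attractive for warehousing workloads (illustrative numbers only, not the actual PostgreSQL tuple-header layout):

```python
# Space comparison: raw 4-byte xmin/xmax per tuple versus 1-byte
# indices into a single table-wide dictionary of distinct XIDs.
tuples = 1_000_000
xid_bytes = 4          # size of a 32-bit TransactionId
distinct_xids = 5      # e.g. a handful of bulk-load transactions

plain  = tuples * 2 * xid_bytes                      # xmin + xmax, uncompressed
packed = tuples * 2 * 1 + distinct_xids * xid_bytes  # 1-byte slots + dictionary

print(plain, packed)   # 8000000 vs 2000020
```

In this scenario the header overhead for xmin/xmax drops to roughly a quarter, which is the kind of number that would be needed to make the case Jim describes.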

But as Robert mentioned, this is not easy to implement. The community would probably need to see some pretty compelling performance numbers to even consider it.
--
Jim C. Nasby, Database Architect                   jim@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net