Row Size and Indexing

From: tschak
Subject: Row Size and Indexing
Msg-id: 1137163708.267913.192180@o13g2000cwo.googlegroups.com
Replies: Re: Row Size and Indexing (Martijn van Oosterhout <kleptog@svana.org>)
List: pgsql-general
Hi guys,

I have some questions about postgres and would be glad if you could
help me...
Here is what I am planning to do:
Build a database with 1 billion (10^9) rows describing geometric
properties:
create table steric_descriptor(
    distance0 bit(4),
    distance1 bit(4),
    distance2 bit(4),
    distance3 bit(4),
    distance4 bit(4),
    distance5 bit(4),
    distance6 bit(4),
    distance7 bit(4),
    comp_desc BLOB);
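
(Side note: as far as I can tell there is no BLOB column type in PostgreSQL
itself; bytea, or large objects, seem to be the closest fit. So what I would
actually run is probably a sketch like the following, with bytea standing in
for the BLOB; the later examples assume this layout.)

-- sketch: same table, with bytea in place of BLOB,
-- since PostgreSQL has no BLOB column type
create table steric_descriptor(
    distance0 bit(4),
    distance1 bit(4),
    distance2 bit(4),
    distance3 bit(4),
    distance4 bit(4),
    distance5 bit(4),
    distance6 bit(4),
    distance7 bit(4),
    comp_desc bytea);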

Question Nr ONE --- Concerning row size
Is it correct that I"ll need:
27 Byte (rowdescriptor) +4Byte (OID) + 4 Byte (for the distances) +
BLOB_SIze (say 30 BYTE)
=> 65 Byte per Tupel dending on the storage space of the BLOB (in
BLOB_Space or the tupel itself)?
Is it possible to index the row if I disable the ROWID to save storage
space in order to get more tupels per page?
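
To make the size question concrete, here is how I would try to measure it
rather than only calculating by hand (a sketch, assuming the bytea variant
of the table above and a server version that has pg_column_size() and
pg_relation_size() built in; the inserted values are made up):

-- one dummy row: eight 4-bit distances plus a 16-byte descriptor
INSERT INTO steric_descriptor
VALUES (B'0001', B'0010', B'0011', B'0100',
        B'0101', B'0110', B'0111', B'1000',
        decode('00112233445566778899aabbccddeeff', 'hex'));

-- on-disk size of the individual values
SELECT pg_column_size(distance0) AS bit4_bytes,
       pg_column_size(comp_desc) AS blob_bytes
FROM steric_descriptor LIMIT 1;

-- heap size divided by row count gives bytes per tuple including all overhead
SELECT pg_relation_size('steric_descriptor') / count(*) AS bytes_per_tuple
FROM steric_descriptor;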

QUESTION NR. TWO --- Indexing
My query will look like this:
SELECT p FROM steric_descriptor p WHERE
    p.distance0 <= threshold0
    AND ... ... ... ...
    AND p.distance7 <= threshold7;
Actually I was planning to use 8 distinct bitmap indices with 15 bits
(2 bytes) each per row (16 bytes in total), but as far as I know
PostgreSQL does not offer a real disk-resident bitmap index... Is that
true? If so, I would have to use composite B*-trees, which do not
perform well for composite range queries. That leads to the new bitmap
scan of PostgreSQL => 8 B*-trees, each with a 4-bit key and
blocksize/rowsize + 1 pointers (4 bytes each). This means roughly twice
the size for the B*-tree version just to get the same functionality
that a real bitmap index would offer...
Is there a way to avoid this, or are my calculations somewhat (maybe
even totally) wrong?
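
For clarity, this is the 8-B*-tree setup I am comparing against: one
single-column index per distance column, hoping the planner combines them
via its in-memory bitmap (BitmapAnd). Just a sketch with made-up index names
and thresholds; whether the planner really picks that plan at this
selectivity is exactly what I am unsure about.

-- one btree per distance; recent PostgreSQL can AND several index scans
-- together in memory (bitmap index scan), as far as I understand it
CREATE INDEX steric_d0_idx ON steric_descriptor (distance0);
CREATE INDEX steric_d1_idx ON steric_descriptor (distance1);
CREATE INDEX steric_d2_idx ON steric_descriptor (distance2);
CREATE INDEX steric_d3_idx ON steric_descriptor (distance3);
CREATE INDEX steric_d4_idx ON steric_descriptor (distance4);
CREATE INDEX steric_d5_idx ON steric_descriptor (distance5);
CREATE INDEX steric_d6_idx ON steric_descriptor (distance6);
CREATE INDEX steric_d7_idx ON steric_descriptor (distance7);

-- check which plan the planner actually chooses
EXPLAIN
SELECT p FROM steric_descriptor p
WHERE p.distance0 <= B'0011'
  AND p.distance1 <= B'0011'
  AND p.distance2 <= B'0011'
  AND p.distance3 <= B'0011'
  AND p.distance4 <= B'0011'
  AND p.distance5 <= B'0011'
  AND p.distance6 <= B'0011'
  AND p.distance7 <= B'0011';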

I would appreciate it if someone could verify my thoughts or give me a
hint on how to proceed... I know that it will be quite a big database,
and I am not even sure whether PostgreSQL can handle such large tables
efficiently.

Question Nr. THREE:
If I do need OIDs for indexing and I have 10^9 rows in my DB, do the
indices need OIDs as well? If they do, I might run into the 4 billion
size limit of the OIDs, is that correct? I don't know what the physical
structure of an index looks like, so I might be wrong on this as
well...
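
In case it matters, the workaround I had in mind for the OID limit is simply
to keep the heap rows OID-free (a sketch, assuming a server version where
per-row OIDs still exist and the WITHOUT OIDS clause is accepted):

-- drop the per-row OID so the billion rows do not consume the 32-bit OID
-- counter; as far as I understand, index entries point at heap tuples via
-- their ctid (block number + offset within the page), not via the OID
ALTER TABLE steric_descriptor SET WITHOUT OIDS;

-- or declare it that way from the start:
-- create table steric_descriptor( ... ) WITHOUT OIDS;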


Thanks in advance,

Chuck

