Discussion: Array, bytea and large objects


Array, bytea and large objects

From:
David Wall
Date:
I am trying to assess the db issues surrounding several constructs
allowed in PG 8.3, including ARRAY, BYTEA and large objects (LO).

We store a lot of data as encrypted XML structures (mostly name-value pairs)
that can be updated many times during their lifetime (most updates occur over
several days, after which the data tends not to change any more), as well as
images and uploaded files (these rarely change and are only
inserted/deleted).  We currently use LO for all of these.  We mostly use the
JDBC library for access to PG.

First, LOs seem to allow an OID column to be added to any number of
tables, but is it true that the actual large object data is stored in a
single table (pg_largeobject?).  If so, wouldn't this become a
bottleneck if LOs were used frequently?  Even vacuuming and vacuumlo
must create a lot of pressure on that one table if LOs are used
extensively.  And can you backup a table with an OID column and get only
those LOs referenced in the dump?

Does the JDBC library support LO streaming?  Can I receive data,
compress, encrypt and stream into the database as well as do the
opposite when reading it back?

If I have an "unlimited" number of name-value pairs that I'd like to get
easy access to for flexible reports, could I store these in two arrays
(one for name, the other for value) in a table so that if I had 10
name-value pairs or 200 name-value pairs, I could store these into a
single row using arrays so I could retrieve all name-value pairs in a
single SELECT from the db?  How are these arrays stored -- does it use
an underlying type like LO or BYTEA?

How big can an LO get?  Is it 2GB?
How many LO fields can I have in a database?
It seems that the LO may even be implemented as an OID with one or more
BYTEA storage structures in the pg_largeobject table (loid, pageno, data).
Is that true?

How big is a "page"?  Maybe an LO is more efficient than a BYTEA if it's
bigger than one page?

How big can a BYTEA get?  Is it 1GB?
At what size does it make more sense to store in LO instead of a BYTEA
(because of all the escaping and such)?
How many BYTEA fields can I have in a database?
Are the BYTEA fields stored in the same table as the rest of the data?
I believe this is yes, so a backup of that table will include the binary
data, too, correct?

How big can an ARRAY get?  Is it 1GB?
How many ARRAY fields can I have in a table or database?   Are there
limits?
Are the ARRAY fields stored in the same table as the rest of the data?

Sorry for all the questions; I'm trying to research this, but the info is not
always clear (and perhaps some of the stuff I find is not even true).

I am wondering whether, when my encrypted XML data is small, I should store
it in a table using BYTEA so that each "record" in my application (which uses
the encrypted XML name-value storage) is not forced into the single
pg_largeobject table, and then switch to LO when the data reaches a threshold
size.  Thoughts?

Thanks,
David

Re: Array, bytea and large objects

From:
Harald Fuchs
Date:
In article <4989E659.3000706@computer.org>,
David Wall <d.wall@computer.org> writes:

> If I have an "unlimited" number of name-value pairs that I'd like to
> get easy access to for flexible reports, could I store these in two
> arrays (one for name, the other for value) in a table so that if I had
> 10 name-value pairs or 200 name-value pairs, I could store these into
> a single row using arrays so I could retrieve all name-value pairs in
> a single SELECT from the db?

I would use the hstore contrib module for that.
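
Just to illustrate (not from the original mail), a minimal JDBC sketch of that idea might look like the following; it assumes the hstore contrib module is installed and uses a made-up table records(id int, attrs hstore):

    // Hypothetical sketch: an arbitrary set of name-value pairs kept in a
    // single hstore column of an invented table records(id int, attrs hstore).
    import java.sql.*;

    public class HstoreSketch {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "password");

            // Store any number of pairs in one row.
            PreparedStatement ins = conn.prepareStatement(
                    "INSERT INTO records (id, attrs) VALUES (?, 'color => red, size => large'::hstore)");
            ins.setInt(1, 1);
            ins.executeUpdate();
            ins.close();

            // Fetch all pairs for a record with a single SELECT.
            PreparedStatement sel = conn.prepareStatement("SELECT attrs FROM records WHERE id = ?");
            sel.setInt(1, 1);
            ResultSet rs = sel.executeQuery();
            while (rs.next()) {
                // The driver returns hstore as text, e.g. "color"=>"red", "size"=>"large";
                // parse it client-side or filter on keys in SQL with hstore operators.
                System.out.println(rs.getString(1));
            }
            rs.close();
            sel.close();
            conn.close();
        }
    }

hstore also gives you operators such as attrs -> 'color' and attrs ? 'color' for selecting or filtering on individual keys in SQL, which parallel arrays would not.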

Re: Array, bytea and large objects

From:
Filip Rembiałkowski
Date:


2009/2/4 David Wall <d.wall@computer.org>

First, LOs seem to allow an OID column to be added to any number of tables, but is it true that the actual large object data is stored in a single table (pg_largeobject?).

yes.
 
 If so, wouldn't this become a bottleneck if LOs were used frequently?  Even vacuuming and vacuumlo must create a lot of pressure on that one table if LOs are used extensively.  And can you backup a table with an OID column and get only those LOs referenced in the dump?

Does the JDBC library support LO streaming?  Can I receive data, compress, encrypt and stream into the database as well as do the opposite when reading it back?

http://jdbc.postgresql.org/documentation/83/index.html
see Large Objects and "Storing Binary Data"

Transparent encryption is not implemented in the driver, but of course it is possible on the application side.
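
As a rough illustration (not from the original mail), streaming into a large object with compression layered on top could look like the sketch below; the table documents and its columns are invented, and older 8.3-era drivers name the creation method create() and use int OIDs rather than createLO()/long:

    // Hypothetical sketch: stream data into a PostgreSQL large object via the
    // JDBC LargeObject API, gzip-compressing on the way in. An encrypting
    // stream (e.g. javax.crypto.CipherOutputStream) could be added to the
    // same chain. The table documents(name, content_oid) is made up.
    import java.io.*;
    import java.sql.*;
    import java.util.zip.GZIPOutputStream;
    import org.postgresql.PGConnection;
    import org.postgresql.largeobject.LargeObject;
    import org.postgresql.largeobject.LargeObjectManager;

    public class LoStreamSketch {
        public static long storeCompressed(Connection conn, InputStream in) throws Exception {
            conn.setAutoCommit(false);  // the LO API must be used inside a transaction
            LargeObjectManager lom = ((PGConnection) conn).getLargeObjectAPI();
            long oid = lom.createLO(LargeObjectManager.READWRITE);
            LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);

            // input -> gzip -> large object, one buffer at a time
            OutputStream out = new GZIPOutputStream(lo.getOutputStream());
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            out.close();  // flushes the gzip trailer and closes the large object stream

            // reference the new LO from an ordinary row
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO documents (name, content_oid) VALUES (?, ?)");
            ps.setString(1, "upload.xml.gz");
            ps.setLong(2, oid);
            ps.executeUpdate();
            ps.close();

            conn.commit();
            return oid;
        }
    }

Reading back is symmetric: open the OID with LargeObjectManager.READ and wrap lo.getInputStream() in a GZIPInputStream (plus a decrypting stream if needed).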



If I have an "unlimited" number of name-value pairs that I'd like to get easy access to for flexible reports, could I store these in two arrays (one for name, the other for value) in a table so that if I had 10 name-value pairs or 200 name-value pairs, I could store these into a single row using arrays so I could retrieve all name-value pairs in a single SELECT from the db?

Yes, you could, but what for?
What's wrong with many rows? create table kvstore(key text primary key, value text);
What stops you from using a single SELECT to get all these kv pairs?

 
 How are these arrays stored -- does it use an underlying type like LO or BYTEA?

How big can an LO get?  Is it 2GB?
No, much larger. I guess it's limited by the max table size.

see http://www.postgresql.org/docs/faqs.FAQ.html#item4.4
 

How many LO fields can I have in a database?

As many as the max number of different OIDs, minus the number of system objects,
so I think 2^31 large objects are possible without a problem.

 
It seems that the LO may even be implemented as an OID with one or more BYTEA storage structure in the pg_largeobject table (loid,pageno,data).  Is that true?

that's how it is actually implemented - you probably know it already :)
 


How big is a "page"?  Maybe an LO is more efficient than a BYTEA if it's bigger than one page?
The default and recommended page size is 8 kB.
 


How big can a BYTEA get?  Is it 1GB?
Yes.
 

At what size does it make more sense to store in LO instead of a BYTEA (because of all the escaping and such)?
How many BYTEA fields can I have in a database?
no limit (other than limits mentioned in the FAQ)
 
Are the BYTEA fields stored in the same table as the rest of the data?  

Yes, and in the TOAST table if the value is larger than about 1/3 of a page. Search for TOAST details if you're interested.

 
I believe this is yes, so a backup of that table will include the binary data, too, correct?
yes
 

How big can an ARRAY get?  Is it 1GB?
yes
 

How many ARRAY fields can I have in a table or database?   Are there limits?
same as bytea or any other type
 

Are the ARRAY fields stored in the same table as the rest of the data?

I guess ARRAYs are serialized and stored according to the same rules as any other data (the TOAST mechanism).

 


Sorry for all the questions, but I'm trying to research it but the info is not always clear (and perhaps some of the stuff I find is not even true).

most of your questions are answered in the documentation and FAQ.
 


I am wondering if when my encrypted XML data is small, should I choose to store it in a table using BYTEA so that each "record" in my application (which uses the encrypted XML name-value storage) is not forced to be in a single pg_largeobject table, and use LO when my data reaches a threshold size?  Thoughts?

I'd avoid LO unless you really need streaming (block-wise) access.


cheers,

--
Filip Rembiałkowski
JID,mailto:filip.rembialkowski@gmail.com
http://filip.rembialkowski.net/

Re: Array, bytea and large objects

From:
David Wall
Date:
Thanks, Filip.
If I have an "unlimited" number of name-value pairs that I'd like to get easy access to for flexible reports, could I store these in two arrays (one for name, the other for value) in a table so that if I had 10 name-value pairs or 200 name-value pairs, I could store these into a single row using arrays so I could retrieve all name-value pairs in a single SELECT from the db?

yes you could, but what for?
what's wrong with many rows? create table kvstore(key text primary key,value text);
what stops you from using single SELECT to get all these kv pairs?

We basically do it that way now, but we were thinking we might run tests to see if the other approach is faster.  When we run reports, only 2-3 of the name-value pairs are used in search criteria, so those we'd like to keep in such a table; the other fields (typically 10-30 name-value pairs) are just listed in the report, so we thought it might make sense to keep these in a single row for efficiency's sake, since we retrieve them as a group and don't need to sort or select based on their values.  "Single SELECT" was a poor choice of words; we were thinking more that retrieving a single row with 10-30 name-values stored in an ARRAY would be faster than retrieving 10-30 rows from a joined table.
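
For what it's worth, a rough JDBC sketch of that parallel-array layout is below; the table report_fields(record_id, names, vals) is invented for illustration, and createArrayOf is a JDBC 4 call, so an 8.3-era driver might instead need an array literal such as '{a,b,c}':

    // Hypothetical sketch: 10-30 name-value pairs stored as two parallel
    // text[] columns in one row of an invented table report_fields.
    import java.sql.*;

    public class ArrayPairsSketch {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "password");

            String[] names = { "first_name", "last_name", "city" };
            String[] vals  = { "David", "Wall", "Somewhere" };

            PreparedStatement ins = conn.prepareStatement(
                    "INSERT INTO report_fields (record_id, names, vals) VALUES (?, ?, ?)");
            ins.setLong(1, 42L);
            ins.setArray(2, conn.createArrayOf("text", names));
            ins.setArray(3, conn.createArrayOf("text", vals));
            ins.executeUpdate();
            ins.close();

            // One row and one round trip back; unpack the pairs on the client.
            PreparedStatement sel = conn.prepareStatement(
                    "SELECT names, vals FROM report_fields WHERE record_id = ?");
            sel.setLong(1, 42L);
            ResultSet rs = sel.executeQuery();
            if (rs.next()) {
                String[] n = (String[]) rs.getArray(1).getArray();
                String[] v = (String[]) rs.getArray(2).getArray();
                for (int i = 0; i < n.length; i++) {
                    System.out.println(n[i] + " = " + v[i]);
                }
            }
            rs.close();
            sel.close();
            conn.close();
        }
    }

The trade-off is that the values are opaque to SQL, so this only suits fields that are listed in reports rather than searched on, which is exactly the split described above.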


At what size does it make more sense to store in LO instead of a BYTEA (because of all the escaping and such)?
How many BYTEA fields can I have in a database?
no limit (other than limits mentioned in the FAQ)
 
Are the BYTEA fields stored in the same table as the rest of the data?  

yes - and the TOAST tables if it's larger than 1/3 of a page or so. search for TOAST details if you're interested.
Hmm...  So a page is 8192 bytes, and a value leaves your regular table and goes to TOAST if the BYTEA is more than about 2730 bytes.  I thought it only went to TOAST when it exceeded the page size, not just one-third of it.  I am sure we have lots of encrypted, compressed XML (so it's all binary at this point, no longer text) that would be less than that.  So perhaps it makes sense to use BYTEA for these smaller binary objects, as the data is stored with the row, is simpler to deal with, is easy to handle in memory (unlike really big LOs), and the cost of escaping each byte may not be too high.
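
If it helps, handling those smaller blobs as BYTEA from JDBC is just setBytes/getBytes; the driver takes care of the escaping. The table envelope_data(id bigint, payload bytea) below is invented for illustration:

    // Hypothetical sketch: small encrypted/compressed blobs written and read
    // as bytea through JDBC, using an invented table envelope_data(id, payload).
    import java.sql.*;

    public class ByteaSketch {
        public static void store(Connection conn, long id, byte[] encryptedXml) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO envelope_data (id, payload) VALUES (?, ?)");
            ps.setLong(1, id);
            ps.setBytes(2, encryptedXml);  // payload is a bytea column
            ps.executeUpdate();
            ps.close();
        }

        public static byte[] load(Connection conn, long id) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT payload FROM envelope_data WHERE id = ?");
            ps.setLong(1, id);
            ResultSet rs = ps.executeQuery();
            byte[] data = rs.next() ? rs.getBytes(1) : null;
            rs.close();
            ps.close();
            return data;
        }
    }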

I have seen a comparison (http://zephid.dk/2008/08/09/oid-vs-bytea-in-postgresql/) that shows BYTEA uses more memory (up to 10x more) and is slower (about 4x slower) than LOs, which indicates that most of this is due to escaping the bytes.

I'd avoid LO unless you really need streaming (block-wise) access.
This is interesting only because we've done the opposite.  That is, we store all binary data (mostly compressed, encrypted XML name-values) in LOs today and it works well.  But we are concerned about the pg_largeobject table being a bottleneck, becoming an issue for vacuum/vacuumlo/pg_dump as our database grows.

We'd like to do streaming for large files being uploaded, but today we're not doing that: we have a java.sql.Blob interface class that essentially reads/writes using a byte array, so we're not getting any benefit of streaming for very large objects.  Though as I said, most of our LOs are really not that big and thus not an issue for us.  We'll see what it would take to change this to better support streaming for the truly large objects we store.

Since you'd avoid LOs, what are the main advantages of BYTEA?  (Since we use JDBC, we can use both with equal ease, as both currently work for us using byte arrays in our Java code.)  I'm still thinking we may find that, based on the size of the binary data, it is best to choose BYTEA for smaller data (< 8192 or < 2730 bytes) and LOs elsewhere.

Thanks,
David

Re: Array, bytea and large objects

From:
Filip Rembiałkowski
Date:


2009/2/5 David Wall <d.wall@computer.org>


 
Are the BYTEA fields stored in the same table as the rest of the data?  

yes - and the TOAST tables if it's larger than 1/3 of a page or so. search for TOAST details if you're interested.
Hmm...  So a page is 8192 bytes, and it leaves your regular table and goes to TOAST if the BYTEA is more than 2730 bytes.  I thought it only went to TOAST when it exceed the page size, not just one-third of its size.

I wrote "or so" because I did not remember all details.
 - check http://www.postgresql.org/docs/8.3/static/storage-toast.html
for 100% accurate explanation.

cheers
Filip


--
Filip Rembiałkowski
JID,mailto:filip.rembialkowski@gmail.com
http://filip.rembialkowski.net/