bytea performance issue

From: brian stone
Subject: bytea performance issue
Date:
Msg-id: 195442.63328.qm@web59005.mail.re1.yahoo.com
Responses: Re: bytea performance issue (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
I have to store binary data in a table, ranging from 512K to 1M per record.  I am getting very poor performance when inserting this data.

create table my_stuff (data bytea);

I then try to insert ten 1M blobs into this table using PQexecParams from C.  It takes ~10 seconds to insert the 10 records.

The test runs locally, so this is not a network issue.  If I change the data sent from the client to 1M worth of text, and change the my_stuff.data column to TEXT, the same test takes ~2 seconds.  Has anyone else seen this performance issue with bytea?

I looked at large objects, but they are identified by an Oid, and I will need more than 4 billion unique identifiers.  Maybe not 4 billion objects at once, but I would burn through them quickly.
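For reference, the large-object interface I looked at goes roughly like this (a sketch; connection setup and error handling are omitted).  Every object is identified by the Oid that lo_creat hands back, which is where the identifier limit comes from:

#include <libpq-fe.h>
#include <libpq/libpq-fs.h>   /* INV_READ, INV_WRITE */

/* Store one blob as a large object and return the Oid that identifies it.
   Large-object operations have to run inside a transaction. */
Oid store_blob(PGconn *conn, const char *buf, size_t len)
{
    PQclear(PQexec(conn, "BEGIN"));
    Oid oid = lo_creat(conn, INV_READ | INV_WRITE);   /* one Oid consumed per object */
    int fd  = lo_open(conn, oid, INV_WRITE);
    lo_write(conn, fd, buf, len);
    lo_close(conn, fd);
    PQclear(PQexec(conn, "COMMIT"));
    return oid;
}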

I looked at the byteain and byteaout functions in the PostgreSQL source, and there is quite a bit of processing going on there (the escaping).  Although, that is only for the text-to-internal conversion, correct?  If I use PQexecParams and set the parameter format to binary, the backend should use the recv/send functions instead ... right?
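Concretely, the call I mean is something like this (a minimal sketch; connection setup, error handling and filling the 1M buffer are left out):

#include <libpq-fe.h>

/* Insert one blob as a binary parameter: paramFormats[0] = 1 tells the server
   the value is already in binary form, so it should go through the type's
   recv function rather than byteain. */
int insert_blob(PGconn *conn, const char *buf, int len)
{
    const char *paramValues[1]  = { buf };
    const int   paramLengths[1] = { len };
    const int   paramFormats[1] = { 1 };     /* 1 = binary, 0 = text */

    PGresult *res = PQexecParams(conn,
                                 "INSERT INTO my_stuff (data) VALUES ($1::bytea)",
                                 1,           /* one parameter */
                                 NULL,        /* let the cast resolve the type */
                                 paramValues,
                                 paramLengths,
                                 paramFormats,
                                 0);          /* results in text format */
    int ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
    PQclear(res);
    return ok;
}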

I don't need this escaping; this data never needs to be in text form.  Is it possible to create a user-defined type that is binary only?  It looks like I am forced to define an input and an output function, and the input function takes a cstring.

Any other solutions for getting binary data into PostgreSQL?  Suggestions?

skye


