Re: large INSERT leads to "invalid memory alloc"

From	Stefan Froehlich
Subject	Re: large INSERT leads to "invalid memory alloc"
Date	2012-11-25 19:39:45
Msg-id	20121125193945.GA4894@89-149-202-102.internetserviceteam.com
In reply to	Re: large INSERT leads to "invalid memory alloc"  (Pavel Stehule <pavel.stehule@gmail.com>)
Responses	Re: large INSERT leads to "invalid memory alloc"  (Pavel Stehule <pavel.stehule@gmail.com>)
	Re: large INSERT leads to "invalid memory alloc"  (Tom Lane <tgl@sss.pgh.pa.us>)
List	pgsql-general
On Sun, Nov 25, 2012 at 06:57:22PM +0100, Pavel Stehule wrote:
> > | INSERT INTO a (id, fkid, displayorder, name, description, internal, mimetype, mimedata, filesize) VALUES (73,6,5,
> > | E'Seefeld.rar',E'Seefeld',0, E'application/octet-stream',decode('5261...0700', 'hex'),311484587);

> Attention - BYTEA is not BLOB, and although the physical limit is 1 GB,
> the real limit is significantly less - it depends on RAM. Seven years
> ago we found that the practical limit is about 20 MB.

Oops, that's not much at all. In the docs I had seen a 4-byte length
descriptor, so I expected a size limit of 4 GB and felt quite safe
with a maximum size of 300 MB.

> If you need more, use blobs instead, or you can divide the value into
> smaller blocks:
> http://www.fuzzy.cz/en/articles/storing-files-in-a-postgresql-database/
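
(For the archives, the two suggested routes would look roughly like the
untested sketch below. The table names a_lo and a_chunks are placeholders
of mine, not from the linked article, and lo_import() runs on the server
as superuser.)

  -- Route 1: a large object, referenced by OID instead of inline bytea.
  -- Large objects are read and written in pieces, so the bytea limit
  -- never applies. lo_import() is server-side and needs superuser;
  -- clients can use the lo_* client API or psql's \lo_import instead.
  CREATE TABLE a_lo (id integer PRIMARY KEY, name text, filedata oid);
  INSERT INTO a_lo VALUES (73, 'Seefeld.rar', lo_import('/tmp/Seefeld.rar'));

  -- Route 2: split the bytea into slices, one row each, so no single
  -- statement carries a huge literal.
  CREATE TABLE a_chunks (
      fkid    integer,
      chunkno integer,
      chunk   bytea,
      PRIMARY KEY (fkid, chunkno)
  );
  INSERT INTO a_chunks VALUES (73, 0, decode('5261...', 'hex'));
  INSERT INTO a_chunks VALUES (73, 1, decode('...0700', 'hex'));

(Reassembling the slices server-side still allocates the full value at
once, so route 2 mainly helps on the write path.)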

Yes, storing large data objects in the file system is advisable for
several reasons - we had the same discussion some time ago with MySQL as
well. But the decision was made to keep them in the database (and there
is only one object of this size anyway). Rewriting the framework is not
an option at the moment.

If I fail to migrate this into PostgreSQL, we'd rather cancel the
transition.
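
(A possible workaround that keeps the existing schema - an untested
sketch: load the row with an empty bytea, then append slices, so no
single statement carries the full ~600 MB hex literal. Each UPDATE
rewrites the whole value, so it is slow, and the final 311 MB value
must still fit under the 1 GB bytea limit.)

  INSERT INTO a (id, fkid, displayorder, name, description, internal,
                 mimetype, mimedata, filesize)
  VALUES (73, 6, 5, E'Seefeld.rar', E'Seefeld', 0,
          E'application/octet-stream', ''::bytea, 311484587);

  -- repeated once per slice of the hex dump:
  UPDATE a SET mimedata = mimedata || decode('5261...', 'hex') WHERE id = 73;
  UPDATE a SET mimedata = mimedata || decode('...0700', 'hex') WHERE id = 73;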

Stefan

