Re: Are large objects well supported? Are they considered very stable to use?

From: Cary O'Brien
Subject: Re: Are large objects well supported? Are they considered very stable to use?
Date:
Msg-id: 199903300452.XAA05758@saltmine.radix.net
List: pgsql-hackers
I'd stay away from PostgreSQL large objects for now.

Two big problems:

1)  Minimum size is 16K
2)  They all end up in the same directory as your regular tables.

If you need to store a lot of files in the 10-20-30K range, I'd
suggest first trying the unix file system, but hash them into some
sort of subdirectory structure so that no single directory holds
too many entries.  256 per directory is nice, so give each file a
32-bit id, store the id and the key information in postgresql, and
when you need file 0x12345678, go to 12/34/56/12345678.txt.  You
could be smarter about the hashing so the bins fill evenly.  Either
way you can spread the load out over different file systems with
soft links.
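
For illustration, a minimal C sketch of that path scheme (the
function name and buffer handling are my own; only the
12/34/56/12345678.txt layout comes from the description above):

    #include <stdio.h>
    #include <stdint.h>

    /* Build the hashed path for a 32-bit file id:
     * 0x12345678 -> "12/34/56/12345678.txt".
     * Each directory level holds at most 256 entries. */
    void hashed_path(uint32_t id, char *buf, size_t len)
    {
        snprintf(buf, len, "%02x/%02x/%02x/%08x.txt",
                 (unsigned)((id >> 24) & 0xff),
                 (unsigned)((id >> 16) & 0xff),
                 (unsigned)((id >>  8) & 0xff),
                 (unsigned)id);
    }

    int main(void)
    {
        char path[64];
        hashed_path(0x12345678u, path, sizeof path);
        puts(path);   /* prints 12/34/56/12345678.txt */
        return 0;
    }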

If space is at a premium and your files are compressible, you can
do what we did on one project: batch the files up into batches of,
say, about 32K (i.e. keep adding files until the aggregate gets over
32K), store start and end offsets for each file in the database, and
gzip each batch.  gzip -d -c can tear through whatever your 32K
compresses down to pretty quickly, and a little bit of C or perl can
discard the unwanted leading part of the stream pretty quickly too,
as in the sketch below.  You can store the batches themselves hashed
as described above.
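
A sketch of the read side in C (the popen pipeline and the function
name are my choices, not something from the project described; start
and len = end - start come from the offsets stored in the database):

    #include <stdio.h>

    /* Write one member file of a gzipped batch to stdout.
     * start/len are byte offsets into the *decompressed* stream,
     * fetched from the database beforehand. */
    int extract_file(const char *batch_path, long start, long len)
    {
        char cmd[1024];
        FILE *p;
        int c;

        snprintf(cmd, sizeof cmd, "gzip -d -c '%s'", batch_path);
        if ((p = popen(cmd, "r")) == NULL)
            return -1;
        while (start-- > 0 && fgetc(p) != EOF)   /* skip leading part */
            ;
        while (len-- > 0 && (c = fgetc(p)) != EOF)
            putchar(c);
        return pclose(p);
    }

Called with a batch path from the hashing scheme above and the
offsets from the database row, this pipes just the wanted slice
to stdout.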

Have fun,
Drop me a line if I can help.
-- cary
cobrien@radix.net
