Re: Large objects performance

From: Ulrich Cech
Subject: Re: Large objects performance
Date:
Msg-id: 4629BCC7.1090900@cech-privat.de
In reply to: Large objects performance ("Alexandre Vasconcelos" <alex.vasconcelos@gmail.com>)
List: pgsql-performance
Hello Alexandre,

<We have an application subjected to sign documents and store them somewhere.>

I developed a relatively simple "file archive" with PostgreSQL (a web application with JSF for the user interface). The main structure is one table with some "key word" fields and 3 blob fields (because exactly 3 files belong to each record); a rough sketch of such a table is below. I have to deal with millions of files (95% are about 2-5 KB, 5% are greater than 1 MB).
The great advantage is that I don't have to "communicate" with the file system (try to open a directory with 300T files on a Windows system... it's horrible, even on the command line).
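
For illustration, a minimal sketch of such a table, assuming the three files are kept in bytea columns (the column and table names here are made up; the original mail does not show the actual schema, and PostgreSQL large objects referenced via oid columns would work similarly):

  -- One row per archived record; the three blob columns hold the
  -- three files that belong to that record.
  CREATE TABLE file_archive (
      id          serial PRIMARY KEY,
      keyword1    text,
      keyword2    text,
      created_at  timestamp DEFAULT now(),
      file_a      bytea,
      file_b      bytea,
      file_c      bytea
  );

  -- Index the key word fields that the web interface searches on.
  CREATE INDEX file_archive_keyword1_idx ON file_archive (keyword1);
  CREATE INDEX file_archive_keyword2_idx ON file_archive (keyword2);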

The database is now 12 GB, but a search through the web interface takes at most 5 seconds (most searches are faster). The one disadvantage is the backup (I use pg_dump once a week, which needs about 10 hours). For now this is acceptable for me, but I want to look at Slony or port everything to a Linux machine.
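
The weekly backup is just a plain pg_dump run, along these lines (the database name and backup path are made up for illustration; -Fc produces a compressed custom-format archive for pg_restore, and -b makes sure large objects are included):

  pg_dump -Fc -b -f /backup/filearchive_$(date +%Y%m%d).dump filearchive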

Ulrich
