large object write performance

From: Bram Van Steenlandt
Subject: large object write performance
Date:
Msg-id: 561634BD.2000309@diomedia.be
Responses: Re: large object write performance  ("Graeme B. Bell" <graeme.bell@nibio.no>)
Re: large object write performance  ("Graeme B. Bell" <graeme.bell@nibio.no>)
List: pgsql-performance
Hi,

I use PostgreSQL often, but I'm not very familiar with how it works internally.

I've made a small script to back up files from different computers to a
PostgreSQL database, sort of a versioned, networked backup system.
It works with large objects (an oid in a table, linked to a large object),
which I import using psycopg.
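Roughly, the import logic looks like the sketch below (simplified; the
connection string, the "backups" table and its columns, and the 1 MB chunk
size are just for illustration, not the exact script):

import psycopg2

conn = psycopg2.connect("dbname=backupdb")  # hypothetical DSN

def import_file(conn, path):
    # psycopg2 requires large-object operations inside a transaction;
    # the connection block commits on success and rolls back on error.
    with conn:
        lobj = conn.lobject(0, "wb")  # oid 0: let the server assign a new OID
        with open(path, "rb") as f:
            while True:
                chunk = f.read(1024 * 1024)  # 1 MB chunks (assumed size)
                if not chunk:
                    break
                lobj.write(chunk)
        lobj.close()
        with conn.cursor() as cur:
            # table and column names are made up for this sketch
            cur.execute(
                "INSERT INTO backups (path, lo_oid) VALUES (%s, %s)",
                (path, lobj.oid),
            )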

It works well, but it is slow.

The database (9.2.9) on the server (FreeBSD 10) runs on a ZFS mirror.
If I copy a file to the mirror using scp, I get 37 MB/s.
My script achieves something like 7 or 8 MB/s on large (100 MB+) files.

I've never used PostgreSQL for something like this; is there something I
can do to speed things up?
It's not a huge problem, as it's only the initial run that takes a while
(after that, most files are already in the db).
Still, it would be nice if it were a little faster.
The CPU is mostly idle on the server; the filesystem is running at 100%.
This is a separate PostgreSQL server (I've used FreeBSD profiles to run
two PostgreSQL servers), so I can change this setup so it works better
for this application.

I've read different suggestions online, but I'm unsure which is best;
they all talk about files that are only a few KB, not 100 MB or bigger.

P.S. English is not my native language.

Thanks,
Bram

