large dataset with write vs read clients

From: Aaron Turner
Subject: large dataset with write vs read clients
Date:
Msg-id: AANLkTimNJc=0ffgRO_gO6Vsq06=aqBYXZouhbv0kBe-i@mail.gmail.com
Responses: Re: large dataset with write vs read clients  (Dan Harris <fbsd@drivefaster.net>)
Re: large dataset with write vs read clients  (Stephen Frost <sfrost@snowman.net>)
Re: large dataset with write vs read clients  (Stephen Frost <sfrost@snowman.net>)
Re: large dataset with write vs read clients  (Greg Smith <greg@2ndquadrant.com>)
List: pgsql-performance
Currently PG 8.1.3.  See attached for my postgresql.conf.  Server is
FreeBSD 6.2 with a fast 3TB storage array and only 2GB of RAM.

We're running RTG, which is like MRTG, Cricket, etc.: it basically
queries network devices via SNMP and throws the stats into the DB for
making pretty bandwidth graphs.  We've got hundreds of devices, with
10K+ ports and probably 100Ks of stats being queried every 5 minutes.
In order to do all that work, the back-end SNMP querier is
multi-threaded and opens a PG connection per thread.  We're running 30
threads.  This is basically all INSERTs, but it only ends up being
about 32,000 rows per 5 minutes.
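
For a sense of the write pattern, here's a rough sketch of the kind of
statement each poller thread issues; the table and column names are
made up for illustration, not the actual RTG schema:

    -- one row per polled counter, per port, every ~5 minutes
    INSERT INTO ifstats_0123 (dtime, counter, rate)
    VALUES (now(), 123456789012, 4096);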

The graphing front-end CGI is all SELECTs.  There are 12k tables
today, and new tables are created each month.  The number of rows per
table is 100-700K, with most in the 600-700K range.  190GB of data so
far.  The good news is that queries have no joins and hit only a few
tables at a time.
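
To illustrate (names and dates are hypothetical again), the CGI just
runs simple range scans like:

    -- no joins, one month-table at a time
    SELECT dtime, rate
    FROM ifstats_0123
    WHERE dtime BETWEEN '2010-10-01' AND '2010-10-08'
    ORDER BY dtime;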

Basically, each connection is taking about 100MB resident.  As we need
to increase the number of threads to be able to query all the devices
in the 5-minute window, we're running out of memory.  There aren't
that many CGI connections at any one time, and while query performance
isn't great, it's honestly surprisingly good all things considered.
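
The knobs that mostly drive that per-backend footprint are the usual
suspects; checking them from psql looks like this (the real values are
in the attached postgresql.conf, so these are just the parameter
names, not my settings):

    SHOW shared_buffers;   -- shared memory, usually counted in each backend's RSS
    SHOW work_mem;         -- allocated per sort/hash step, per query
    SHOW max_connections;  -- caps how many backends can run at once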

Honestly, I'm not really looking to improve PG's performance, although
I wouldn't complain; I just want to better manage memory/hardware.  I
assume I can't start up two instances of PG pointing at the same
files, one read-only and one read-write with different memory
profiles, so I assume my only real option is to throw more RAM at it.
I don't have $$$ for another array/server for a master/slave right
now.  Or perhaps tweaking my .conf file?  Are newer PG versions more
memory efficient?

Thanks,
Aaron

--
Aaron Turner
http://synfin.net/         Twitter: @synfinatic
http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows
Those who would give up essential Liberty, to purchase a little temporary
Safety, deserve neither Liberty nor Safety.
    -- Benjamin Franklin
"carpe diem quam minimum credula postero"
