Re: performance

From: Chris Albertson
Subject: Re: performance
Date:
Msg-id: 38EADCF1.FFD8F30F@jps.net
In reply to: performance  (Joe Conway <jconway2@home.com>)
List: pgsql-admin
I'm working with some large tables too.  Around 10x your size and due to become
maybe 20x larger.  The good news is that Linux will use all the "extra" RAM it
has for a disk cache; you don't have to do anything.  PostgreSQL has its
own cache too.  Use the "-B" option to make this buffer cache larger.  I use
-B 10000.  Also use the "-F" option to turn off fsync, and buy a UPS for
the computer.  The biggest performance boost I got was when I discovered that the
COPY command is an order of magnitude faster than INSERT.  Experiment with
indexes to speed up queries, and experiment with the queries themselves:
there are usually several ways to write a query, and one may be faster.
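
For what it's worth, here is a rough sketch of what that looks like on my
setup (the data directory, table name, file name, and column below are just
examples, adjust them for your installation):

    # start the postmaster with a larger buffer cache, and pass -F
    # through to the backends to disable fsync
    postmaster -B 10000 -o '-F' -D /usr/local/pgsql/data &

    -- bulk-load with COPY instead of row-by-row INSERTs (in psql);
    -- the default field delimiter is a tab
    COPY mytable FROM '/tmp/mytable.dat';

    -- add an index on the column you query on, then check the plan
    CREATE INDEX mytable_col_idx ON mytable (some_column);
    EXPLAIN SELECT * FROM mytable WHERE some_column = 42;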

The "top" display is a big help while tunning your system.  Your goal is
to get the CPU(s) to near 100% utilization.  If there is much idle CPU time
it means you are I/O bound and could use more RAM or a bigger -B value.

Joe Conway wrote:
>
> Hello,
>
> I'm currently working with a development database, PostgreSQL 6.5.2 on RedHat 6.1 Linux. There is one fairly large
> table (currently ~1.3 million rows) which will continue to grow at about 500k rows per week (I'm considering various
> options to periodically archive or reduce the collected data). Is there anything I can do to cache some or all of this
> table in memory in order to speed queries against it? The physical file is about 130 MB. The server is a dual Pentium
> Pro 200 with 512 MB of RAM.
>
> Any suggestions would be appreciated.
>
> Joe Conway
>
> p.s. I tried to search the archives, but it did not return any results with even the simplest of searches.

--
   --Chris Albertson             home: chrisja@jps.net
     Redondo Beach, California   work: calbertson@logicon.com
