optimizing db for small table with tons of updates

From: Kenji Morishige
Subject: optimizing db for small table with tons of updates
Date:
Msg-id: 20060403182403.GA12257@juniper.net
Replies: Re: optimizing db for small table with tons of updates  (Josh Berkus <josh@agliodbs.com>)
         Re: optimizing db for small table with tons of updates  ("Rajesh Kumar Mallah" <mallah.rajesh@gmail.com>)
         Re: optimizing db for small table with tons of updates  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance
I am using PostgreSQL as the central database for a variety of tools in our
testing infrastructure. We have web tools and CLI tools that need access to
machine configuration and other state for automation.  We have one tool that
uses a table that looks like this:

systest_live=# \d cuty
                Table "public.cuty"
   Column    |           Type           | Modifiers
-------------+--------------------------+-----------
 resource_id | integer                  | not null
 lock_start  | timestamp with time zone |
 lock_by     | character varying(12)    |
 frozen      | timestamp with time zone |
 freeze_end  | timestamp with time zone |
 freeze_by   | character varying(12)    |
 state       | character varying(15)    |
Indexes:
    "cuty_pkey" PRIMARY KEY, btree (resource_id)
    "cuty_main_idx" btree (resource_id, lock_start)
Foreign-key constraints:
    "cuty_resource_id_fkey" FOREIGN KEY (resource_id) REFERENCES resource(resource_id) ON UPDATE CASCADE ON DELETE
CASCADE

Various users run a tool that updates this table to determine whether a
particular resource is available or not.  Over the course of a few days, this
table can be updated up to 200,000 times.  There are only about 3,500 records
in this table, but the update and select queries against it start to slow down
considerably after a few days.  Ideally, this table doesn't even need to be
stored and written to the filesystem.  After I run a vacuum against this
table, overall database performance seems to recover.  When the database is
running with a recent vacuum, the average server load is about 0.40, but after
this table has been updated 200,000+ times, the load can go up to 5.0.
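For reference, this is roughly the manual maintenance I run to confirm and
clean up the problem (just a sketch of what I do by hand; the page-count
interpretation in the comment is my own reasoning, not something I've
verified with the list):

-- Reclaim the dead row versions left behind by the constant updates; the
-- VERBOSE output shows how many were removed and how many pages the table
-- and its indexes occupy.  Far more pages than ~3,500 live rows should need
-- suggests the heap and indexes are badly bloated.
VACUUM VERBOSE ANALYZE cuty;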

Here is a typical update query:

2006-04-03 10:53:39 PDT testtool systest_live kyoto.englab.juniper.net(4888) LOG:  duration: 2263.741 ms  statement:
        UPDATE cuty SET
               lock_start = NOW(),
               lock_by = 'tlim'
        WHERE resource_id='2262' AND (lock_start IS NULL OR lock_start < (NOW() - interval '3600 second'))
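In case it helps with diagnosis, this is the kind of check I can run to see
where those 2+ second updates spend their time (a sketch only, wrapped in a
transaction and rolled back so the test update doesn't stick):

BEGIN;
EXPLAIN ANALYZE
UPDATE cuty SET
       lock_start = NOW(),
       lock_by = 'tlim'
WHERE resource_id='2262' AND (lock_start IS NULL OR lock_start < (NOW() - interval '3600 second'));
ROLLBACK;
-- If the plan is a simple index lookup on cuty_pkey yet the actual time is
-- still in the seconds, the time is presumably going into stepping over dead
-- row versions rather than into useful work.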

We used to use MySQL for these tools and we never had any issues, but I believe
the overhead here comes from the transactional (MVCC) nature of Postgres.
Are there any table options that enable the table contents to be kept in RAM
only, or to have delayed writes, for this particular table?
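The interim workaround I've been leaning toward (just a sketch, nothing we
have settled on) is to vacuum this one hot table very aggressively so the dead
row versions never get a chance to pile up:

-- run every few minutes from cron, e.g. via: psql -d systest_live -c "..."
VACUUM ANALYZE cuty;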

Thanks in advance,
Kenji
