Inserts in 'big' table slowing down the database

From: Stefan Keller
Subject: Inserts in 'big' table slowing down the database
Date:
Msg-id: CAFcOn2_W6v_vqwomCf6DEtXk=N8iB5WU-7XEdcYgX-VFUE32+Q@mail.gmail.com
Responses: Re: Inserts in 'big' table slowing down the database  (Ivan Voras <ivoras@freebsd.org>)
List: pgsql-performance
Hi,

I'm having performance issues with a simple table containing 'Nodes'
(points) from OpenStreetMap:

  CREATE TABLE nodes (
      id bigint PRIMARY KEY,
      user_name text NOT NULL,
      tstamp timestamp without time zone NOT NULL,
      geom GEOMETRY(POINT, 4326)
  );
  CREATE INDEX idx_nodes_geom ON nodes USING gist (geom);

The number of rows is growing steadily and will soon reach one billion
(1'000'000'000), hence the bigint id.
Now the hourly inserts (plus updates and deletes) are constantly slowing
down the database (PostgreSQL 9.1).
Before I look at non-durable settings [1], I'd like to know what options
I have for tuning it while keeping the database productive: cluster on an
index? partition the table? use tablespaces? reduce the physical block size?
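
For reference, a minimal sketch of what "partition the table" could look
like here: PostgreSQL 9.1 has no declarative partitioning, so it is done
with table inheritance plus a routing trigger. The child-table names, the
id boundary and the trigger name below are illustrative only, and each
child needs its own indexes and constraints since they are not inherited.

  -- sketch only: inheritance-based partitioning by id range
  CREATE TABLE nodes_p0 (CHECK (id <  500000000)) INHERITS (nodes);
  CREATE TABLE nodes_p1 (CHECK (id >= 500000000)) INHERITS (nodes);

  -- indexes must be created per child table
  CREATE INDEX idx_nodes_p0_geom ON nodes_p0 USING gist (geom);
  CREATE INDEX idx_nodes_p1_geom ON nodes_p1 USING gist (geom);

  -- route inserts on the parent to the matching child
  CREATE OR REPLACE FUNCTION nodes_insert_trigger() RETURNS trigger AS $$
  BEGIN
      IF NEW.id < 500000000 THEN
          INSERT INTO nodes_p0 VALUES (NEW.*);
      ELSE
          INSERT INTO nodes_p1 VALUES (NEW.*);
      END IF;
      RETURN NULL;  -- keep the row out of the parent table itself
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER nodes_insert_redirect
      BEFORE INSERT ON nodes
      FOR EACH ROW EXECUTE PROCEDURE nodes_insert_trigger();

With constraint_exclusion = partition (the 9.1 default), queries that
filter on id can then skip the irrelevant child tables.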

Stefan

[1] http://www.postgresql.org/docs/9.1/static/non-durability.html
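
As a hedged sketch of the non-durable settings referenced in [1], applied
only to the session doing the hourly load (fsync and full_page_writes can
only be changed in postgresql.conf and trade crash safety for speed, so
they are left out here):

  -- sketch: relax durability for the bulk-load session only
  SET synchronous_commit TO off;  -- don't wait for WAL flush at each commit

  BEGIN;
  -- hourly batch of INSERT / UPDATE / DELETE statements goes here
  COMMIT;                         -- one transaction amortizes commit overhead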

