Read performance on Large Table

Hello Admin,

I am looking to build a database to log activities on my system.

Currently I have the following table:

CREATE TABLE "table"
(
  attr1 bigint,
  attr2 text,
  attr3 timestamp without time zone,
  attr4 integer,
  attr5 character varying(50),
  attr6 character varying(50),
  attr7 character varying(50),
  attr8 character varying(50),
  attr9 character varying(50),
  attr10 character varying(50)
);

Over 3 months the table has grown to 600+ million rows.

We mainly do inserts; we almost never do updates or deletes.

However, the table is not very useful as it stands, because any select we run takes a very long time (hours) to complete.
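For reference, this is how I check the row count without a full scan (just a sketch; pg_class.reltuples is only the planner's estimate, maintained by VACUUM/ANALYZE, not an exact count):

SELECT reltuples::bigint AS approx_rows,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relname = 'table';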


Question 1: Is slowness to be expected with such a big table?
Question 2: Is there anything I am not doing that I should be doing to get better performance?

I am thinking about 2 solutions, but I wanted to ask more experienced people before implementing them (a rough sketch of both follows below):

1: Break the table into multiple small tables, one for each week.
2: Create an index on the timestamp column, since it is the one we use the most in queries.
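
Roughly what I have in mind, as a sketch only (the index and child-table names are placeholders I made up, and the week boundaries are just an example):

-- Option 2: index the timestamp column (attr3) that most queries filter on.
CREATE INDEX idx_table_attr3 ON "table" (attr3);

-- Option 1: one child table per week via inheritance, with a CHECK
-- constraint describing its date range so the planner can skip weeks
-- outside a query's range (with constraint_exclusion = partition).
CREATE TABLE table_2015_w21
(
  CHECK (attr3 >= '2015-05-18' AND attr3 < '2015-05-25')
) INHERITS ("table");

CREATE INDEX idx_table_2015_w21_attr3 ON table_2015_w21 (attr3);

My understanding is that inserts would then have to target the right weekly child, either directly from the application or via a trigger on the parent.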

Thank you in advance for your help,

--Kido K.


Note:
 explain analyse select count(*) from "table" where attr5='value' and attr3 > '2015-05-19 00:00:00' and attr3 < '2015-05-21 00:00:00';

It has been running for 30 minutes and counting...
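
For what it is worth, the index I would try first for that exact query shape is a composite one on the two filtered columns (the name is a placeholder, and I am assuming most of our queries filter on attr5 plus a time range):

-- Equality column first, range column second, so the count over a
-- two-day window can be answered from a narrow index range instead of
-- scanning all 600M+ rows.
CREATE INDEX idx_table_attr5_attr3 ON "table" (attr5, attr3);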
