perf problem with huge table

Hi all,

I am trying to move my app from MS SQL to PostgreSQL, but I need a bit of help :)


On MS SQL, I had certain tables that were made as follows (sorry, pseudo code):

contab_y
   date
   amt
   uid


contab_ym
  date
  amt
  uid

contab_ymd
  date
  amt
  uid


and so on..

These tables were used to "solidify" (aggregate; sorry for my terrible English) the data; a runnable SQL sketch follows the examples below.

So basically, I get:

contab_y
  date = 2010
  amt  = 100
  uid  = 1

contab_ym
  date = 2010-01
  amt  = 10
  uid  = 1
  ----
  date = 2010-02
  amt  = 90
  uid  = 1

contab_ymd
  date = 2010-01-01
  amt  = 1
  uid  = 1
  ----
  and so on
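
In real PostgreSQL syntax the layout would be something like this (the column types are only my guess, since the original is pseudo code; the INSERT at the end shows how the monthly rollup is precalculated from the daily table):

CREATE TABLE contab_ymd (        -- one row per user per day
    date date    NOT NULL,
    amt  numeric NOT NULL,
    uid  integer NOT NULL,
    PRIMARY KEY (uid, date)
);

CREATE TABLE contab_ym (         -- one row per user per month
    date date    NOT NULL,       -- first day of the month
    amt  numeric NOT NULL,
    uid  integer NOT NULL,
    PRIMARY KEY (uid, date)
);

CREATE TABLE contab_y (          -- one row per user per year
    date date    NOT NULL,       -- first day of the year
    amt  numeric NOT NULL,
    uid  integer NOT NULL,
    PRIMARY KEY (uid, date)
);

-- precalculate the monthly sums from the daily table
INSERT INTO contab_ym (date, amt, uid)
SELECT date_trunc('month', date)::date, sum(amt), uid
FROM   contab_ymd
GROUP  BY date_trunc('month', date), uid;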


That way, when I need to run a query over a long range (e.g. one year), I just take the rows from contab_y; if I need a query for a couple of days, I can go to contab_ymd; and if I need data for some other timeframe, I can do some intersection between the different tables using some huge (but fast) queries.
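
For example, to sum amt for one uid from 2010-03-28 through 2012-12-31 (made-up dates, assuming the layout sketched above), the range can be stitched together like this:

SELECT sum(amt) FROM (
    -- whole years 2011 and 2012 from the yearly rollup
    SELECT amt FROM contab_y
    WHERE  uid = 1 AND date BETWEEN '2011-01-01' AND '2012-01-01'
    UNION ALL
    -- whole months April..December 2010 from the monthly rollup
    SELECT amt FROM contab_ym
    WHERE  uid = 1 AND date BETWEEN '2010-04-01' AND '2010-12-01'
    UNION ALL
    -- leftover days 28..31 March 2010 from the daily rollup
    SELECT amt FROM contab_ymd
    WHERE  uid = 1 AND date BETWEEN '2010-03-28' AND '2010-03-31'
) AS pieces;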


Now, the problem is that this design is hard to maintain, and the tables are difficult to check.

What I have tried is a "normal" approach, using just one table that contains all the data, plus some proper
indexing.
The issue is that this table can easily contain 100M rows :)
That's why the other guys did all that work to speed up queries, splitting the data across different tables and
precalculating the sums.
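
For reference, the "normal" layout I tried is basically this (again, the types and the index name are just my guesses):

CREATE TABLE contab (
    date date    NOT NULL,
    amt  numeric NOT NULL,
    uid  integer NOT NULL
);

-- composite index so per-user range scans stay cheap
CREATE INDEX contab_uid_date_idx ON contab (uid, date);

-- a one-year query then reads one contiguous index range
SELECT sum(amt)
FROM   contab
WHERE  uid  = 1
AND    date >= '2010-01-01'
AND    date <  '2011-01-01';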


I am here to ask the PostgreSQL experts for advice:
what do you think I can do to better manage this situation?
Are there other approaches I could look at? Maybe some documentation, or some technique that I don't know?
Any advice is really appreciated!
