Realistic upper bounds on table size

From: A.M.
Subject: Realistic upper bounds on table size
Date:
Msg-id: D10A4775-613F-11D7-9333-0030657192DA@cmu.edu
Responses: Re: Realistic upper bounds on table size  (Robert Treat <xzilla@users.sourceforge.net>)
List: pgsql-admin
Sorry for the cross-posting, but I wasn't able to elicit a response on -general.

I'm trying to figure out what the realistic upper bounds on a PostgreSQL table
are, given the required use of indices and integer columns in a single table.
An astronomy institution I'm considering working for receives a monster amount
of telescope data from a government observatory. Each day, they download
millions of rows of data (including position in the sky, infrared readings,
etc.) in CSV format. Most of the columns are floats and integers. I would like
to offer them an improvement over their old system.
I would like to know how PostgreSQL does under such extreme circumstances. For
example, I may load the entire multi-million-row CSV file into a table and then
eliminate some odd million rows they are not interested in. Would a vacuum at
this point be prohibitively expensive? If I add some odd millions of rows to a
table every day, can I expect the necessary indices to keep up? In other words,
will PostgreSQL be able to keep up with their simple and infrequent selects on
monster amounts of data (potentially 15 GB/day moving in and out, with the
database growing at ~5 GB/day [millions of rows] in big blocks all at once),
assuming that they have top-of-the-line equipment for this sort of thing
(storage, memory, processors, etc.)? Is anyone else using PostgreSQL on
heavy-duty astronomy data? Thanks for any info.
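
To make the daily cycle concrete, here is a rough sketch of what I have in
mind; the table name, column names, and file path below are only placeholders,
not their actual schema:

    -- hypothetical schema for the daily telescope dump
    CREATE TABLE observations (
        obs_time          timestamp,
        right_ascension   double precision,   -- position in the sky
        declination       double precision,
        infrared_reading  double precision
    );

    -- an index the infrequent selects would rely on;
    -- it has to keep up with the daily bulk inserts
    CREATE INDEX observations_obs_time_idx ON observations (obs_time);

    -- daily: bulk-load the multi-million-row CSV file
    -- (simplified; real CSV quoting would need more care)
    COPY observations FROM '/data/telescope/today.csv' WITH DELIMITER ',';

    -- eliminate the odd million rows they are not interested in
    DELETE FROM observations WHERE infrared_reading IS NULL;

    -- the step I'm worried about: reclaiming the dead rows
    -- left by the delete and refreshing planner statistics
    VACUUM ANALYZE observations;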

 ><><><><><><><><><
AgentM
agentm@cmu.edu

