Re: Postgresql system requirements to support large

From: scott.marlowe
Subject: Re: Postgresql system requirements to support large
Date:
Msg-id: Pine.LNX.4.33.0404201725330.20246-100000@css120.ihs.com
In reply to: Re: Postgresql system requirements to support large  (Bob.Henkel@hartfordlife.com)
List: pgsql-general
On Tue, 20 Apr 2004 Bob.Henkel@hartfordlife.com wrote:

> I just want a general idea of what Postgresql can handle. I know the gurus
> will say it depends on many different things, but in general what can this
> bad boy handle?

A lot.  There are terabyte databases running on PostgreSQL.

> 50 GB to 100 GB is by no means small.  But how does Postgresql 7.4 handle a
> database of 900 GB, 1 terabyte, or greater?
> How does Postgresql handle a table with 100 columns of integers and
> varchar2(400) data types at 1 million rows, 10 million, 100 million, 500
> million, or more than 1 billion, joined to a small lookup table of 50,000
> rows with both tables indexed properly?  Can this database handle enterprise
> quantities of data, or is it geared towards small to medium data?

Databases built around 100-column tables of integers and varchar2(400) are
generally not normalized well enough to deserve the moniker "enterprise
class".

Given a properly normalized and indexed schema, PostgreSQL is capable of
handling most loads quite well.
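
As a rough sketch of what "properly normalized and indexed" means for the
scenario quoted above (all table and column names here are invented for
illustration, and PostgreSQL uses varchar rather than Oracle's varchar2),
the join column on the big table is the part that needs an index:

  -- Hypothetical schema: a small lookup table and a large fact table.
  CREATE TABLE lookup (
      lookup_id   integer PRIMARY KEY,      -- roughly 50,000 rows
      description varchar(400) NOT NULL
  );

  CREATE TABLE measurements (
      measurement_id bigserial PRIMARY KEY,
      lookup_id      integer NOT NULL REFERENCES lookup (lookup_id),
      reading        integer NOT NULL,
      note           varchar(400)
  );

  -- Index the join column on the big table so the planner can pick an
  -- index, hash, or merge join instead of scanning blindly.
  CREATE INDEX measurements_lookup_id_idx ON measurements (lookup_id);

  -- The kind of big-table-to-lookup-table join the question describes.
  SELECT l.description, count(*)
    FROM measurements m
    JOIN lookup l ON l.lookup_id = m.lookup_id
   GROUP BY l.description;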

If you want guarantees, you'll get none.  It's your job to test it under the
load you will be putting on it and see if it works.
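
A minimal way to run that kind of test, assuming the hypothetical tables
sketched above and a release that has generate_series() (8.0 or later), is
to load a production-sized volume of synthetic rows and then look at the
plan and timing of the query you actually care about:

  -- Fill the lookup table, then the big table, with synthetic data.
  INSERT INTO lookup (lookup_id, description)
  SELECT i, 'code ' || i::text
    FROM generate_series(1, 50000) AS i;

  INSERT INTO measurements (lookup_id, reading, note)
  SELECT (random() * 49999)::integer + 1,
         (random() * 1000)::integer,
         'filler'
    FROM generate_series(1, 10000000);

  -- Refresh planner statistics, then time the real query.
  ANALYZE lookup;
  ANALYZE measurements;

  EXPLAIN ANALYZE
  SELECT l.description, count(*)
    FROM measurements m
    JOIN lookup l ON l.lookup_id = m.lookup_id
   GROUP BY l.description;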

However, the real advantage PostgreSQL has isn't that it handles poorly
designed databases well.  It's that if you land in a corner case that hasn't
been explored yet and hit a performance problem, you can talk directly to
the developers, get help and patches from them, and be an active part of
making PostgreSQL a better database, all while receiving better support than
most commercial products provide.

The biggest limiter for handling large data sets isn't going to be
PostgreSQL or Oracle, but your system hardware.  Running a 1 terabyte
database on a P100 with 32 MB of RAM on an IDE software RAID array with
write caching turned off is going to be a lot slower than the same database
on an 8-way Opteron with 64 GB of RAM and 4 battery-backed RAID controllers
with hundreds of hard drives under them.
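
Part of that is simply telling PostgreSQL how much of the hardware it is
allowed to use.  As a small example (parameter units and defaults vary
between releases, so treat the numbers as placeholders), the memory-related
settings can be inspected from psql, and the planner's hint about available
cache can even be changed per session:

  -- Current values; shared_buffers can only be changed in postgresql.conf
  -- and requires a server restart.
  SHOW shared_buffers;
  SHOW effective_cache_size;

  -- effective_cache_size is only a planner hint about OS cache size, so it
  -- can be raised per session for experimentation (the bare integer is
  -- interpreted as 8 kB pages on older releases).
  SET effective_cache_size = 100000;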

There are no internal design limitations that will prevent you from
handling large data sets though.  Only bottlenecks that haven't been found
and fixed yet.  :-)

