Discussion: Pushing PostgreSQL to the Limit (urgent!)

Pushing PostgreSQL to the Limit (urgent!)

From
Paulo Henrique Baptista de Oliveira
Date:
    Hi all,
    I read the PostgreSQL limits at postgresql.org:

   Maximum size for a database          unlimited (60GB databases exist)
   Maximum size for a table             64 TB on all operating systems
   Maximum size for a row               unlimited in 7.1 and later
   Maximum size for a field             1GB in 7.1 and later
   Maximum number of rows in a table    unlimited
   Maximum number of columns in a table 1600
   Maximum number of indexes on a table unlimited

    I have an app that uses a database with 60 tables, with a mean of 5 fields per table.
    It will insert 30 million records per month (about 1 million per day), which comes to roughly 400 million
records in a year.
    Can PostgreSQL support this? On what machine?
    TIA,        Paulo Henrique.


--
Paulo Henrique B de Oliveira
Operations Manager - Linux Solutions - http://www.linuxsolutions.com.br
The largest Linux content in Portuguese - OLinux - http://www.olinux.com.br
(21) 2526-7262 ext. 31

memory

From
"David Blood"
Date:
Does anyone use Postgres on hardware with more than 4 GB of RAM?  Can you
use more memory than that with FreeBSD?

David Blood
Matraex, Inc






Re: Pushing PostgreSQL to the Limit (urgent!)

From
Chris Albertson
Date:
--- Paulo Henrique Baptista de Oliveira
<baptista@linuxsolutions.com.br> wrote:

>     I will put it to insert 30 M (millions) registers by month (or 1
> Million by day) in a year is about 400 Millions registers.
>     Can pgsql support this? In What Machine?

I have a similar application.  I am storing astronomical data
from a set of automated cameras.  The data just floods in
forever.  I can see a billion rows in the future.
I find that I _can_ keep up using only modest hardware IF I use
"COPY" and not "INSERT" to input the data.  "COPY" is much, much
faster.  Also, indexes help SELECT speed but hurt COPY/INSERT speed,
so you need to balance.  One million rows per 24-hour period is not
too much for a "standard" PC, BUT if you also want to do many SELECTs
or JOINs at the same time then you will need more, so the hardware
depends on what _else_ you are doing.
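
A minimal sketch of the COPY-vs-INSERT difference (the table and the
file path here are hypothetical, just for illustration):

    -- A hypothetical table for incoming camera readings.
    CREATE TABLE readings (
        camera_id   integer,
        observed_at timestamp,
        magnitude   real
    );

    -- Slow path: one INSERT statement per row.
    INSERT INTO readings VALUES (1, '2002-07-15 00:00:01', 14.2);

    -- Fast path: bulk-load a whole tab-separated file in one COPY.
    COPY readings FROM '/tmp/readings.tsv';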

=====
Chris Albertson
  Home:   310-376-1029  chrisalbertson90278@yahoo.com
  Cell:   310-990-7550
  Office: 310-336-5189  Christopher.J.Albertson@aero.org


Re: Pushing PostgreSQL to the Limit (urgent!)

From
Curt Sampson
Date:
> --- Paulo Henrique Baptista de Oliveira
> <baptista@linuxsolutions.com.br> wrote:
>
> >     I will put it to insert 30 M (millions) registers by month (or 1
> > Million by day) in a year is about 400 Millions registers.
> >     Can pgsql support this? In What Machine?

Yes. A reasonably powerful PC with at least two nice fast IDE drives
should do the trick. I recommend you buy such a machine, set up
postgres, and start experimenting. It will probably take a couple
of weeks of work to figure out how to make your application run
efficiently.

On Mon, 15 Jul 2002, Chris Albertson wrote:
>
> I have a similar application.  I am storing astronomical data
> from a set of automated cameras.  The data just floods in
> forever.  I can see a billion rows in the future.
> I find that I _can_ keep up using only modest hardware IF I use
> "COPY" and not "INSERT" to input the data.  "COPY" is much, much
> faster.  Also, indexes help SELECT speed but hurt COPY/INSERT speed,
> so you need to balance.

Right. You may find it worthwhile to drop the indexes, import, and
rebuild them, instead of importing with the indexes in place, if you're
not doing queries at the same time. Or maybe partial indexes could help....
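
Roughly, the drop/load/rebuild cycle and a partial index look like this
(the index names, table, and date cutoff below are made up for
illustration):

    -- Load without the index, then rebuild it afterwards.
    DROP INDEX readings_observed_at_idx;
    COPY readings FROM '/tmp/readings.tsv';
    CREATE INDEX readings_observed_at_idx ON readings (observed_at);

    -- Or use a partial index that only covers the rows queried most
    -- often, so there is less index to maintain while loading.
    CREATE INDEX readings_recent_idx ON readings (observed_at)
        WHERE observed_at > '2002-07-01';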

cjs
--
Curt Sampson  <cjs@cynic.net>   +81 90 7737 2974   http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC


Re: memory

From
Andrew Sullivan
Date:
On Mon, Jul 15, 2002 at 01:09:59PM -0600, David Blood wrote:
> Does anyone use Postgres on hardware with more than 4 GB of RAM?  Can
> you use more memory than that with FreeBSD?

We use PostgreSQL on 16-gig Sun machines.  But they're 64 bit
processors, and therefore have no trouble seeing more than 4 Gig.

Any 32-bit x86-based system is going to have some kind of evil hack
lurking around the 4 Gig mark.  It's just an inherent limitation of a
32-bit address space.

A

--
----
Andrew Sullivan                               87 Mowat Avenue
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M6K 3E3
                                         +1 416 646 3304 x110


Re: Pushing PostgreSQL to the Limit (urgent!)

From
Chris Albertson
Date:
--- Curt Sampson <cjs@cynic.net> wrote:

> On Mon, 15 Jul 2002, Chris Albertson wrote:
> >
> > I have a similar application.  I am storing astronomical data
> > from a set of automated cameras.  The data just floods in
> > forever.  I can see a billion rows in the future.
> > I find that I _can_ keep up using only modest hardware IF I use
> > "COPY" and not "INSERT" to input the data.  "COPY" is much, much
> > faster.  Also, indexes help SELECT speed but hurt COPY/INSERT speed,
> > so you need to balance.
>
> Right. You may find it worthwhile to drop the indexes, import, and
> rebuild
> instead of import with the indexes in place, if you're not doing
> queries
> at the same time. Or maybe partial indexes could help....

I took that route on one application.  Not sure if it was a smart move.
The trouble is that, yes, the import goes faster, but then you have to
rebuild the dropped index, which can take a long time.  So the database
is not usable until the time to import PLUS the time to index.
Leaving the index in place means both are done at once.

Maybe the best thing would be to use a multi-CPU machine and do the
import using several processes.  In the Intel world, dual-CPU systems
are cost effective.  But I agree: experiment, but you _must_ use
full-size tables.  I made the mistake once of prototyping with small,
cacheable tables.  It did not scale to larger tables of around 1e7 rows.
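
A sketch of the several-processes idea (the pre-split files and paths
are assumptions, shown only to illustrate the approach):

    -- Split the input file into chunks beforehand (e.g. with split(1)),
    -- then run each COPY in its own psql session so the loads can run
    -- in parallel on separate CPUs.

    -- session 1:
    COPY readings FROM '/tmp/readings.part1.tsv';

    -- session 2:
    COPY readings FROM '/tmp/readings.part2.tsv';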

>
> cjs
> --
> Curt Sampson  <cjs@cynic.net>   +81 90 7737 2974
> http://www.netbsd.org
>     Don't you know, in this new Dark Age, we're all light.  --XTC


=====
Chris Albertson
  Home:   310-376-1029  chrisalbertson90278@yahoo.com
  Cell:   310-990-7550
  Office: 310-336-5189  Christopher.J.Albertson@aero.org


Re: memory

From
Curt Sampson
Date:
On Tue, 16 Jul 2002, Andrew Sullivan wrote:

> Any 32-bit x86-based system is going to have some kind of evil hack
> lurking around the 4 Gig mark.  It's just an inherent limitation of a
> 32-bit address space.

Well, it's going to be well under 4 gig for Unix; 2 gig or 3 gig is more
typical, since part of every process' address space is dedicated to
kernel use.

However, when it comes to postgres you're not likely to see any evil
hacks unless you want to dedicate more than a couple of gig to
postgres' shared memory buffers. The OS's cached disk blocks need not
all be mapped at the same time, and each process has a separate address
space, so the total memory used by cached disk blocks and all of the
processes in the system can be well over 4 GB without any problems at all.
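
For a sense of scale, the relevant knob is shared_buffers in
postgresql.conf, which in the 7.x releases is a count of 8 kB buffers
(the number below is purely illustrative, not a recommendation):

    -- In postgresql.conf:
    --   shared_buffers = 262144    # 262144 * 8 kB = roughly 2 GB
    -- Check the value the running server is using:
    SHOW shared_buffers;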

cjs
--
Curt Sampson  <cjs@cynic.net>   +81 90 7737 2974   http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC