Mark Kirkwood <markir@slingshot.co.nz> writes:
> I would appreciate any suggestions on how to plan for this growth.
> - Clearly hardware will need to be looked at (can we cope if we keep
> using Intel-based platforms?)
> - Also software: are we okay using PostgreSQL for a 300G database (is
> 7.2 aimed at an undertaking of this size?), and how about the OS (can
> we keep using Linux or FreeBSD?).
7.2 is aimed more at 24x7 operation; as far as size of database goes,
I wouldn't think it would be much better than 7.1. Probably the major
issue for you is that PG doesn't have any provision for spreading
a database across multiple filesystems --- so you will have to put the
whole DB on one honkin' big RAID array, or use an LVM layer to spread
the filesystem across multiple drives in software.
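For the LVM route, a minimal sketch of combining several drives into one big filesystem (the device names, volume sizes, and mount point here are placeholders; adjust for your actual hardware):

```shell
# Pool three physical drives into one volume group, then carve out a
# single large logical volume to hold the whole PGDATA directory.
# /dev/sdb1 etc. are hypothetical device names.
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
vgcreate pgvg /dev/sdb1 /dev/sdc1 /dev/sdd1
lvcreate -L 400G -n pgdata pgvg
mkfs -t ext2 /dev/pgvg/pgdata          # or your filesystem of choice
mount /dev/pgvg/pgdata /usr/local/pgsql/data
```

The tradeoff versus a hardware RAID array is that striping and redundancy are now the kernel's problem, so test the failure modes before trusting production data to it.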
I seem to recall having heard that Linux has some ~100GB restriction on
the size of a single filesystem, which could be a problem. That might
be obsolete information, but be sure to check max filesystem size for
whichever kernel you select.
> I am informed that the expected number of users is low, so that major
> challenges are big queries and the weekly loads. Most of the space used
> by the database will be in 1 very big and 1 big table.
PG's hard limit on the size of a single table is 2 billion (2^31) pages,
which is 16TB with the default 8K page size and 64TB if you compile with
BLCKSZ set to 32K. The nonstandard choice might be a good option for
this DB.
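The arithmetic behind those figures, with "2 billion" taken as 2^31 pages:

```shell
# bytes per table = pages * BLCKSZ; divide by 2^40 to get terabytes
echo $((2**31 * 8192  / 2**40))   # default 8K pages  -> 16
echo $((2**31 * 32768 / 2**40))   # BLCKSZ set to 32K -> 64
```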
The prospect of having to dump this database for backup seems a tad
daunting. What are you using for a backup device?
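If it comes to that, one common approach is to pipe the dump through compression and split it into device-sized chunks ("mydb" and the chunk size are placeholders):

```shell
# Compressed dump, split into pieces small enough for the backup medium
pg_dump mydb | gzip | split -b 2000m - mydb.dump.gz.

# To restore, reassemble the pieces and feed them back in:
# cat mydb.dump.gz.* | gunzip | psql mydb
```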
regards, tom lane