Discussion: Postgres low end processing.


Postgres low end processing.

From
Stef
Date:
Hi everyone,

I've been trying to find out if some guidelines
exist, somewhere, describing how postgres
can possibly run on less than 8MB of RAM.
(Disk space not an issue).

The closest thread I could find in the list
archives is :
http://archives.postgresql.org/pgsql-general/2002-06/msg01343.php

Is it possible to have a stripped-down version of
postgres that will use an absolute minimal amount
of memory?

Maybe by switching off some features/options
at compile time, and/or configuration tweaks?
(Or anything else)

This will be on very low-end i386 architecture.
Performance penalties are expected and
will be accepted. I will need the
functionality of >= 7.3.4, at least.

Any help will be much appreciated.

Regards
Stef




Re: Postgres low end processing.

From
Tom Lane
Date:
Stef <svb@ucs.co.za> writes:
> I've been trying to find out if some guidelines
> exist, somewhere, describing how postgres
> can possibly run on less than 8MB of RAM.

Are you sure you want Postgres, and not something smaller?  BDB,
or SQL Lite, for example?

"Postgres is bloatware by design: it was built to house PhD theses."
-- J. Hellerstein (who ought to know)

But having said that ... given virtual memory and cramped configuration
settings, Postgres would certainly run in an 8M machine.  Maybe "crawl"
would be a more applicable verb than "run", but you could execute it.

            regards, tom lane

Re: Postgres low end processing.

From
Stef
Date:
On Fri, 03 Oct 2003 11:42:54 -0400
Tom Lane <tgl@sss.pgh.pa.us> wrote:

=> Are you sure you want Postgres, and not something smaller?  BDB,
=> or SQL Lite, for example?
I have considered various options, including BDB and SQL Lite, but
alas, it will have to be postgres if it's going to be a database. Otherwise
it will be back to the original idea of flat .idx files :(

=> "Postgres is bloatware by design: it was built to house PhD theses."
=> -- J. Hellerstein (who ought to know)
 :o)  Believe me, I've been amazed since I encountered postgres v6.3.2
in '98

=> But having said that ... given virtual memory and cramped configuration
=> settings, Postgres would certainly run in an 8M machine.  Maybe "crawl"
=> would be a more applicable verb than "run", but you could execute it.

Crawling is ok. Won't differ much from normal operation on a machine like that.
Any tips on how to achieve the most diminutive vmem and conf settings?
I tried to figure this out from the docs, and played around with
backend/storage, but I'm not really winning.

Regards
Stef


Re: Postgres low end processing.

From
Tom Lane
Date:
Stef <svb@ucs.co.za> writes:
> Crawling is ok. Won't differ much from normal operation on a machine
> like that.  Any tips on how to achieve the most diminutive vmem and
> conf settings?

The out-of-the-box settings are already pretty diminutive on current
releases :-(.  In 7.4 you'd likely want to knock back shared_buffers
and max_connections, and maybe the fsm settings if the database is going
to be tiny.
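
As a rough illustration of settings "knocked back" that far, a 7.3/7.4-era
postgresql.conf might contain something like the following (values are
illustrative, not tested recommendations; shared_buffers and
effective_cache_size are counted in 8KB pages, sort_mem and vacuum_mem in KB):

    max_connections = 8          # each backend needs a few MB of its own
    shared_buffers = 16          # 16 x 8KB = 128KB, about as low as the server allows
    sort_mem = 512               # per-sort working memory, in KB
    vacuum_mem = 1024            # memory used by VACUUM, in KB
    max_fsm_relations = 100      # shrink the free space map for a tiny database
    max_fsm_pages = 2000
    wal_buffers = 4              # 4 x 8KB of WAL buffers
    effective_cache_size = 64    # assume almost no OS cache on an 8MB box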

> I tried to figure this out from the docs, and played
> around with backend/storage , but I'm not really winning.

What exactly is failing?  And what's the platform, anyway?

            regards, tom lane

Re: Postgres low end processing.

From
Stef
Date:
On Fri, 03 Oct 2003 12:32:00 -0400
Tom Lane <tgl@sss.pgh.pa.us> wrote:

=> What exactly is failing?  And what's the platform, anyway?

Nothing is really failing atm, except the funds for better
hardware. JBOSS and some other servers need to be
run on these machines, along with linux, which will be
a minimal RH >= 7.2 with kernel 2.4.21
(Any better suggestions here?)

In this case, whatever is the least amount of memory
postgres can run on is what is needed. So this is still
a kind of feasibility study. Of course, it will still be thoroughly
tested, if it turns out to be possible. (Which I know it is, but not how.)

Regards
Stef


Re: Postgres low end processing.

From
Josh Berkus
Date:
Stef,

> I've been trying to find out if some guidelines
> exist, somewhere, describing how postgres
> can possibly run on less than 8MB of RAM.
> (Disk space not an issue).

I can tell you from experience that you will get some odd behaviour, and even
connection failures, when Postgres is forced into swap by lack of memory.
Also, you will run into trouble with the default Linux kernel 2.4 VM manager,
which allows applications to overcommit memory; either hack your own memory
manager, or designate swap space >= 200% of RAM.

Also, program your application to expect, and recover from, PostgreSQL
failures and connection failures.

> Is it possible to have a stripped-down version of
> postgres that will use an absolute minimal amount
> of memory?

I don't know that there is anything you can remove safely from the postgres
core that would save you memory.

> Maybe by switching off some features/options
> at compile time, and/or configuration tweaks?
> (Or anything else)

You're in luck; the default postgresql.conf file for 7.3 is actually configured
for a low-memory, slow machine setup (which most other people bitch about).
Here are a few other things you can do:

1. Make sure that the WAL files (pg_xlog) are on a separate disk from the
database files, either through mounting or symlinking.

2. Tweak the .conf file for low vacuum_mem (1024?), but vacuum very
frequently, like every 1-5 minutes.  Spend some time tuning your
max_fsm_pages to the ideal level so that you're not allocating any extra
memory to the FSM.  (A sample of these settings follows this list.)

3. If your concern is *average* CPU/RAM consumption, and not peak load
activity, increase wal_files and checkpoint_segments to do more efficient
batch processing of pending updates at the cost of some disk space.  If peak
load activity is a problem, don't do this.

4. Tune all of your queries carefully to avoid anything requiring a
RAM-intensive merge join or CPU-eating calculated expression hash join, or
similar computation-or-RAM-intensive operations.
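
As a rough, illustrative sketch (not a tested recommendation), points 2-4 could
translate into postgresql.conf entries along these lines on a 7.3-era install;
note that wal_files exists only in some 7.x releases, and the enable_* switches
are blunt instruments meant for experimenting with plans, not for leaving off
globally:

    vacuum_mem = 1024           # point 2: small VACUUM working memory, in KB
    max_fsm_pages = 2000        # point 2: tune to the database's actual needs
                                # (and run plain VACUUM every few minutes, e.g. from cron)
    checkpoint_segments = 6     # point 3: checkpoint less often, at the cost of more WAL disk space
    #enable_mergejoin = false   # point 4: uncomment only while experimenting;
    #enable_hashjoin = false    #          check the resulting plans with EXPLAIN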

--
Josh Berkus
Aglio Database Solutions
San Francisco

Re: Postgres low end processing.

From
Ron Johnson
Date:
On Fri, 2003-10-03 at 12:52, Stef wrote:
> On Fri, 03 Oct 2003 12:32:00 -0400
> Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> => What exactly is failing?  And what's the platform, anyway?
>
> Nothing is really failing atm, except the funds for better
> hardware. JBOSS and some other servers need to be
> run on these machines, along with linux, which will be
> a minimal RH >= 7.2 with kernel 2.4.21
> (Any better suggestions here?)
>
> In this case, whatever is the least amount of memory
> postgres can run on, is what is needed. So this is still
> a kind of feasibility study. Of course, it will still be thoroughly
> tested, if it turns out to be possible. (Which I know it is, but not how)

JBOSS, PostgreSQL & 2.4.21 all on a computer w/ 8MB RAM?  A 486 or
*very* low end Pentium?

It'll thrash (in the literal sense) the page files.  *No* work
will get done.

--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA

"...always eager to extend a friendly claw"


Re: Postgres low end processing.

From
Neil Conway
Date:
On Fri, 2003-10-03 at 14:08, Josh Berkus wrote:
> I can tell you from experience that you will get some odd behaviour, and even
> connection failures, when Postgres is forced into swap by lack of memory.

Why would you get a connection failure? And other than poor performance,
why would you get "odd behavior" due to a lack of physical memory?

-Neil



Re: Postgres low end processing.

From
"scott.marlowe"
Date:
On Fri, 3 Oct 2003, Ron Johnson wrote:

> On Fri, 2003-10-03 at 12:52, Stef wrote:
> > On Fri, 03 Oct 2003 12:32:00 -0400
> > Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >
> > => What exactly is failing?  And what's the platform, anyway?
> >
> > Nothing is really failing atm, except the funds for better
> > hardware. JBOSS and some other servers need to be
> > run on these machines, along with linux, which will be
> > a minimal RH >= 7.2 with kernel 2.4.21
> > (Any better suggestions here?)
> >
> > In this case, whatever is the least amount of memory
> > postgres can run on, is what is needed. So this is still
> > a kind of feasibility study. Of course, it will still be thoroughly
> > tested, if it turns out to be possible. (Which I know it is, but not how)
>
> JBOSS, PostgreSQL & 2.4.21 all on a computer w/ 8MB RAM?  A 486 or
> *very* low end Pentium?
>
> It'll thrash (in the literal sense) the page files.  *No* work
> will get done.

I built a test server four years ago on a P100 with 64 Megs of RAM and it
was already a pretty slow / old box at that time.

Considering that those kinds of beasts sell by the pound nowadays, I can't
imagine torturing yourself by using a 486 with 8 megs of ram.  Even my
ancient 486DX50 Toshiba 4700 has 16 Megs of ram in it.

If one has to develop in such a low-end environment, you're much better
off either writing Perl CGI or using PHP, both of which use much less memory
than JBoss.

I don't think I'd try to run JBoss / Postgresql on anything less than 64
or 128 Meg of RAM.  Even then you're probably looking at having a fair bit
of swapping going on.


Re: Postgres low end processing.

From
Richard Welty
Date:
On Fri, 03 Oct 2003 11:42:54 -0400 Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Postgres is bloatware by design: it was built to house PhD theses."
> -- J. Hellerstein (who ought to know)

If postgres is bloatware, what is Oracle 9i?

(After I downloaded a copy of Oracle 8i a couple of months back, I swore I'd
never complain about the size of postgresql ever ever again.)

richard
--
Richard Welty                                         rwelty@averillpark.net
Averill Park Networking                                         518-573-7592
    Java, PHP, PostgreSQL, Unix, Linux, IP Network Engineering, Security

Re: Postgres low end processing.

From
Shridhar Daithankar
Date:
Stef wrote:

> On Fri, 03 Oct 2003 12:32:00 -0400
> Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> => What exactly is failing?  And what's the platform, anyway?
>
> Nothing is really failing atm, except the funds for better
> hardware. JBOSS and some other servers need to be
> run on these machines, along with linux, which will be
> a minimal RH >= 7.2 with kernel 2.4.21
> (Any better suggestions here?)
>
> In this case, whatever is the least amount of memory
> postgres can run on, is what is needed. So this is still
> a kind of feasibility study. Of course, it will still be thoroughly
> tested, if it turns out to be possible. (Which I know it is, but not how)

If you mean to say that postgresql should use just 8MB of RAM rather than
running it on an 8MB machine, then that is impossible, given how much postgresql
relies upon the OS cache.

You may configure postgresql with 8MB of shared memory or the old holy default of
512K, but if your database is 100MB and the OS is caching half of it on behalf of
postgresql, your goal is already missed.
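
For reference, shared_buffers is counted in 8KB pages in postgresql.conf, so
the two figures above work out roughly to the following (illustrative lines,
not recommendations):

    shared_buffers = 1024   # 1024 x 8KB = 8MB of shared buffers
    shared_buffers = 64     # 64 x 8KB = 512KB, the old default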

  Shridhar


Re: Postgres low end processing.

From
Stef
Date:
Thanks for the replies,

On Fri, 3 Oct 2003 11:08:48 -0700
Josh Berkus <josh@agliodbs.com> wrote:
=> 1. Make sure that the WAL files (pg_xlog) are on a separate disk from the
=> database files, either through mounting or symlinking.

I'm not sure I understand how this helps?

=> 2. Tweak the .conf file for low vacuum_mem (1024?), but vacuum very
=> frequently, like every 1-5 minutes.  Spend some time tuning your
=> max_fsm_pages to the ideal level so that you're not allocating any extra
=> memory to the FSM.
=>
=> 3. If your concern is *average* CPU/RAM consumption, and not peak load
=> activity, increase wal_files and checkpoint_segments to do more efficient
=> batch processing of pending updates at the cost of some disk space.  If peak
=> load activity is a problem, don't do this.
=>
=> 4. Tune all of your queries carefully to avoid anything requiring a
=> RAM-intensive merge join or CPU-eating calculated expression hash join, or
=> similar computation-or-RAM-intensive operations.

Thanks, I'll try some of these, and post the results.
The actual machines seem to be Pentium I machines,
with 32M RAM. I've gathered that it is theoretically
possible, so now to go try it.

Regards
Stef


Re: Postgres low end processing.

From
Bruno Wolff III
Date:
On Mon, Oct 06, 2003 at 09:55:51 +0200,
  Stef <svb@ucs.co.za> wrote:
>
> Thanks, I'll try some of these, and post the results.
> The actual machines seem to be Pentium I machines,
> with 32M RAM. I've gathered that it is theoretically
> possible, so now to go try it.

I am running 7.4beta2 on a Pentium I machine with 48 MB of memory.
I was running an earlier version of Postgres (probably 7.1.x) on it
when it only had 32 MB of memory. It doesn't run very fast, but it
works OK. I remember the increase from 32MB to 48MB was very noticeable in
the time it took to serve web pages using information from the DB, but I
don't remember the details since it was a couple of years ago.

Re: Postgres low end processing.

From
Josh Berkus
Date:
Stef,

> => 1. Make sure that the WAL files (pg_xlog) are on a separate disk from the
> => database files, either through mounting or symlinking.
>
> I'm not sure I understand how this helps?

It gives you better fsync write performance on a low-end disk setup.
Otherwise, the disk is forced to do a hop-back-and-forth between the database
and the xlog, resulting in much slower updates and thus the database tying up
blocks of RAM longer -- particularly if your shared_buffers are set very low,
which they will be.

On RAID setups, this is unnecessary because the RAID takes care of disk access
management.  But on a low-end, 2-IDE-disk machine, you have to do it.

--
-Josh Berkus
 Aglio Database Solutions
 San Francisco


Re: Postgres low end processing.

From
Stef
Date:
Hi again all,

I've tested postgres 7.3.4 on Linux version 2.4.17
and this is what I found:

The initial instance took up 8372K and this fluctuated
between +- 8372K and 10372K, plus +- 3500K for
every connection.

I did quite a few transactions on both connections, plus
a few vacuums and a pg_dump, and the total memory usage
didn't seem to go over 16M.

I set all the _buffers, _mem, _fsm settings to the minimum,
restarted every time, but this made absolutely no noticeable
difference in total memory usage.

(I used a program called gmemusage to get these stats.)

On the same machine, I tested postgres 7.1.2 with basically
the same conf options (not _fsm) and got the following:

The initial instance was 1772K and fluctuated to +- 4000K,
plus +- 3400K for every connection.

Doing the same transactions, vacuum + pg_dump, total
memory usage didn't really go over 11M,
which was exactly what I needed.

Although I've lived through some of the shortcomings of
7.1.2, it is still very stable, and works perfectly for
what it is going to be used for.

Again, here, I was only able to restrict things a little
by changing the configuration options, but saw no major
difference in memory usage.

Regards
Stef

On Mon, 6 Oct 2003 09:55:51 +0200
Stef <svb@ucs.co.za> wrote:

=> Thanks for the replies,
=>
=> On Fri, 3 Oct 2003 11:08:48 -0700
=> Josh Berkus <josh@agliodbs.com> wrote:
=> => 1. Make sure that the WAL files (pg_xlog) are on a separate disk from the
=> => database files, either through mounting or symlinking.
=>
=> I'm not sure I understand how this helps?
=>
=> => 2. Tweak the .conf file for low vacuum_mem (1024?), but vacuum very
=> => frequently, like every 1-5 minutes.  Spend some time tuning your
=> => max_fsm_pages to the ideal level so that you're not allocating any extra
=> => memory to the FSM.
=> =>
=> => 3. If your concern is *average* CPU/RAM consumption, and not peak load
=> => activity, increase wal_files and checkpoint_segments to do more efficient
=> => batch processing of pending updates at the cost of some disk space.  If peak
=> => load activity is a problem, don't do this.
=> =>
=> => 4. Tune all of your queries carefully to avoid anything requiring a
=> => RAM-intensive merge join or CPU-eating calculated expression hash join, or
=> => similar computation-or-RAM-intensive operations.
=>
=> Thanks, I'll try some of these, and post the results.
=> The actual machines seem to be Pentium I machines,
=> with 32M RAM. I've gathered that it is theoretically
=> possible, so now to go try it.
=>
=> Regards
=> Stef
=>


Re: Postgres low end processing.

From
Jeff
Date:
On Tue, 7 Oct 2003, Stef wrote:

> The initial instance took up 8372K and this fluctuated
> between +- 8372K  and 10372K, plus +- 3500K for
> every connection.
>

Does that include/exclude the size of, say, shared code & libraries?
I know Linux does copy-on-write forking, so it may be less in reality...

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/