Discussion: Performance on SUSE w/ reiserfs
I have a SUSE 9 box that is running Postgres 8.0.1 compiled from source. Over time, I see the memory usage of the box go way way up (it's got 8GBs in it and by the end of the day, it'll be all used up) with what looks like cached inodes relating to the extreme IO generated by postgres. We replicate about 10GBs of data every day from our AS/400 into postgres, and it is the main database for our intranet portal, which will serve 40,000 pages on a good day.

I was wondering if there is something I'm doing wrong with my default settings of postgres that is keeping all that stuff cached, or if I just need to switch to XFS, or if there is some setting in postgres that I can tweak that will make this problem go away. It's gone beyond an annoyance and is now into the realm of getting me in trouble if I can't keep this DB server up and running. Even a minute or two of downtime in a restart is often too much.

Any help you can give in this would be extremely helpful, as I'm very far from an expert on Linux filesystems and postgres tuning.

Thanks!

--
Jon Brisbin
Webmaster
NPC International, Inc.
> I have a SUSE 9 box that is running Postgres 8.0.1 compiled from source.
> Over time, I see the memory usage of the box go way way up (it's got
> 8GBs in it and by the end of the day, it'll be all used up) with what
> looks like cached inodes relating to the extreme IO generated by
>
> I was wondering if there is something I'm doing wrong with my default
> settings of postgres that is keeping all that stuff cached, or if I just
> need to switch to XFS or if there is some setting in postgres that I can
> tweak that will make this problem go away. It's gone beyond an annoyance
> and is now into the realm of getting me in trouble if I can't keep this
> DB server up and running. Even a minute or two of downtime in a restart
> is often too much.
>
> Any help you can give in this would be extrememly helpful as I'm very
> far from an expert on Linux filesystems and postgres tuning.

You may want to submit your postgresql.conf. Upgrading to the latest stable version may also help, although my experience is related to FreeBSD and postgresql 7.4.8.

regards
Claus
Jon Brisbin <jon.brisbin@npcinternational.com> writes:
> I have a SUSE 9 box that is running Postgres 8.0.1 compiled from source.
> Over time, I see the memory usage of the box go way way up (it's got
> 8GBs in it and by the end of the day, it'll be all used up) with what
> looks like cached inodes relating to the extreme IO generated by
> postgres. We replicate about 10GBs of data every day from our AS/400
> into postgres, and it is the main database for our intranet portal,
> which will server 40,000 pages on a good day.

Are you sure it's not cached data pages, rather than cached inodes? If so, the above behavior is *good*.

People often have a mistaken notion that having near-zero free RAM means they have a problem. In point of fact, that is the way it is supposed to be (at least on Unix-like systems). This is just a reflection of the kernel doing what it is supposed to do, which is to use all spare RAM for caching recently accessed disk pages. If you're not swapping then you do not have a problem.

You should be looking at swap I/O rates (see vmstat or iostat) to determine if you have memory pressure, not "free RAM".

regards, tom lane
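Tom's point about watching swap activity rather than free RAM can be checked directly. A minimal sketch; the swap figures below are hard-coded from the /proc/meminfo output posted later in this thread, and on a live box you would read /proc/meminfo or watch vmstat instead:

```shell
# Memory pressure shows up as swap usage and swap I/O, not as low "free" RAM.
# These values come from the /proc/meminfo dump posted in this thread.
swap_total_kb=1056124
swap_free_kb=1056124

swap_used_kb=$((swap_total_kb - swap_free_kb))
echo "swap used: ${swap_used_kb} kB"

# On a live system, watch the si/so (swap-in/swap-out) columns instead:
#   vmstat 5
# Sustained non-zero si/so means real memory pressure; a large "cached"
# figure in free(1) does not.
```

With the numbers Jon posted, swap used is 0 kB, which is exactly Tom's point: the box is not under memory pressure at all.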
Jon,

> Any help you can give in this would be extrememly helpful as I'm very
> far from an expert on Linux filesystems and postgres tuning.

See Tom's response; it may be that you don't have an issue at all. If you do, it's probably the kernel, not the FS. 2.6.8 and a few other 2.6.single-digit kernels had memory leaks in shmem that would cause gradually escalating swappage. The solution to that one is to upgrade to 2.6.11.

--
Josh Berkus
Aglio Database Solutions
San Francisco
More info:

apps:/home/jbrisbin # free -mo
             total       used       free     shared    buffers     cached
Mem:          8116       5078       3038          0         92       4330
Swap:         1031          0       1031

apps:/home/jbrisbin # cat /proc/meminfo
MemTotal:        8311188 kB
MemFree:         3111668 kB
Buffers:           94604 kB
Cached:          4434764 kB
SwapCached:            0 kB
Active:          4844344 kB
Inactive:         279556 kB
HighTotal:       7469996 kB
HighFree:        2430976 kB
LowTotal:         841192 kB
LowFree:          680692 kB
SwapTotal:       1056124 kB
SwapFree:        1056124 kB
Dirty:               436 kB
Writeback:             0 kB
Mapped:           581924 kB
Slab:              48264 kB
Committed_AS:     651128 kB
PageTables:         4020 kB
VmallocTotal:     112632 kB
VmallocUsed:       13104 kB
VmallocChunk:      97284 kB
HugePages_Total:       0
HugePages_Free:        0
Hugepagesize:       2048 kB

apps:/home/jbrisbin # cat /proc/slabinfo
slabinfo - version: 2.0
...
reiser_inode_cache  28121  28140  512   7  1 : tunables
...
radix_tree_node     28092  28154  276  14  1 : tunables
...
inode_cache          1502   1520  384  10  1 : tunables
dentry_cache        40763  40794  152  26  1 : tunables
...
buffer_head         83929  94643   52  71  1 : tunables

Claus Guttesen wrote:
> You may want to submit your postgresql.conf. Upgrading to the latest
> stable version may also help, although my experience is related to
> FreeBSD and postgresql 7.4.8.

# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Comments are introduced
# with '#' anywhere on a line. The complete list of option names and
# allowed values can be found in the PostgreSQL documentation. The
# commented-out settings shown in this file represent the default values.
#
# Please note that re-commenting a setting is NOT sufficient to revert it
# to the default value, unless you restart the postmaster.
#
# Any option can also be given as a command line switch to the
# postmaster, e.g. 'postmaster -c log_connections=on'. Some options
# can be changed at run-time with the 'SET' SQL command.
#
# This file is read on postmaster startup and when the postmaster
# receives a SIGHUP. If you edit the file on a running system, you have
# to SIGHUP the postmaster for the changes to take effect, or use
# "pg_ctl reload". Some settings, such as listen_address, require
# a postmaster shutdown and restart to take effect.

#---------------------------------------------------------------------------
# FILE LOCATIONS
#---------------------------------------------------------------------------

# The default values of these variables are driven from the -D command line
# switch or PGDATA environment variable, represented here as ConfigDir.

# data_directory = 'ConfigDir'          # use data in another directory
# hba_file = 'ConfigDir/pg_hba.conf'    # the host-based authentication file
# ident_file = 'ConfigDir/pg_ident.conf'  # the IDENT configuration file

# If external_pid_file is not explicitly set, no extra pid file is written.
# external_pid_file = '(none)'          # write an extra pid file

#---------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#---------------------------------------------------------------------------

# - Connection Settings -

listen_addresses = '*'          # what IP interface(s) to listen on;
                                # defaults to localhost, '*' = any
#port = 5432
max_connections = 100
        # note: increasing max_connections costs about 500 bytes of shared
        # memory per connection slot, in addition to costs from shared_buffers
        # and max_locks_per_transaction.
#superuser_reserved_connections = 2
#unix_socket_directory = ''
#unix_socket_group = ''
#unix_socket_permissions = 0777 # octal
#rendezvous_name = ''           # defaults to the computer name

# - Security & Authentication -

#authentication_timeout = 60    # 1-600, in seconds
#ssl = false
#password_encryption = true
#krb_server_keyfile = ''
#db_user_namespace = false

#---------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#---------------------------------------------------------------------------

# - Memory -

shared_buffers = 1000           # min 16, at least max_connections*2, 8KB each
#work_mem = 1024                # min 64, size in KB
#maintenance_work_mem = 16384   # min 1024, size in KB
#max_stack_depth = 2048         # min 100, size in KB

# - Free Space Map -

#max_fsm_pages = 20000          # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1000       # min 100, ~50 bytes each

# - Kernel Resource Usage -

#max_files_per_process = 1000   # min 25
#preload_libraries = ''

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0          # 0-1000 milliseconds
#vacuum_cost_page_hit = 1       # 0-10000 credits
#vacuum_cost_page_miss = 10     # 0-10000 credits
#vacuum_cost_page_dirty = 20    # 0-10000 credits
#vacuum_cost_limit = 200        # 0-10000 credits

# - Background writer -

#bgwriter_delay = 200           # 10-10000 milliseconds between rounds
#bgwriter_percent = 1           # 0-100% of dirty buffers in each round
#bgwriter_maxpages = 100        # 0-1000 buffers max per round

#---------------------------------------------------------------------------
# WRITE AHEAD LOG
#---------------------------------------------------------------------------

# - Settings -

#fsync = true                   # turns forced synchronization on or off
#wal_sync_method = fsync        # the default varies across platforms:
                                # fsync, fdatasync, open_sync, or open_datasync
#wal_buffers = 8                # min 4, 8KB each
#commit_delay = 0               # range 0-100000, in microseconds
#commit_siblings = 5            # range 1-1000

# - Checkpoints -

#checkpoint_segments = 3        # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 300       # range 30-3600, in seconds
#checkpoint_warning = 30        # 0 is off, in seconds

# - Archiving -

#archive_command = ''           # command to use to archive a logfile segment

#---------------------------------------------------------------------------
# QUERY TUNING
#---------------------------------------------------------------------------

# - Planner Method Configuration -

#enable_hashagg = true
#enable_hashjoin = true
#enable_indexscan = true
#enable_mergejoin = true
#enable_nestloop = true
#enable_seqscan = true
#enable_sort = true
#enable_tidscan = true

# - Planner Cost Constants -

#effective_cache_size = 1000    # typically 8KB each
#random_page_cost = 4           # units are one sequential page fetch cost
#cpu_tuple_cost = 0.01          # (same)
#cpu_index_tuple_cost = 0.001   # (same)
#cpu_operator_cost = 0.0025     # (same)

# - Genetic Query Optimizer -

#geqo = true
#geqo_threshold = 12
#geqo_effort = 5                # range 1-10
#geqo_pool_size = 0             # selects default based on effort
#geqo_generations = 0           # selects default based on effort
#geqo_selection_bias = 2.0      # range 1.5-2.0

# - Other Planner Options -

#default_statistics_target = 10 # range 1-1000
#from_collapse_limit = 8
#join_collapse_limit = 8        # 1 disables collapsing of explicit JOINs

#---------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#---------------------------------------------------------------------------

# - Where to Log -

log_destination = 'syslog'      # Valid values are combinations of stderr,
                                # syslog and eventlog, depending on platform.

# This is relevant when logging to stderr:
redirect_stderr = true          # Enable capturing of stderr into log files.

# These are only relevant if redirect_stderr is true:
log_directory = 'pg_log'        # Directory where log files are written.
                                # May be specified absolute or relative to PGDATA
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
                                # Log file name pattern.
                                # May include strftime() escapes
#log_truncate_on_rotation = false  # If true, any existing log file of the
                                # same name as the new log file will be truncated
                                # rather than appended to. But such truncation
                                # only occurs on time-driven rotation,
                                # not on restarts or size-driven rotation.
                                # Default is false, meaning append to existing
                                # files in all cases.
#log_rotation_age = 1440        # Automatic rotation of logfiles will happen after
                                # so many minutes. 0 to disable.
#log_rotation_size = 10240      # Automatic rotation of logfiles will happen after
                                # so many kilobytes of log output. 0 to disable.

# These are relevant when logging to syslog:
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'

# - When to Log -

client_min_messages = notice    # Values, in order of decreasing detail:
                                # debug5, debug4, debug3, debug2, debug1,
                                # log, notice, warning, error
log_min_messages = notice       # Values, in order of decreasing detail:
                                # debug5, debug4, debug3, debug2, debug1,
                                # info, notice, warning, error, log, fatal,
                                # panic
log_error_verbosity = default   # terse, default, or verbose messages
log_min_error_statement = notice  # Values in order of increasing severity:
                                # debug5, debug4, debug3, debug2, debug1,
                                # info, notice, warning, error, panic(off)
#log_min_duration_statement = -1  # -1 is disabled, in milliseconds.
#silent_mode = false            # DO NOT USE without syslog or redirect_stderr

# - What to Log -

#debug_print_parse = false
#debug_print_rewritten = false
#debug_print_plan = false
#debug_pretty_print = false
#log_connections = false
#log_disconnections = false
#log_duration = false
#log_line_prefix = ''           # e.g. '<%u%%%d> '
                                # %u=user name %d=database name
                                # %r=remote host and port
                                # %p=PID %t=timestamp %i=command tag
                                # %c=session id %l=session line number
                                # %s=session start timestamp %x=transaction id
                                # %q=stop here in non-session processes
                                # %%='%'
#log_statement = 'none'         # none, mod, ddl, all
log_hostname = true

#---------------------------------------------------------------------------
# RUNTIME STATISTICS
#---------------------------------------------------------------------------

# - Statistics Monitoring -

#log_parser_stats = false
#log_planner_stats = false
#log_executor_stats = false
#log_statement_stats = false

# - Query/Index Statistics Collector -

#stats_start_collector = true
#stats_command_string = false
#stats_block_level = false
#stats_row_level = false
#stats_reset_on_server_start = true

#---------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#---------------------------------------------------------------------------

# - Statement Behavior -

#search_path = '$user,public'   # schema names
#default_tablespace = ''        # a tablespace name, or '' for default
#check_function_bodies = true
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = false
#statement_timeout = 0          # 0 is disabled, in milliseconds

# - Locale and Formatting -

#datestyle = 'iso, mdy'
#timezone = unknown             # actually, defaults to TZ environment setting
#australian_timezones = false
#extra_float_digits = 0         # min -15, max 2
#client_encoding = sql_ascii    # actually, defaults to database encoding

# These settings are initialized by initdb -- they might be changed
lc_messages = 'C'               # locale for system error message strings
lc_monetary = 'C'               # locale for monetary formatting
lc_numeric = 'C'                # locale for number formatting
lc_time = 'C'                   # locale for time formatting

# - Other Defaults -

#explain_pretty_print = true
#dynamic_library_path = '$libdir'

#---------------------------------------------------------------------------
# LOCK MANAGEMENT
#---------------------------------------------------------------------------

#deadlock_timeout = 1000        # in milliseconds
#max_locks_per_transaction = 64 # min 10, ~200*max_connections bytes each

#---------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#---------------------------------------------------------------------------

# - Previous Postgres Versions -

#add_missing_from = true
#regex_flavor = advanced        # advanced, extended, or basic
#sql_inheritance = true
#default_with_oids = true

# - Other Platforms & Clients -

#transform_null_equals = false

--
Jon Brisbin
Webmaster
NPC International, Inc.
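For context on the memory settings in the posted config: shared_buffers and effective_cache_size are counted in 8KB pages (per the comments in the file itself), so their absolute sizes are easy to work out. This is plain arithmetic on the posted values, not a recommendation from the thread:

```shell
# shared_buffers and effective_cache_size are counted in 8 KB pages,
# per the comments in the posted postgresql.conf.
page_kb=8

shared_buffers_pages=1000    # the value set in the posted config
effective_cache_pages=1000   # the commented-out default

shared_buffers_kb=$((shared_buffers_pages * page_kb))
effective_cache_kb=$((effective_cache_pages * page_kb))

echo "shared_buffers = ${shared_buffers_kb} kB"        # 8000 kB, i.e. ~8 MB
echo "effective_cache_size = ${effective_cache_kb} kB" # likewise ~8 MB
```

So both settings describe roughly 8 MB on a machine with 8 GB of RAM; later replies in this thread discuss how effective_cache_size relates to what the OS actually caches.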
Tom Lane wrote:
>
> Are you sure it's not cached data pages, rather than cached inodes?
> If so, the above behavior is *good*.
>
> People often have a mistaken notion that having near-zero free RAM means
> they have a problem. In point of fact, that is the way it is supposed
> to be (at least on Unix-like systems). This is just a reflection of the
> kernel doing what it is supposed to do, which is to use all spare RAM
> for caching recently accessed disk pages. If you're not swapping then
> you do not have a problem.

Except for the fact that my Java App server crashes when all the available memory is being used by caching and not reclaimed :-)

If it wasn't for the app server going down, I probably wouldn't care.

--
Jon Brisbin
Webmaster
NPC International, Inc.
Jon Brisbin wrote:
> Tom Lane wrote:
> >
> > Are you sure it's not cached data pages, rather than cached inodes?
> > If so, the above behavior is *good*.
> >
> > People often have a mistaken notion that having near-zero free RAM means
> > they have a problem. In point of fact, that is the way it is supposed
> > to be (at least on Unix-like systems). This is just a reflection of the
> > kernel doing what it is supposed to do, which is to use all spare RAM
> > for caching recently accessed disk pages. If you're not swapping then
> > you do not have a problem.
>
> Except for the fact that my Java App server crashes when all the
> available memory is being used by caching and not reclaimed :-)

Ah, so you have a different problem. What you should be asking is why the appserver crashes. You still seem to have a lot of free swap, judging by a nearby post. But maybe the problem is that the swap is completely used too, and so the OOM killer (is this Linux?) comes around and kills the appserver. Certainly the problem is not the caching. You should be monitoring when and why the appserver dies.

--
Alvaro Herrera
Architect, http://www.EnterpriseDB.com
"On the other flipper, one wrong move and we're Fatal Exceptions"
(T.U.X.: Term Unit X - http://www.thelinuxreview.com/TUX/)
Jon Brisbin <jon.brisbin@npcinternational.com> writes:
> Tom Lane wrote:
>> If you're not swapping then you do not have a problem.

> Except for the fact that my Java App server crashes when all the
> available memory is being used by caching and not reclaimed :-)

That's a kernel bug (or possibly a Java bug ;-)). I concur with Josh's suggestion that you need a newer kernel.

regards, tom lane
I have a postgresql 7.4.8-server with 4 GB ram.

> #max_fsm_pages = 20000          # min max_fsm_relations*16, 6 bytes each
> #max_fsm_relations = 1000       # min 100, ~50 bytes each

If you do a vacuum verbose (when it's convenient) the last couple of lines will tell you something like this:

INFO: free space map: 143 relations, 62034 pages stored; 63792 total pages needed
DETAIL: Allocated FSM size: 300 relations + 75000 pages = 473 kB shared memory.

It says 143 relations and 63792 total pages needed, so I upped my values to these settings:

max_fsm_relations = 300         # min 10, fsm is free space map, ~40 bytes
max_fsm_pages = 75000           # min 1000, fsm is free space map, ~6 bytes

> #effective_cache_size = 1000    # typically 8KB each

This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I changed it to:

effective_cache_size = 27462    # typically 8KB each

Bear in mind that this is 7.4.8 and FreeBSD, so these suggestions may not apply to your environment. These suggestions could be validated by the other members of this list.

regards
Claus
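Claus's procedure can be scripted. A sketch that pulls the two numbers out of the VACUUM VERBOSE tail; the INFO/DETAIL text below is embedded verbatim from his post as a stand-in for live output:

```shell
# Tail of "VACUUM VERBOSE" output, as posted above. On a real system this
# would come from something like:
#   psql -d mydb -c 'VACUUM VERBOSE' 2>&1 | tail -2
vacuum_tail='INFO: free space map: 143 relations, 62034 pages stored; 63792 total pages needed
DETAIL: Allocated FSM size: 300 relations + 75000 pages = 473 kB shared memory.'

# Field positions match the INFO line format shown above.
relations=$(printf '%s\n' "$vacuum_tail" | awk '/free space map:/ {print $5}')
pages_needed=$(printf '%s\n' "$vacuum_tail" | awk '/total pages needed/ {print $10}')

echo "set max_fsm_relations to at least $relations"
echo "set max_fsm_pages to at least $pages_needed"
```

As in Claus's post, you would then round these up with some headroom (he used 300 relations and 75000 pages) and restart the postmaster for the new FSM size to take effect.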
On Tue, 2005-10-11 at 09:41 +0200, Claus Guttesen wrote:
> I have a postgresql 7.4.8-server with 4 GB ram.
<snip>
> > #effective_cache_size = 1000    # typically 8KB each
>
> This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I
> changed it to:
>
> effective_cache_size = 27462    # typically 8KB each

Apparently this formula is no longer relevant on FreeBSD systems, which can now cache up to almost all the available RAM. With 4GB of RAM, one could specify most of the RAM as being available for caching, assuming that nothing but PostgreSQL runs on the server -- certainly 1/2 the RAM would be a reasonable value to tell the planner.

(This was verified by using dd:

dd if=/dev/zero of=/usr/local/pgsql/iotest bs=128k count=16384

to create a 2G file, then

dd if=/usr/local/pgsql/iotest of=/dev/null

If you run systat -vmstat 2 you will see 0% disk access during the read of the 2G file, indicating that it has, in fact, been cached.)

Sven
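Sven's half-the-RAM suggestion works out as follows for the 4 GB box in question. Plain arithmetic; the half-of-RAM figure is his suggestion for that particular setup, not a universal rule:

```shell
# effective_cache_size is counted in 8 KB pages. Sven suggests telling the
# planner that roughly half the RAM is available as OS cache on a dedicated
# 4 GB PostgreSQL box.
ram_kb=$((4 * 1024 * 1024))      # 4 GB in kB
cache_kb=$((ram_kb / 2))         # "1/2 the RAM would be a reasonable value"

effective_cache_size=$((cache_kb / 8))   # kB -> 8 KB pages
echo "effective_cache_size = ${effective_cache_size}"
```

That yields 262144 pages (2 GB), an order of magnitude above both the default of 1000 and the hibufspace-derived 27462 from the earlier formula.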
Realise also that unless you are running the 1.5 x86-64 build, Java will not use more than 1 gig, and if the app server requests more than 1 gig, Java will die (I've been there) with an out-of-memory error, even though there is plenty of free memory available. This can easily be caused by a lazy GC thread if the application is running high on CPU usage.
The kernel will not report memory used for caching pages as being unavailable; if a program calls malloc, the kernel will just drop the oldest cached disk pages and give the memory to the application.

Your free -mo shows 3 gig free even with cached disk pages. It looks to me more like either a Java problem or a kernel problem...
Alex Turner
NetEconomist
On 10/10/05, Jon Brisbin <jon.brisbin@npcinternational.com> wrote:
Tom Lane wrote:
>
> Are you sure it's not cached data pages, rather than cached inodes?
> If so, the above behavior is *good*.
>
> People often have a mistaken notion that having near-zero free RAM means
> they have a problem. In point of fact, that is the way it is supposed
> to be (at least on Unix-like systems). This is just a reflection of the
> kernel doing what it is supposed to do, which is to use all spare RAM
> for caching recently accessed disk pages. If you're not swapping then
> you do not have a problem.
Except for the fact that my Java App server crashes when all the
available memory is being used by caching and not reclaimed :-)
If it wasn't for the app server going down, I probably wouldn't care.
--
Jon Brisbin
Webmaster
NPC International, Inc.
Alex Turner wrote:
> Realise also that unless you are running the 1.5 x86-64 build, java
> will not use more than 1Gig, and if the app server requests more than
> 1gig, Java will die (I've been there) with an out of memory error,
> even though there is plenty of free mem available. This can easily be
> cause by a lazy GC thread if the applicaiton is running high on CPU usage.

On my side of Planet Earth, the standard non-x64 1.5 JVM will happily use more than 1G of memory (on Linux and Solaris, can't speak for Windows). If you're running larger programs, it's probably a good idea to use the -server compiler in the JVM as well. I regularly run with -Xmx1800m and regularly have >1GB heap sizes.

The standard GC will not cause an OOM error if space remains for the requested object. The GC thread blocks all other threads during its activity, whatever else is happening on the machine. The newer/experimental GCs did have some potential race conditions, but I believe those have been resolved in the 1.5 JVMs.

Finally, note that the latest _05 release of the 1.5 JVM also now supports large page sizes on Linux and Windows: -XX:+UseLargePages can be quite beneficial depending on the memory patterns in your programs.

-- Alan
Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64) but I was more thinking 1.4 which many folks are still using.
Alex
On 10/11/05, Alan Stange <stange@rentec.com> wrote:
Alex Turner wrote:
> Realise also that unless you are running the 1.5 x86-64 build, java
> will not use more than 1Gig, and if the app server requests more than
> 1gig, Java will die (I've been there) with an out of memory error,
> even though there is plenty of free mem available. This can easily be
> cause by a lazy GC thread if the applicaiton is running high on CPU usage.
On my side of Planet Earth, the standard non-x64 1.5 JVM will happily
use more than 1G of memory (on linux and Solaris, can't speak for
Windows). If you're running larger programs, it's probably a good idea
to use the -server compiler in the JVM as well. I regularly run with
-Xmx1800m and regularly have >1GB heap sizes.
The standard GC will not cause on OOM error if space remains for the
requested object. The GC thread blocks all other threads during its
activity, whatever else is happening on the machine. The
newer/experimental GC's did have some potential race conditions, but I
believe those have been resolved in the 1.5 JVMs.
Finally, note that the latest _05 release of the 1.5 JVM also now
supports large page sizes on Linux and Windows:
-XX:+UseLargePages this can be quite beneficial depending on the
memory patterns in your programs.
-- Alan
Alex Turner wrote:
> Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64)
> but I was more thinking 1.4 which many folks are still using.

The 1.4.x JVMs will also work just fine with much more than 1GB of memory. Perhaps you'd like to try again?

-- Alan

> On 10/11/05, *Alan Stange* <stange@rentec.com
> <mailto:stange@rentec.com>> wrote:
>
>     Alex Turner wrote:
>
>     > Realise also that unless you are running the 1.5 x86-64 build, java
>     > will not use more than 1Gig, and if the app server requests more than
>     > 1gig, Java will die (I've been there) with an out of memory error,
>     > even though there is plenty of free mem available. This can easily be
>     > cause by a lazy GC thread if the applicaiton is running high on CPU usage.
>
>     On my side of Planet Earth, the standard non-x64 1.5 JVM will happily
>     use more than 1G of memory (on linux and Solaris, can't speak for
>     Windows). If you're running larger programs, it's probably a good idea
>     to use the -server compiler in the JVM as well. I regularly run with
>     -Xmx1800m and regularly have >1GB heap sizes.
>
>     The standard GC will not cause on OOM error if space remains for the
>     requested object. The GC thread blocks all other threads during its
>     activity, whatever else is happening on the machine. The
>     newer/experimental GC's did have some potential race conditions, but I
>     believe those have been resolved in the 1.5 JVMs.
>
>     Finally, note that the latest _05 release of the 1.5 JVM also now
>     supports large page sizes on Linux and Windows:
>     -XX:+UseLargePages this can be quite beneficial depending on the
>     memory patterns in your programs.
>
>     -- Alan
Well, to each his own I guess; we did extensive testing on 1.4, and it refused to allocate much past 1 gig on both Linux x86/x86-64 and Windows.
Alex
On 10/11/05, Alan Stange <stange@rentec.com> wrote:
Alex Turner wrote:
> Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64)
> but I was more thinking 1.4 which many folks are still using.
The 1.4.x JVM's will also work just fine with much more than 1GB of
memory. Perhaps you'd like to try again?
-- Alan
>
> On 10/11/05, *Alan Stange* <stange@rentec.com
> <mailto: stange@rentec.com>> wrote:
>
> Alex Turner wrote:
>
> > Realise also that unless you are running the 1.5 x86-64 build, java
> > will not use more than 1Gig, and if the app server requests more
> than
> > 1gig, Java will die (I've been there) with an out of memory error,
> > even though there is plenty of free mem available. This can
> easily be
> > cause by a lazy GC thread if the applicaiton is running high on
> CPU usage.
>
> On my side of Planet Earth, the standard non-x64 1.5 JVM will happily
> use more than 1G of memory (on linux and Solaris, can't speak for
> Windows). If you're running larger programs, it's probably a good
> idea
> to use the -server compiler in the JVM as well. I regularly run with
> -Xmx1800m and regularly have >1GB heap sizes.
>
> The standard GC will not cause on OOM error if space remains for the
> requested object. The GC thread blocks all other threads during its
> activity, whatever else is happening on the machine. The
> newer/experimental GC's did have some potential race conditions, but I
> believe those have been resolved in the 1.5 JVMs.
>
> Finally, note that the latest _05 release of the 1.5 JVM also now
> supports large page sizes on Linux and Windows:
> -XX:+UseLargePages this can be quite beneficial depending on the
> memory patterns in your programs.
>
> -- Alan
>
>