Discussion: out of memory - no sort


out of memory - no sort

From:
Don
Date:
I am trying a simple access of a table and get an out of memory error.  How do I avoid this issue?  It seems I have some configuration set wrong.

Our system has 24GB of memory and is dedicated to the postgres database.

Background information:

aquarec=> explain analyze verbose select * from ens_memb;

                                                       
                                                        QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 Seq Scan on ens_memb  (cost=0.00..719893.12 rows=32216212 width=62) (actual time=4.954..37513.377 rows=32216154 loops=1)
   Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate, source, tyr, val
 Total runtime: 39588.386 ms


#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------

# - Memory -

shared_buffers = 6144MB                 # min 128kB
                                        # (change requires restart)
#temp_buffers = 8MB                     # min 800kB
max_prepared_transactions = 5           # zero disables the feature
                                        # (change requires restart)
# Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero unless you
# actively intend to use prepared transactions.
work_mem = 48MB                         # min 64kB
maintenance_work_mem = 256MB            # min 1MB
#max_stack_depth = 2MB                  # min 100kB

# - Kernel Resource Usage -

#max_files_per_process = 1000           # min 25
                                        # (change requires restart)
#shared_preload_libraries = ''          # (change requires restart)

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0ms                # 0-100 milliseconds
#vacuum_cost_page_hit = 1               # 0-10000 credits
#vacuum_cost_page_miss = 10             # 0-10000 credits
#vacuum_cost_page_dirty = 20            # 0-10000 credits
#vacuum_cost_limit = 200                # 1-10000 credits

# - Background Writer -

#bgwriter_delay = 200ms                 # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100            # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0          # 0-10.0 multiplier on buffers scanned/round

# - Asynchronous Behavior -

#effective_io_concurrency = 1           # 1-1000. 0 disables prefetching

#------------------------------------------------------------------------------
# WRITE AHEAD LOG
#------------------------------------------------------------------------------

# - Settings -

#fsync = on                             # turns forced synchronization on or off
#synchronous_commit = on                # immediate fsync at commit
#wal_sync_method = fsync                # the default is the first option
                                        # supported by the operating system:
                                        #   open_datasync
                                        #   fdatasync
                                        #   fsync
                                        #   fsync_writethrough
                                        #   open_sync
#full_page_writes = on                  # recover from partial page writes
wal_buffers = 8MB                       # min 32kB
                                        # (change requires restart)
#wal_writer_delay = 200ms               # 1-10000 milliseconds

#commit_delay = 0                       # range 0-100000, in microseconds
#commit_siblings = 5                    # range 1-1000

# - Checkpoints -

checkpoint_segments = 32                # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min              # range 30s-1h
#checkpoint_completion_target = 0.5     # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s               # 0 disables

# - Archiving -

#archive_mode = off             # allows archiving to be done
                                # (change requires restart)
#archive_command = ''           # command to use to archive a logfile segment
#archive_timeout = 0            # force a logfile segment switch after this
                                # number of seconds; 0 disables

#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------

# - Planner Method Configuration -

#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on

# - Planner Cost Constants -

#seq_page_cost = 1.0                    # measured on an arbitrary scale
random_page_cost = 3.0                  # same scale as above
#cpu_tuple_cost = 0.01                  # same scale as above
#cpu_index_tuple_cost = 0.005           # same scale as above
#cpu_operator_cost = 0.0025             # same scale as above
effective_cache_size = 12288MB

# - Genetic Query Optimizer -

#geqo = on
#geqo_threshold = 12
#geqo_effort = 5                        # range 1-10
#geqo_pool_size = 0                     # selects default based on effort
#geqo_generations = 0                   # selects default based on effort
#geqo_selection_bias = 2.0              # range 1.5-2.0

# - Other Planner Options -

default_statistics_target = 100         # range 1-10000
#constraint_exclusion = partition       # on, off, or partition
#cursor_tuple_fraction = 0.1            # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8                # 1 disables collapsing of explicit
                                        # JOIN clauses

Re: out of memory - no sort

From:
Pavel Stehule
Date:
Hello

If the table is large, then the client can raise this exception too.

Try setting FETCH_COUNT to 1000:

http://www.postgresql.org/docs/8.4/interactive/app-psql.html
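For example, in a psql session (a sketch of the suggestion above; with FETCH_COUNT set, psql retrieves the result through a cursor in batches instead of buffering all rows client-side):

```sql
\set FETCH_COUNT 1000
SELECT * FROM ens_memb;   -- now fetched and displayed 1000 rows at a time
```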

Regards

Pavel Stehule

2011/8/30 Don <Donald.Laurine@noaa.gov>:
> I am trying a simple access of a table and get an out of memory error.  How
> do I avoid this issue.  It seems I have some configuration set wrong.
>
> Our system has 24GB of memory and is dedicated to the postgres database.
>
> Back ground information
>
> aquarec=> explain analyze verbose select * from ens_memb;
>                                                         QUERY
> PLAN
>
--------------------------------------------------------------------------------------------------------------------------
>  Seq Scan on ens_memb  (cost=0.00..719893.12 rows=32216212 width=62) (actual
> time=4.954..37513.377 rows=32216154 loops=1)
>    Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate, source, tyr, val
>  Total runtime: 39588.386 ms

Re: out of memory - no sort

From:
John R Pierce
Date:
On 08/30/11 7:28 AM, Don wrote:
> I am trying a simple access of a table and get an out of memory
> error.  How do I avoid this issue.  It seems I have some configuration
> set wrong.
>
> Our system has 24GB of memory and is dedicated to the postgres database.
>
> Back ground information
>
> aquarec=> explain analyze verbose select * from ens_memb;
>                                                         QUERY PLAN
>
--------------------------------------------------------------------------------------------------------------------------
>  Seq Scan on ens_memb  (cost=0.00..719893.12 rows=32216212 width=62)
> (actual time=4.954..37513.377 rows=32216154 loops=1)
>    Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate, source,
> tyr, val
>  Total runtime: 39588.386 ms

wild guess (since you didn't show the error), your system doesn't have
enough memory available to store all 32 million rows of your result
set.  This could be happening on the CLIENT ('psql') side or the server
side (and in fact, if both are on the same system, I believe that query
as written will require two copies of the result set in memory)
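One way to avoid holding the whole result set client-side (a sketch, not something from the thread) is an explicit server-side cursor, so only one batch of rows is in client memory at a time:

```sql
BEGIN;
DECLARE memb_cur NO SCROLL CURSOR FOR SELECT * FROM ens_memb;
FETCH FORWARD 1000 FROM memb_cur;   -- repeat until it returns no rows
CLOSE memb_cur;
COMMIT;
```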


--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast


Re: out of memory - no sort

From:
Don
Date:
Pavel...

Thanks for the reply...

This still did not solve the issue.  It seems odd that a simple select
command in psql accessing 32M records should cause a problem.  I
have tables much larger than this and may want to access them the same way.

I have 24 GB RAM on the server and 32GB RAM on the client machine. Both
machines are 64bit.

Thanks Don


On 8/30/2011 10:25 AM, Pavel Stehule wrote:
> Hello
>
> if table is large, then client can raise this exception too
>
> try to set FETCH_COUNT to 1000
>
> http://www.postgresql.org/docs/8.4/interactive/app-psql.html
>
> Regards
>
> Pavel Stehule
>
> 2011/8/30 Don<Donald.Laurine@noaa.gov>:
>> I am trying a simple access of a table and get an out of memory error.  How
>> do I avoid this issue.  It seems I have some configuration set wrong.
>>
>> Our system has 24GB of memory and is dedicated to the postgres database.
>>
>> Back ground information
>>
>> aquarec=>  explain analyze verbose select * from ens_memb;
>>                                                          QUERY
>> PLAN
>>
--------------------------------------------------------------------------------------------------------------------------
>>   Seq Scan on ens_memb  (cost=0.00..719893.12 rows=32216212 width=62) (actual
>> time=4.954..37513.377 rows=32216154 loops=1)
>>     Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate, source, tyr, val
>>   Total runtime: 39588.386 ms


Re: out of memory - no sort

From:
Scott Ribe
Date:
On Aug 31, 2011, at 9:51 AM, Don wrote:

> Both machines are 64bit.

Are all your server & client builds 64-bit?

32M rows, unless the rows are <50 bytes each, you'll never be able to manipulate that selection in memory with a 32-bit app.

--
Scott Ribe
scott_ribe@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





Re: out of memory - no sort

From:
Pavel Stehule
Date:
Hello

2011/8/31 Don <Donald.Laurine@noaa.gov>:
> Pavel...
>
> Thanks for the reply...
>
> This still did not solve the issue.  It seems odd that a simple select
> command in psql accessing 32MB of records should cause a problem.  I have
> tables much larger than this and may want to access them the same way.
>

So there are two possibilities:

a) broken datafiles
b) a PostgreSQL bug

Pavel

> I have 24 GB RAM on the sever and 32GB RAM on the client machine. Both
> machines are 64bit.
>
> Thanks Don
>
>
> On 8/30/2011 10:25 AM, Pavel Stehule wrote:
>>
>> Hello
>>
>> if table is large, then client can raise this exception too
>>
>> try to set FETCH_COUNT to 1000
>>
>> http://www.postgresql.org/docs/8.4/interactive/app-psql.html
>>
>> Regards
>>
>> Pavel Stehule
>>
>> 2011/8/30 Don<Donald.Laurine@noaa.gov>:
>>>
>>> I am trying a simple access of a table and get an out of memory error.
>>>  How
>>> do I avoid this issue.  It seems I have some configuration set wrong.
>>>
>>> Our system has 24GB of memory and is dedicated to the postgres database.
>>>
>>> Back ground information
>>>
>>> aquarec=>  explain analyze verbose select * from ens_memb;
>>>                                                         QUERY
>>> PLAN
>>>
>>>
--------------------------------------------------------------------------------------------------------------------------
>>>  Seq Scan on ens_memb  (cost=0.00..719893.12 rows=32216212 width=62)
>>> (actual
>>> time=4.954..37513.377 rows=32216154 loops=1)
>>>    Output: id, shefpc, bwmon, ewmon, pb, tb, cdate, vdate, source, tyr,
>>> val
>>>  Total runtime: 39588.386 ms

Re: out of memory - no sort

From:
Don
Date:
The server is 64 bit and the client is 32 bit.  I tried the select * from table on the server and the query worked, but I am puzzled why it does not work on the 32bit machine.  I had always thought that a 32bit machine could access up to 4GB.
So what is the limiting factor?

On 8/31/2011 8:57 AM, Scott Ribe wrote:
> On Aug 31, 2011, at 9:51 AM, Don wrote:
>
>> Both machines are 64bit.
>
> Are all your server & client builds 64-bit?
>
> 32M rows, unless the rows are <50 bytes each, you'll never be able to manipulate that selection in memory with a 32-bit app.

Re: out of memory - no sort

From:
Scott Ribe
Date:
On Aug 31, 2011, at 10:52 AM, Don wrote:

> I had always thought that a 32bit machine could access up to 4GB.
> So what is the limiting factor ?

- Half of your memory space may be given over to memory-mapped I/O. Now you're down to 2GB.

- Your process's executable, plus any libraries it uses, plus all the system libraries that they touch, recursively all the way down, are mapped into this space. Now you're likely down to 1.5GB or less free.

- Then of course your process allocates various data structures for each row; even if it's just a huge array of pointers to each row, that would be overhead. And of course the overhead is not nearly that simple--there will be allocations for & pointers to varchars, and info about columns and data types, and heap data structures to keep track of allocated vs free blocks.

- Memory will be fragmented of course, so you can't even use all of what's left.

So no, you can't manipulate 32M of anything except plain numbers or very simple structs in RAM in a 32-bit process.
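A back-of-envelope check against the plan earlier in the thread bears this out (my own sketch; the ~40 bytes/row of client-side overhead is a rough assumption, not a measured figure):

```sql
-- Raw tuple data alone, from the plan's rows=32216154 and width=62:
SELECT 32216154::bigint * 62 AS raw_bytes;          -- 1997401548, ~1.86 GiB
-- Add an assumed ~40 bytes/row for pointers and bookkeeping:
SELECT 32216154::bigint * (62 + 40) AS est_bytes;   -- ~3.06 GiB, well past
                                                    -- usable 32-bit space
```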

--
Scott Ribe
scott_ribe@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice