Thread: OOM on large SELECT


OOM on large SELECT

From:
Angelo Nicolosi
Date:
Hello,
I'm Angelo and I'm new to this list.
I'm a computer science student and I'm working on a project with PostgreSQL.
I wrote some C-language functions and added them to a database, and a client written in Java has to use them.
The problem is that if the table the client was working on was larger than 10000 entries, postgres was killed by the OOM killer. Searching the internet for a solution, I found a suggestion to disable auto-commit when the client makes big SELECTs.
But the problem remains.
The dmesg command says:

Out of memory: Killed process 22592 (postgres).

And the only thing the client outputs is: Killed.
On various web pages some people said the problem could be a bug in the JDBC driver, because it doesn't really close ResultSets. Is this true? Is there some way to resolve the problem?
Thank you in advance for your help.

Cheers,
Angelo.
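
The auto-commit workaround mentioned above refers to pgjdbc's cursor-based fetching: by default the driver loads the entire ResultSet into client memory, and it only streams rows from a server-side cursor when auto-commit is off, a positive fetch size is set, and the ResultSet is forward-only. A minimal sketch of that setup (the table name and the connection are placeholders, not from this thread):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StreamingFetch {

    // Per the pgjdbc documentation, the driver only uses a server-side
    // cursor when auto-commit is off, the fetch size is positive, and the
    // ResultSet is TYPE_FORWARD_ONLY; otherwise it buffers every row.
    static boolean willUseCursor(boolean autoCommit, int fetchSize, boolean forwardOnly) {
        return !autoCommit && fetchSize > 0 && forwardOnly;
    }

    // Stream a large SELECT in batches instead of loading all rows at once.
    // "big_table" is a placeholder name.
    static void streamLargeSelect(Connection conn) throws SQLException {
        conn.setAutoCommit(false);                  // required for cursor-based fetching
        try (Statement st = conn.createStatement()) {  // default is TYPE_FORWARD_ONLY
            st.setFetchSize(1000);                  // fetch 1000 rows per round trip
            try (ResultSet rs = st.executeQuery("SELECT * FROM big_table")) {
                while (rs.next()) {
                    // process one row; only ~1000 rows are held in memory at a time
                }
            }
        }
        conn.commit();
    }
}
```

Note that this only bounds the client's memory use; it does not reduce what the server backend allocates, and it is the server process that is being killed here.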



Re: OOM on large SELECT

From:
Oliver Jowett
Date: Thu, 17 Sep 2009 03:14:31 +1200
Angelo Nicolosi wrote:

> The dmesg command says:
>
> Out of memory: Killed process 22592 (postgres).

If you are getting OOM-kills from the kernel, that usually means you
have system-wide memory exhaustion.

> I wrote some C-Language functions, added on a database [...]

Since the size of a JVM is roughly constant, I would guess that the
problem actually lies in your C code.

Alternatively your system just doesn't have enough virtual memory
configured to do whatever it is you're trying to do. Add more swap.

Either way, it doesn't look like a JDBC driver problem based on the
information you have given.

-O

Re: OOM on large SELECT

From:
Angelo Nicolosi
Date:
It's possible that the problem is in my C code, but every time I allocate memory, always using the palloc() function, I always call pfree() as well.
Is there some way to analyze the code while it is running inside the PostgreSQL server (something like valgrind)?
However, the command free -m on my machine outputs:

             total       used       free     shared    buffers     cached
Mem:          2010        664       1345          0        157        383
-/+ buffers/cache:        123       1886
Swap:        16386         41      16345

I think the swap is enough.
Could you give me some tips on how I can see where the problem is?
Thank you for your help!

Cheers,
Angelo.




Re: OOM on large SELECT

From:
John R Pierce
Date: Wed, 16 Sep 2009 09:30:59 -0700
Angelo Nicolosi wrote:
> It's possible that the problem is in my C code, but every time I
> allocate memory, always using the palloc() function, I always
> call pfree() as well.
> Is there some way to analyze the code while it is running inside the
> PostgreSQL server (something like valgrind)?
> However, the command free -m on my machine outputs:
>
>              total       used       free     shared    buffers     cached
> Mem:          2010        664       1345          0        157        383
> -/+ buffers/cache:        123       1886
> Swap:        16386         41      16345
>
> I think the swap is enough.
> Could you give me some tips on how I can see where the problem is?
> Thank you for your help!

do you know what query you were making when you ran out of memory?   it
-appears- it was a postgres server process that was OOM'd.

what OS and version are you on (OOM seems to imply it's likely linux,
since no other OS I'm familiar with would randomly kill processes like
that), what version of postgres, etc.?

also, what are the various memory settings in your postgresql.conf
(shared_buffers, work_mem, etc.)?



Re: OOM on large SELECT

From:
Oliver Jowett
Date:
Angelo Nicolosi wrote:

> Is there some way to analyze the code while it is running inside the
> PostgreSQL server (something like valgrind)?

You are probably better off asking on pgsql-hackers.

-O

Re: OOM on large SELECT

From:
Angelo Nicolosi
Date: Thu, 17 Sep 2009 19:03 +0200
Sorry for the delay in answering, but I was trying to figure it out.
However, I saw that the memory postgres is using is getting larger step by step.
So it doesn't free it.
After the third query it is already full, and one of the postgres processes is killed by the OOM killer.
When the process is killed, the program is usually about to call a stored function again.
By the way, the info you asked me for is:

postgres (PostgreSQL) 8.4.0
Linux kernel 2.6.18, 64-bit

For the memory settings I have to contact the system admin, because I don't have the rights on that machine to read the configuration file.
Thank you again to all for your help.
Cheers,
Angelo.



Re: OOM on large SELECT

From:
John R Pierce
Date:
Angelo Nicolosi wrote:
> Sorry for the delay in answering, but I was trying to figure it out.
> However, I saw that the memory postgres is using is getting
> larger step by step.
> So it doesn't free it.
> After the third query it is already full, and one of the postgres
> processes is killed by the OOM killer.
> When the process is killed, the program is usually about to call
> a stored function again.
> By the way, the info you asked me for is:

I wonder if the work_mem settings in postgresql.conf are too large for
this computer?   if you're not the postgres administrator, you'll need
to work with them to identify this problem.

note, btw, 8.4.0 has been superseded by 8.4.1, with a lot of bug
fixes.   painless in-place upgrade.





Re: OOM on large SELECT

From:
Hannu Krosing
Date: Sun, 20 Sep 2009 13:08:23 +0300
On Thu, 2009-09-17 at 19:03 +0200, Angelo Nicolosi wrote:
> Sorry for the delay in answering, but I was trying to figure it out.
> However, I saw that the memory postgres is using is getting
> larger step by step.
> So it doesn't free it.

If the memory is allocated using palloc() and not freed even after the
query finishes, then you must be using a wrong memory context.

> After the third query it is already full, and one of the postgres
> processes is killed by the OOM killer.
> When the process is killed, the program is usually about to call
> a stored function again.
> By the way, the info you asked me for is:
>
>
> postgres (PostgreSQL) 8.4.0
> Linux kernel 2.6.18 64bits
>
>
> For the memory settings I have to contact the system admin, because I
> don't have the rights on that machine to read the configuration
> file.

From psql, do

show work_mem;

and to see all conf params, do

show all;



--
Hannu Krosing   http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability
   Services, Consulting and Training



Re: OOM on large SELECT

From:
Angelo Nicolosi
Date:
Hello,
the output of show work_mem; is:

 work_mem
----------
 1MB
(1 row)

Could this be the problem?

# - Memory -

shared_buffers = 32MB                  # min 128kB
                                         # (change requires restart)
#temp_buffers = 8MB                   # min 800kB
#max_prepared_transactions = 0    # zero disables the feature
                                         # (change requires restart)
# Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero unless you
# actively intend to use prepared transactions.
#work_mem = 1MB                         # min 64kB
#maintenance_work_mem = 16MB     # min 1MB
#max_stack_depth = 2MB                # min 100kB

# - Kernel Resource Usage -

#max_files_per_process = 1000      # min 25
                                         # (change requires restart)
#shared_preload_libraries = ''          # (change requires restart)

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0ms                # 0-100 milliseconds
#vacuum_cost_page_hit = 1               # 0-10000 credits
#vacuum_cost_page_miss = 10             # 0-10000 credits
#vacuum_cost_page_dirty = 20            # 0-10000 credits
#vacuum_cost_limit = 200                # 1-10000 credits

# - Background Writer -

#bgwriter_delay = 200ms                 # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100            # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0          # 0-10.0 multiplier on buffers scanned/round

# - Asynchronous Behavior -

#effective_io_concurrency = 1           # 1-1000. 0 disables prefetching

Is there maybe some problem in the configuration?
Thank you again for your help.
Regards,
Angelo.



