Discussion: out of memory error


out of memory error

From: Mark Striebeck
Date:
Hi,

we are using Postgres with a J2EE application (JBoss) and get
intermittent "out of memory" errors on the Postgres database. We are
running on a fairly large Linux server (Dual 3GHz, 2GB Ram) with the
following parameters:

shared_buffers = 8192
sort_mem = 8192
effective_cache_size = 234881024
random_page_cost = 2

(everything else is default)

The error message in the log is:

Jun 10 17:20:04 cruisecontrol-rhea postgres[6856]: [6-1] ERROR:  53200:
out of memory
Jun 10 17:20:04 cruisecontrol-rhea postgres[6856]: [6-2] DETAIL:  Failed
on request of size 208.
Jun 10 17:20:04 cruisecontrol-rhea postgres[6856]: [6-3] LOCATION:
AllocSetAlloc, aset.c:700

All failures are with the following query (again, it only fails every
now and then). The query returns only a few results:
STATEMENT:  SELECT
    Tracker_Artifact0.id AS id,
    Tracker_Artifact0.priority AS priority,
    Tracker_Artifact_extension_f1.path AS projectPathString,
    Tracker_Artifact_extension_f1.title AS projectTitle,
    Tracker_Artifact_extension_f0.project_id AS projectId,
    Tracker_Artifact_extension_f0.path AS folderPathString,
    Tracker_Artifact_extension_f0.title AS folderTitle,
    Tracker_Artifact_extension0.folder_id AS folderId,
    Tracker_Artifact_extension0.title AS title,
    Tracker_Artifact_extension0.name AS name,
    Tracker_Artifact0.description AS description,
    Tracker_Artifact_group0.value AS artifactGroup,
    Tracker_Artifact_status0.value AS status,
    Tracker_Artifact_status0.value_class AS statusClass,
    Tracker_Artifact_category0.value AS category,
    Tracker_Artifact_customer0.value AS customer,
    Tracker_Artifact_extension_c0.username AS submittedByUsername,
    Tracker_Artifact_extension_c0.full_name AS submittedByFullname,
    Tracker_Artifact_extension0.date_created AS submittedDate,
    Tracker_Artifact0.close_date AS closeDate,
    Tracker_Artifact_ArtifactAss0.username AS assignedToUsername,
    Tracker_Artifact_ArtifactAss0.full_name AS assignedToFullname,
    Tracker_Artifact_extension0.date_last_modified AS lastModifiedDate,
    Tracker_Artifact0.estimated_hours AS estimatedHours,
    Tracker_Artifact0.actual_hours AS actualHours,
    Tracker_Artifact_extension0.version AS version
FROM
    field_value Tracker_Artifact_group0,
    item Tracker_Artifact_extension0,
    relationship Tracker_Artifact_relation_Ar0,
    sfuser Tracker_Artifact_ArtifactAss0,
    field_value Tracker_Artifact_status0,
    field_value Tracker_Artifact_category0,
    field_value Tracker_Artifact_customer0,
    folder Tracker_Artifact_extension_f0,
    artifact Tracker_Artifact0,
    project Tracker_Artifact_extension_f1,
    sfuser Tracker_Artifact_extension_c0
WHERE
    Tracker_Artifact0.id=Tracker_Artifact_extension0.id
    AND
Tracker_Artifact_extension0.folder_id=Tracker_Artifact_extension_f0.id
    AND
Tracker_Artifact_extension_f0.project_id=Tracker_Artifact_extension_f1.id
    AND Tracker_Artifact0.group_fv=Tracker_Artifact_group0.id
    AND Tracker_Artifact0.status_fv=Tracker_Artifact_status0.id
    AND Tracker_Artifact0.category_fv=Tracker_Artifact_category0.id
    AND Tracker_Artifact0.customer_fv=Tracker_Artifact_customer0.id
    AND
Tracker_Artifact_extension0.created_by_id=Tracker_Artifact_extension_c0.id
    AND Tracker_Artifact_relation_Ar0.is_deleted=false
    AND
Tracker_Artifact_relation_Ar0.relationship_type_name='ArtifactAssignment'
    AND
Tracker_Artifact_relation_Ar0.origin_id=Tracker_Artifact_ArtifactAss0.id
    AND Tracker_Artifact0.id=Tracker_Artifact_relation_Ar0.target_id
    AND (Tracker_Artifact_extension_f1.path='projects' OR
Tracker_Artifact_extension_f1.path LIKE 'projects.%')
    AND Tracker_Artifact_extension0.is_deleted=false
    AND Tracker_Artifact_status0.value_class='Open'
    AND Tracker_Artifact_ArtifactAss0.username='foundr12622313'
ORDER BY
    priority ASC,
    lastModifiedDate DESC

Can anyone see anything dangerous about this query? What's the best way
to analyze this further?

Thanks
    MarkS

Re: out of memory error

From: Richard Huxton
Date:
Mark Striebeck wrote:
> Hi,
>
> we are using Postgres with a J2EE application (JBoss) and get
> intermittent "out of memory" errors on the Postgres database. We are
> running on a fairly large Linux server (Dual 3GHz, 2GB Ram) with the
> following parameters:
>
> shared_buffers = 8192
> sort_mem = 8192
> effective_cache_size = 234881024
> random_page_cost = 2

effective_cache_size is measured in disk blocks, not bytes, so you'll
want to reduce that. It should be whatever the typical "cached" readout
of top is, divided by 8k.
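Worked through with a concrete (hypothetical) figure: if top typically shows about 1.5 GB cached on this 2 GB box, the setting would be roughly:

```
# Hypothetical postgresql.conf fragment. The 1.5 GB "cached" figure is an
# assumption for illustration; use the value top actually reports.
# 1500000 kB cached / 8 kB per block ≈ 187500 blocks
effective_cache_size = 187500
```

The original setting of 234881024 looks like a byte count (224 MB); interpreted as 8 kB blocks it would claim almost 2 TB of cache, which badly misleads the planner.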

> (everything else is default)
>
> The error message in the log is:
>
> Jun 10 17:20:04 cruisecontrol-rhea postgres[6856]: [6-1] ERROR:  53200:
> out of memory
> Jun 10 17:20:04 cruisecontrol-rhea postgres[6856]: [6-2] DETAIL:  Failed
> on request of size 208.
> Jun 10 17:20:04 cruisecontrol-rhea postgres[6856]: [6-3] LOCATION:
> AllocSetAlloc, aset.c:700

What is the system's overall memory usage at this time? Is there one
postgresql backend using all this memory?

> All failures are with the following query (again, it only fails every
> now and then). The query returns only a few results:
[snip]
> Can anyone see anything dangerous about this query?

The only thing that struck me was you had 11 tables in the join which
means the geqo query-planner will kick in (assuming default config
values). If you can reproduce it regularly, you could try increasing
geqo_threshold and see if that had any effect.
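One way to test this suggestion, sketched for a psql session (the query body is elided; this needs a live server, and 12 is just any value above the 11 tables in the join):

```
-- Disable genetic query optimization for this session by raising the
-- threshold above the number of tables in the problem query, then re-run it.
SET geqo_threshold = 12;
SELECT ... ;  -- the full failing query from the original post
```

If the error stops reproducing with the deterministic planner, that narrows the problem to a GEQO-chosen plan.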

> What's the best way to analyze this further?

1. Monitor memory usage when you run the query, see which process is
using what.
2. Get EXPLAIN ANALYSE for that query, there may be something unusual in
the plan.
3. Finally, might have to attach a debugger to a backend, but we'll need
to know what to look for first.
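Step 2 can be captured from psql; a sketch (again with the query body elided):

```
-- Produces the chosen plan with actual row counts and timings. Compare
-- estimated vs. actual rows at each join step: a large mismatch often
-- points at the node whose hash or sort balloons in memory.
EXPLAIN ANALYZE
SELECT ... ;  -- the full failing query from the original post
```

Note that on an intermittently failing query, EXPLAIN ANALYZE itself may occasionally hit the same error, which is itself useful evidence.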

--
   Richard Huxton
   Archonet Ltd

Re: out of memory error

From: Martin Marques
Date:
On Thu 10 Jun 2004 22:14, you wrote:
> Hi,
>
> we are using Postgres with a J2EE application (JBoss) and get
> intermittent "out of memory" errors on the Postgres database. We are
> running on a fairly large Linux server (Dual 3GHz, 2GB Ram) with the
> following parameters:
>
> shared_buffers = 8192

This is very low for a 2GB server, unless the machine is heavily
over-provisioned. Bear in mind that those 8192 buffers amount to about 64 MB,
while you have 2000 MB of physical memory.
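As a rough illustration of an alternative (these numbers are an assumption for a dedicated box, not a recommendation from the thread; the right value depends on the workload and Postgres version):

```
# Hypothetical sizing for a dedicated 2 GB server. shared_buffers is in
# 8 kB units, so 32768 buffers = 256 MB, about 12% of physical RAM.
shared_buffers = 32768
```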

Also check that query: it is huge and has too many joins.

--
 08:40:02 up 27 days, 18:51,  1 user,  load average: 1.16, 1.07, 0.75
-----------------------------------------------------------------
Martín Marqués        | select 'mmarques' || '@' || 'unl.edu.ar'
Centro de Telematica  |  DBA, Programmer, Administrator
             Universidad Nacional
                  del Litoral
-----------------------------------------------------------------