Discussion: Fwd: Postgres is using 100% CPU
Hi All,

I am using a Postgres DB on a Red Hat machine with 4 GB of RAM. As soon as it
starts inserting rows into the database, CPU usage hits 100%, and it comes
back down to normal after 40 minutes. I tried to perform some tuning on the
Postgres DB, but the result was the same. I am not a Postgres expert. We are
not even seeing this on all machines; only a few machines show this issue.
Any help on this would be appreciated.

Thanks,
Ashik
> As soon as it starts inserting rows into the database, CPU usage hits
> 100%, and it comes back down to normal after 40 minutes. I tried to
> perform some tuning on the Postgres DB, but the result was the same.

How many rows are you inserting at once? How (SQL INSERT? COPY? \copy? Using
a temp or unlogged table?)?

--
http://yves.zioup.com
gpg: 4096R/32B0F416
On 05/29/15 20:10, Ashik S L wrote:
> Hi All,
>
> I am using a Postgres DB on a Red Hat machine with 4 GB of RAM. As soon
> as it starts inserting rows into the database, CPU usage hits 100%, and
> it comes back down to normal after 40 minutes. I tried to perform some
> tuning on the Postgres DB, but the result was the same. I am not a
> Postgres expert. We are not even seeing this on all machines; only a
> few machines show this issue. Any help on this would be appreciated.

Ashik, before pointing you to this list, I asked for some basic information
that is needed when diagnosing issues like this: database size, Postgres
version, etc. We can't really help you without this info, because right now
we only know you're doing some inserts (while before you mentioned updates),
and it's slow.

Also, can you please provide info about the configuration and what changes
you made when tuning it?

Have you seen this?

https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
We are using Postgres version 8.4.17. The Postgres DB size is 900 MB, and we
are inserting 273 rows at once; each row is 60 bytes, so every insert writes
16,380 bytes of data.

I tried to make some config changes using the link above, but I did not see
any improvement. I made the following changes in postgresql.conf:

shared_buffers = 512MB (it was 32MB)
work_mem = 30MB
effective_cache_size = 512MB (I also tried 128MB and 256MB)

Please let me know any config changes that I can try out.

Thanks,
Ashik

On 5/30/15, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
> Ashik, before pointing you to this list, I asked for some basic
> information that is needed when diagnosing issues like this: database
> size, Postgres version, etc. We can't really help you without this
> info, because right now we only know you're doing some inserts (while
> before you mentioned updates), and it's slow.
>
> Also, can you please provide info about the configuration and what
> changes you made when tuning it?
>
> Have you seen this?
>
> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
>
> regards
>
> --
> Tomas Vondra                  http://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
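[Editor's note: for reference, the settings Ashik lists would look like this
in postgresql.conf itself, where `#` starts a comment (values taken from his
message; this is a sketch, not a recommendation):]

```
shared_buffers = 512MB          # was 32MB
work_mem = 30MB
effective_cache_size = 512MB    # also tried 128MB and 256MB
```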
Hi,

On 05/30/15 15:46, Ashik S L wrote:
> We are using Postgres version 8.4.17.

FYI, 8.4 has been unsupported for about a year now, so you should consider
upgrading to a newer release. Also, the newest version in that branch is
8.4.22, so with 8.4.17 you're missing roughly a year of patches.

> The Postgres DB size is 900 MB, and we are inserting 273 rows at once;
> each row is 60 bytes, so every insert writes 16,380 bytes of data.

So you insert 273 rows and it gets stuck for 40 minutes? That's really
strange, and I find it rather unlikely even with a badly misconfigured
instance. It should easily insert thousands of rows per second.

Can you elaborate more on the database structure, or at least the table(s)
you're inserting into? Are there any foreign keys (in either direction),
indexes, or triggers?

> I tried to make some config changes using the link above, but I did not
> see any improvement. I made the following changes in postgresql.conf:
> shared_buffers = 512MB (it was 32MB)
> work_mem = 30MB
> effective_cache_size = 512MB (I also tried 128MB and 256MB)
>
> Please let me know any config changes that I can try out.

I don't think this has anything to do with configuration. This seems like an
issue at the application level, or maybe a poorly designed schema.

You mentioned you have multiple machines, and only some of them are having
this issue. What are the differences between the machines? Are all the
machines using the same schema? I assume each has a different amount of data.

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
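[Editor's note: the catalog queries below are a sketch of how to answer
Tomas's questions about indexes and triggers; `mytable` is a hypothetical
table name to substitute:]

```sql
-- List indexes on the table (the pg_indexes view exists on 8.4):
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'mytable';

-- List user-defined triggers; 8.4 has no tgisinternal column, so filter
-- out constraint triggers via tgconstraint instead:
SELECT tgname
FROM pg_trigger
WHERE tgrelid = 'mytable'::regclass
  AND tgconstraint = 0;
```

In psql, `\d mytable` shows the same information, along with foreign keys in
both directions.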
On 05/30/2015 09:46 AM, Ashik S L wrote:
> We are using Postgres version 8.4.17.
> The Postgres DB size is 900 MB, and we are inserting 273 rows at once;
> each row is 60 bytes, so every insert writes 16,380 bytes of data.

Way back when, I was inserting a lot of rows of data (millions of rows), and
it was taking many hours on a machine with six 10,000 rpm Ultra/320 SCSI hard
drives and 8 GBytes of RAM. Each insert was a separate transaction.

When I bunched up lots of rows (thousands) into a single transaction, the
whole thing took less than an hour.

Is it possible that when you insert 273 rows at once, you are doing it as 273
transactions instead of one?

--
  .~.  Jean-David Beyer          Registered Linux User 85642.
  /V\  PGP-Key:166D840A 0C610C8B Registered Machine   1935521.
 /( )\ Shrewsbury, New Jersey    http://linuxcounter.net
 ^^-^^ 09:00:01 up 3 days, 9:57, 2 users, load average: 4.89, 4.90, 4.91
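[Editor's note: Jean-David's batching suggestion, sketched in SQL. The table
and values are hypothetical; the point is one COMMIT, and therefore one WAL
flush, instead of one per row:]

```sql
-- Autocommit: each statement is its own transaction, so each one waits
-- for its own WAL flush (fsync).
INSERT INTO mytable VALUES (1, 'row 1');
INSERT INTO mytable VALUES (2, 'row 2');
-- ... 271 more flushes ...

-- Batched: one transaction, one WAL flush at COMMIT.
BEGIN;
INSERT INTO mytable VALUES (1, 'row 1');
INSERT INTO mytable VALUES (2, 'row 2');
-- ... remaining rows ...
COMMIT;

-- On 8.4 a multi-row VALUES list also turns the batch into one statement:
INSERT INTO mytable VALUES (1, 'row 1'), (2, 'row 2'), (3, 'row 3');
```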
On 2015-05-31 07:04, Jean-David Beyer wrote:
> When I bunched up lots of rows (thousands) into a single transaction,
> the whole thing took less than an hour.

Or use COPY, \copy if possible, or a "temporary" unlogged table to copy from
later, etc.

> Is it possible that when you insert 273 rows at once, you are doing it
> as 273 transactions instead of one?

That's the thing: even on an old laptop with a slow IDE disk, 273 individual
inserts should not take more than a second.

--
http://yves.zioup.com
gpg: 4096R/32B0F416
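[Editor's note: Yves's COPY/staging suggestion as a psql sketch. The file and
table names are hypothetical, and since UNLOGGED tables only arrived in 9.1,
a TEMP table is the nearest stand-in on 8.4:]

```sql
-- Fastest bulk load: client-side copy from a CSV file.
\copy mytable FROM 'rows.csv' WITH CSV

-- Or stage into a scratch table first, then move the rows in a single
-- statement:
CREATE TEMP TABLE mytable_stage AS SELECT * FROM mytable LIMIT 0;
\copy mytable_stage FROM 'rows.csv' WITH CSV
INSERT INTO mytable SELECT * FROM mytable_stage;
```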
On Sun, May 31, 2015 at 7:53 PM, Yves Dorfsman <yves@zioup.com> wrote:
> On 2015-05-31 07:04, Jean-David Beyer wrote:
>> When I bunched up lots of rows (thousands) into a single transaction,
>> the whole thing took less than an hour.
>
> Or use COPY, \copy if possible, or a "temporary" unlogged table to copy
> from later, etc.
>
>> Is it possible that when you insert 273 rows at once, you are doing it
>> as 273 transactions instead of one?
>
> That's the thing: even on an old laptop with a slow IDE disk, 273
> individual inserts should not take more than a second.

We are inserting 273 rows at once, and it takes less than 1 second. But we
are repeatedly updating that batch of 273 rows, which is what takes high CPU:
it is like updating the 273 rows 2,000 to 3,000 times. We will be running
multiple instances of Postgres as well.
On Mon, Jun 1, 2015 at 12:38 AM, Ashik S L <ashiksl178@gmail.com> wrote:
> We are inserting 273 rows at once, and it takes less than 1 second. But
> we are repeatedly updating that batch of 273 rows, which is what takes
> high CPU: it is like updating the 273 rows 2,000 to 3,000 times. We
> will be running multiple instances of Postgres as well.

Something is wrong; this is not typical behavior. Let's rule out some obvious
things:

*) Do you have any triggers on the table?
*) Is your insert based on a complicated query?
*) Can you 'explain analyze' the statement you think is causing the CPU
   issues?
*) Is this truly a CPU problem, or iowait?
*) Are there other queries running that you are not aware of? Let's check the
   contents of 'pg_stat_activity' when the CPU issues are happening.

What operating system is this? If linux/unix, let's get a 'top' profile and
confirm (via pid); that will tell you whether your problems are CPU- or
storage-based.
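[Editor's note: Merlin's pg_stat_activity check, sketched for 8.4, where the
columns are named `procpid` and `current_query` (they became `pid` and
`query` in 9.2):]

```sql
-- Show every non-idle backend and what it is currently running:
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
ORDER BY query_start;
```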
On a default configuration with slow storage, 273 inserts/sec will be about
the maximum the hardware supports while syncing every transaction (note: this
is 'good', and there are many techniques to work around the problem). Try
flipping the 'synchronous_commit' setting in postgresql.conf to see if that
improves performance.

merlin
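[Editor's note: the setting Merlin refers to, as it would appear in
postgresql.conf (available since 8.3). With it off, a crash can lose the last
few hundred milliseconds of committed transactions, but commits no longer
wait on an fsync:]

```
synchronous_commit = off    # default is on; can also be changed per-session
                            # with: SET synchronous_commit = off;
```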
On Mon, Jun 1, 2015 at 7:20 AM, Merlin Moncure <mmoncure@gmail.com> wrote:
> Something is wrong; this is not typical behavior. Let's rule out some
> obvious things:

The OP has not convinced me there's an actual problem. How many inserts per
second / minute / hour can these machines handle? Are they handling them when
they're at 100% CPU?

100% CPU isn't automatically a bad thing. Every query is limited in some way.
If you've got some monstrous IO subsystem, it's not uncommon for the CPU to
be the big limiter. I'm not sure I've seen the OP use the word "slow"
anywhere, just 100% CPU.

I'd say we need metrics from iostat, vmstat, iotop, top, htop and performance
numbers before deciding there IS a problem.

--
To understand recursion, one must first understand recursion.