Thread: Postgres is taking a lot of CPU on our embedded hardware
Hi,
We have an embedded system with a Freescale m68k-based microcontroller and 256 MB RAM, running a customized version of Slackware 12 Linux.
It's relatively modest hardware.
We have installed Postgres 9.1 as our database engine. While testing, we found that Postgres operations take more than 70% of the CPU, and the average stays above 40%.
This is starving the other processes running on the system, a couple of which are critical.
The testing involves bulk-inserting records (approx. 10,000 records with between 10 and 20 columns).
Please let us know how we can reduce Postgres's CPU usage.
Thanks and Regards
Jayashankar
Larsen & Toubro Limited
www.larsentoubro.com
This Email may contain confidential or privileged information for the intended recipient(s). If you are not the intended recipient, please do not use or disseminate the information; notify the sender and delete it from your system.
On 27.01.2012 15:34, Jayashankar K B wrote:
> We are having an embedded system with a freescale m68k architecture based
> micro-controller, 256MB RAM running a customized version of Slackware 12 linux.
> It's a relatively modest Hardware.

Fascinating!

> We have installed postgres 9.1 as our database engine. While testing, we found
> that the Postgres operations take more than 70% of CPU and the average also
> stays above 40%.
> This is suffocating the various other processes running on the system.
> Couple of them are very critical ones.
> The testing involves inserting bulk number of records (approx. 10000 records
> having between 10 and 20 columns).
> Please let us know how we can reduce CPU usage for the postgres.

The first step would be to figure out where all the time is spent. Are there unnecessary indexes you could remove? Are you using INSERT statements or COPY? Sending the data in binary format instead of text might shave some cycles.

If you can run something like oprofile on the system, that would be helpful to pinpoint the expensive part.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
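As a sketch of the COPY suggestion above (the table and columns here are invented for illustration; the real schema isn't shown in the thread):

```sql
-- Hypothetical table standing in for the real one.
CREATE TABLE readings (ts timestamp, sensor_id integer, value text);

-- One COPY carries all rows in a single statement, so the server parses
-- SQL once instead of once per row. Fields are tab-separated by default.
COPY readings (ts, sensor_id, value) FROM STDIN;
2012-01-27 10:00:00	1	42.0
2012-01-27 10:00:01	2	43.5
\.
```

The same data can also be loaded server-side with COPY readings FROM '/path/to/file', or from the client with psql's \copy.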
On 1/27/2012 10:47 AM, Heikki Linnakangas wrote:
> On 27.01.2012 15:34, Jayashankar K B wrote:
>> We have installed postgres 9.1 as our database engine. While testing,
>> we found that the Postgres operations take more than 70% of CPU and
>> the average also stays above 40%.
>> [...]
> The first step would be to figure out where all the time is spent. Are
> there unnecessary indexes you could remove? Are you using INSERT
> statements or COPY? Sending the data in binary format instead of text
> might shave some cycles.

Do you have triggers on the table?
Hi Heikki: We are using a series of INSERT statements to insert the records into the database.
Sending data in binary is not an option, as the module that writes into the DB has been finalized. We do not have control over that.

Hi Andy: As of now, there are no triggers on the table.

Please let me know how we can proceed. On the net I couldn't get hold of any good example where Postgres has been used on a limited-hardware system. We are starting to wonder if Postgres was a good choice for us!

Thanks and Regards
Jay

-----Original Message-----
From: Andy Colson [mailto:andy@squeakycode.net]
Sent: Friday, January 27, 2012 10:45 PM
To: Heikki Linnakangas
Cc: Jayashankar K B; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgres is taking a lot of CPU on our embedded hardware.

[quoted text trimmed]
On 27.01.2012 20:30, Jayashankar K B wrote:
> We are using series of Insert statements to insert the records into database.
> Sending data in binary is not an option as the module that writes into DB has been finalized.
> We do not have control over that.

That certainly limits your options.

> Please let me know how we can proceed. On the net I couldn't get hold of any
> good example where Postgres has been used on limited Hardware system.

I don't think there's anything particular in postgres that would make it a poor choice on a small system, as far as CPU usage is concerned anyway. But inserting rows in a database is certainly slower than, say, writing them into a flat file.

At what rate are you doing the INSERTs? And how fast would they need to be? Remember that it's normal that while the INSERTs are running, postgres will use all the CPU it can to process them as fast as possible. So the question is, at what rate do they need to be processed to meet your target.

Lowering the process priority with 'nice' might help too, to give the other important processes priority over postgres.

The easiest way to track down where the time is spent would be to run a profiler, if that's possible on your platform.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
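As a quick illustration of the 'nice' idea (the postgres paths and process names below are assumptions, not taken from the thread):

```shell
# Run a command at a lower scheduling priority (niceness 10).
# 'nice' with no arguments prints the niceness the process inherited,
# so this demonstrates that the priority actually took effect.
nice -n 10 sh -c 'echo "running at niceness: $(nice)"'

# For already-running postgres backends, something like this would
# lower their priority (hypothetical invocation; adjust to your system):
#   renice 10 -p $(pgrep -d' ' -x postgres)
```

Niceness only helps when the CPU is contended; it does not reduce the total CPU the inserts consume, it just lets the critical processes win the scheduler.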
On Fri, Jan 27, 2012 at 4:56 PM, Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> wrote:
> I don't think there's anything particular in postgres that would make it a
> poor choice on a small system, as far as CPU usage is concerned anyway. But
> inserting rows in a database is certainly slower than, say, writing them
> into a flat file.

How did you install postgres? Did you build it? Which configure flags did you use? Exactly which m68k cpu is it? (it does matter)

For instance... wiki: "However, a significant difference is that the 68060 FPU is not pipelined and is therefore up to three times slower than the Pentium in floating point applications"

This means, if you don't configure the build correctly, you will get really sub-optimal code. Modern versions are optimized for modern cpus. Of utmost importance, I would imagine, is the binary format chosen for pg data types (floating types especially, if you use them).
On Fri, Jan 27, 2012 at 6:34 AM, Jayashankar K B <Jayashankar.KB@lnties.com> wrote:
> We are having an embedded system with a freescale m68k architecture based
> micro-controller, 256MB RAM running a customized version of Slackware 12
> linux.
> It's a relatively modest Hardware.
>
> We have installed postgres 9.1 as our database engine. While testing, we
> found that the Postgres operations take more than 70% of CPU and the average
> also stays above 40%.

Not to dissuade you from using pgsql, but have you tried other dbs like the much simpler SQLite?
Hi,

The number of inserts into the database would be a minimum of 3000 records in one operation. We do not have any stringent requirement on writing speed, so we could make do with a slower write speed as long as the CPU usage is not heavy... :)
We will try reducing the priority and check.

Our database file is located on a class 2 SD card, so it is understandable if there is a lot of IO activity and speed is low. But we are stumped by the amount of CPU Postgres is eating up. Any configuration settings we could check?

Given our hardware config, are the following settings OK?
Shared Buffers: 24MB
Effective Cache Size: 128MB

We are not experienced with database stuff, so some expert suggestions would be helpful :)

Thanks and Regards
Jayashankar

-----Original Message-----
From: pgsql-performance-owner@postgresql.org [mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Heikki Linnakangas
Sent: Saturday, January 28, 2012 1:27 AM
To: Jayashankar K B
Cc: Andy Colson; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgres is taking a lot of CPU on our embedded hardware.

[quoted text trimmed]
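For reference, the two settings quoted in that message would look like this in postgresql.conf (effective_cache_size is only a planner hint and allocates no memory; the comments are illustrative, not tuning advice):

```
shared_buffers = 24MB          # roughly 10% of the 256MB total RAM
effective_cache_size = 128MB   # planner's estimate of OS cache; no allocation
```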
Hi,

I downloaded the source code, cross-compiled it into a relocatable package, and copied it to the device. LTIB was the cross-compile toolchain that was used. The controller is a ColdFire MCF54418 CPU.

Here are the configure options I used:

./configure CC=/opt/freescale/usr/local/gcc-4.4.54-eglibc-2.10.54/m68k-linux/bin/m68k-linux-gnu-gcc CFLAGS='-fmessage-length=0 -fpack-struct -mcpu=54418 -msoft-float' --host=i686-pc-linux-gnu --target=m68k-linux-gnu --prefix=/home/jayashankar/databases/Postgre_8.4.9_relocatable/

Any other special flags that could be of help to us?

Thanks and Regards
Jayashankar

-----Original Message-----
From: Claudio Freire [mailto:klaussfreire@gmail.com]
Sent: Saturday, January 28, 2012 7:54 AM
To: Heikki Linnakangas
Cc: Jayashankar K B; Andy Colson; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgres is taking a lot of CPU on our embedded hardware.

[quoted text trimmed]
Hello,

One thing you may look at are the indexes and constraints on the relations. If you have multiple constraints or indexes, they may add CPU time to each insert. You may try to drop the indexes, do a bulk load, and then recreate the indexes. This may (or may not) reduce the total time / CPU, but it could allow you to push a bulk insert to a specific time. It would be good to use "COPY", or at least give it a test to see if it is worth it.

If removing the indexes does significantly help with the inserts, then you may also try a different index type (B-tree, hash, GiST). It may be possible that a specific index type does not work efficiently on that architecture...
http://www.postgresql.org/docs/9.1/static/sql-createindex.html

Deron

On Sat, Jan 28, 2012 at 10:21 AM, Jayashankar K B <Jayashankar.KB@lnties.com> wrote:
> I downloaded the source code and cross compiled it into a relocatable package and copied it to the device.
> LTIB was the cross-compile tool chain that was used. Controller is coldfire MCF54418 CPU.
> [quoted text trimmed]
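A sketch of the drop-load-recreate approach described above (the table and index names are invented for illustration):

```sql
-- Hypothetical names, standing in for the real schema.
DROP INDEX IF EXISTS readings_ts_idx;

-- Bulk load inside one transaction: one commit instead of thousands.
-- (Server-side COPY FROM a file requires superuser; \copy works from psql.)
BEGIN;
COPY readings FROM '/tmp/readings.dat';
COMMIT;

-- Rebuilding the index once afterwards is usually cheaper than
-- maintaining it on every inserted row.
CREATE INDEX readings_ts_idx ON readings (ts);
```

Whether this wins depends on how many indexes exist and how large the table already is, so it is worth measuring on the actual device.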
On Fri, Jan 27, 2012 at 10:30 AM, Jayashankar K B <Jayashankar.KB@lnties.com> wrote:
> Hi Heikki Linnakangas: We are using series of Insert statements to insert the records into database.
> Sending data in binary is not an option as the module that writes into DB has been finalized.
> We do not have control over that.
>
> Hi Andy: As of now, there are no triggers in the table.

What about indexes?

Cheers,
Jeff
On Sat, Jan 28, 2012 at 2:21 PM, Jayashankar K B <Jayashankar.KB@lnties.com> wrote:
> ./configure CC=/opt/freescale/usr/local/gcc-4.4.54-eglibc-2.10.54/m68k-linux/bin/m68k-linux-gnu-gcc CFLAGS='-fmessage-length=0 -fpack-struct -mcpu=54418 -msoft-float' --host=i686-pc-linux-gnu --target=m68k-linux-gnu --prefix=/home/jayashankar/databases/Postgre_8.4.9_relocatable/
>
> Any other special flags that could be of help to us?

Well, it's a tough issue, because you'll have to test every change to see if it really makes a difference or not. But you might try --disable-float8-byval and --disable-spinlocks.

On the compiler front (CFLAGS), you should have -mtune=54418 (or perhaps -mtune=cfv4) (-march and -mcpu don't imply -mtune), and even perhaps -O2 or -O3.

I also see you're specifying -msoft-float. So that's probably your problem: any floating point arithmetic you're doing is killing you. But without access to the software in order to change the data types, you're out of luck in that department.
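Pulling those suggestions together, a configure invocation along these lines might be worth testing (unverified on this toolchain, so treat every flag as an assumption to check; note that for cross-compiling, autoconf convention is that --host names the machine the binaries will run on, with --build for the build machine; -fpack-struct is dropped here because packing every struct changes the ABI and can misalign data the server expects naturally aligned):

```
./configure \
  CC=/opt/freescale/usr/local/gcc-4.4.54-eglibc-2.10.54/m68k-linux/bin/m68k-linux-gnu-gcc \
  CFLAGS='-fmessage-length=0 -mcpu=54418 -mtune=54418 -msoft-float -O2' \
  --build=i686-pc-linux-gnu --host=m68k-linux-gnu \
  --disable-float8-byval --disable-spinlocks \
  --prefix=/home/jayashankar/databases/Postgre_8.4.9_relocatable/
```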
If you can batch the inserts into groups (of say 10 to 100) it might help performance - i.e. instead of:

INSERT INTO table VALUES(...);
INSERT INTO table VALUES(...);
...
INSERT INTO table VALUES(...);

do:

INSERT INTO table VALUES(...),(...),...,(...);

This reduces the actual number of INSERT calls, which can be quite a win.

Regards
Mark

On 28/01/12 07:30, Jayashankar K B wrote:
> Hi Heikki Linnakangas: We are using series of Insert statements to insert the records into database.
> Sending data in binary is not an option as the module that writes into DB has been finalized.
> We do not have control over that.
Greetings,

On Sat, Jan 28, 2012 at 12:51 PM, Jayashankar K B <Jayashankar.KB@lnties.com> wrote:
> I downloaded the source code and cross compiled it into a relocatable package and copied it to the device.
> LTIB was the cross-compile tool chain that was used. Controller is coldfire MCF54418 CPU.
> Here is the configure options I used.

Ok, no floating point, and just ~250MHz... small. Anyway, let's not talk about hardware options, because you already have it.

About the kernel: I'm not sure if you have the option on this arch, but did you enable the "PREEMPT" kernel config option? (on menuconfig: "Preemptible Kernel (Low-Latency Desktop)") Or is that an RT kernel?

With such a small CPU, almost any DB engine you put there will be CPU-hungry. But if your CPU usage is under 95%, you know you still have some CPU to spare; on the other hand, if you are at 100% CPU, you have to evaluate the required response time and set priorities accordingly.

However, I have found that even with processes at nice level 19 using 100% CPU, other nice level 0 processes will slow down unless I set the PREEMPT option in the kernel compile options (another issue is IO wait times, which at least in my application using CF can get quite high).

Sincerely,
Ildefonso Camargo
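For reference, the option described above is CONFIG_PREEMPT; the relevant fragment of the kernel .config would look like this (assuming the preemption-model menu is available for this architecture):

```
# Preemption Model -> "Preemptible Kernel (Low-Latency Desktop)"
# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y
```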
On Sat, Jan 28, 2012 at 19:11, Jayashankar K B <Jayashankar.KB@lnties.com> wrote:
> But we are stumped by the amount of CPU Postgres is eating up.

You still haven't told us *how* slow it actually is and how fast you need it to be. What's your database layout like (tables, columns, indexes, foreign keys)? What do the queries look like that you have problems with?

> Our database file is located on a class 2 SD Card. So it is understandable if there is lot of IO activity and speed is less.

Beware that most SD cards are unfit for database write workloads, since they only perform very basic wear levelling (in my experience anyway -- things might have changed, but I'm doubtful). It's a matter of time before you wear out some frequently-written blocks and they start returning I/O errors or corrupted data.

If you can spare the disk space, increase checkpoint_segments, as that means at least WAL writes are spread out over a larger number of blocks. (But heap/index writes are still a problem.)

They can also corrupt your data if you lose power in the middle of a write -- since they use much larger physical block sizes than regular hard drives and can lose a whole block, which file systems and Postgres are not designed to handle. They also tend not to respect the flush/barrier requests that are required for database consistency. You should certainly do such power-loss tests before you release your product.

I've built an embedded platform with a database. Due to disk corruptions, in the end I opted for mounting all file systems read-only and keeping the database only in RAM.

> Any configuration settings we could check up?

For one, you should reduce max_connections to a more reasonable number -- I'd guess you don't need more than 5 or 10 concurrent connections. Also set synchronous_commit=off; this means that you may lose some committed transactions after power loss, but I think with SD cards all bets are off anyway.

Regards,
Marti
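The settings suggested above, written as a postgresql.conf fragment (the numbers are illustrative starting points rather than tuned values):

```
max_connections = 10        # embedded box; few concurrent clients needed
synchronous_commit = off    # recent commits may be lost on power failure
checkpoint_segments = 16    # spread WAL writes over more blocks (pre-9.5 setting)
```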