Discussion: About the relation between fragmentation of file and VACUUM

About the relation between fragmentation of file and VACUUM

From: "Tatsumi Abe"
Date:
My question is about the relation between file fragmentation and VACUUM
performance.

<Environment>
OS:RedHat Enterprise Linux AS Release 3(Taroon Update 6)
    Kernel 2.4.21-37.ELsmp on an i686
    Filesystem Type ext3
    Filesystem features:  has_journal filetype needs_recovery sparse_super large_file
CPU:Intel(R) Xeon(TM) CPU 2.80GHz stepping 01
Memory:2.0GB
HDD:80GB(S-ATA)
     SATA max UDMA/133
PostgreSQL:7.3.8

<DB Environment>
1. There are approx. 3500 tables in the DB.
2. Each table has an index.
3. Approx. 10,000 records are inserted into 1000 tables every two minutes,
   so in total 7,200,000 records are inserted into those 1000 tables per day.
4. Table data are kept for 7 days; data older than 7 days are deleted,
   so at most 50,400,000 records can exist in the DB.
5. VACUUM is executed against the DB as follows:
    Six times a day, i.e. every 4 hours, a VACUUM ANALYZE is started,
    and one of those six runs is a VACUUM FULL ANALYZE.
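
For reference, the schedule above corresponds roughly to the following
crontab sketch (the database name "mydb" and the exact times are
illustrative placeholders):

    # Plain VACUUM ANALYZE five times a day...
    0 4,8,12,16,20 * * *  vacuumdb --analyze mydb
    # ...and one VACUUM FULL ANALYZE per day, for six runs in total
    0 0 * * *             vacuumdb --full --analyze mydb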

At the beginning the volume of data increases linearly, because records
are only added during the first seven days. After that, data older than
seven days are deleted, so the volume of data stops growing after the
seventh day.

When I measured insert performance in the environment described above,
writing 10,000 rows took six minutes by the fourth or fifth day after the
measurement began. Looking for the bottleneck with the iostat command, I
found that disk I/O was the problem: %iowait was almost 100% at that time.
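
For reference, the %iowait figure comes from an iostat run along these
lines (the 5-second sampling interval is arbitrary):

    # Extended per-device statistics plus the avg-cpu line (which
    # includes %iowait), sampled every 5 seconds.
    iostat -x 5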

On the very first day the processing time of VACUUM is not a problem, but
as the days progress it keeps increasing. I then examined the fragmentation
of the database area (pgsql/data/base) using the following tool.

Disk Allocation Viewer
http://sourceforge.net/projects/davtools/

The fragmentation rate was 28% before defragmenting.

When the database area (pgsql/data/base) was copied out, deleted, and
copied back in, i.e. defragmented, the processing time of VACUUM dropped
to 20 minutes and inserting data became fast again.
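
That defrag step was essentially the following (shown for completeness
only, since it needs the server stopped; the paths are examples, not the
actual layout):

    # Offline copy-based defrag: rewriting the files sequentially lets
    # the filesystem reallocate them in larger contiguous extents.
    pg_ctl stop -D /var/lib/pgsql/data
    cp -a /var/lib/pgsql/data /backup/pgdata.copy    # copy out
    rm -rf /var/lib/pgsql/data                       # remove fragmented files
    cp -a /backup/pgdata.copy /var/lib/pgsql/data    # copy back in
    pg_ctl start -D /var/lib/pgsql/data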

Moreover, right after that defrag the processing time of VACUUM was 20
minutes, but after another 7 days it took 40 minutes. When I checked the
fragmentation rate with the tool again it was 11%, so the fragmentation
clearly builds up again.

However, in my current environment I can't stop PostgreSQL to
defragment.

Could anyone advise a solution for this fragmentation problem that does
not require stopping PostgreSQL? For example, any of the following, or
anything else:

- Tuning postgresql.conf
- Upgrading to the latest version of PostgreSQL (8.1.0) and its VACUUM

Thanks in advance.
Abe

Re: About the relation between fragmentation of file and

From: Richard Huxton
Date:
Tatsumi Abe wrote:
> My question is about the relation between file fragmentation and VACUUM
> performance.
>
> <Environment>
> OS:RedHat Enterprise Linux AS Release 3(Taroon Update 6)
>     Kernel 2.4.21-37.ELsmp on an i686
>     Filesystem Type ext3
>     Filesystem features:  has_journal filetype needs_recovery sparse_super large_file
> CPU:Intel(R) Xeon(TM) CPU 2.80GHz stepping 01
> Memory:2.0GB
> HDD:80GB(S-ATA)
>      SATA max UDMA/133
> PostgreSQL:7.3.8
>
> <DB Environment>
> 1. There are approx. 3500 tables in the DB.

> When I measured insert performance in the environment described above,
> writing 10,000 rows took six minutes by the fourth or fifth day after the
> measurement began. Looking for the bottleneck with the iostat command, I
> found that disk I/O was the problem: %iowait was almost 100% at that time.
>
> On the very first day the processing time of VACUUM is not a problem, but
> as the days progress it keeps increasing. I then examined the fragmentation
> of the database area (pgsql/data/base) using the following tool.
>
> Disk Allocation Viewer
> http://sourceforge.net/projects/davtools/
>
> The fragmentation rate was 28% before defragmenting.

I'd guess the root of your problem is the number of tables (3500), which
if each has one index represents at least 7000 files. That means a lot
of your I/O time will probably be spent moving the disk heads between
the different files.

You say you can't stop the server, so there's no point in thinking about
a quick hardware upgrade to help you. Also a version-upgrade is not
do-able for you.

I can only think of two other options:
1. Change the database schema to reduce the number of tables involved.
I'm assuming that of the 3500 tables most hold the same data but for
different clients (or something similar). This might not be practical
either.

2. Re-order how you access the database. ANALYSE the updated tables
regularly, but only VACUUM them after deletions. Group your inserts so
that all the inserts for table1 go together, then all the inserts for
table2 go together and so on. This should help with the fragmentation by
making sure the files get extended in larger chunks.
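
A rough sketch of what that could look like from the loading side, whether
you batch the INSERTs or use COPY (the database name, the "ts" column and
the batch file are made up here, just to show the pattern):

    # Load everything destined for table1 in one go (COPY extends the
    # file in a few large chunks), then refresh planner stats only.
    psql mydb -c "COPY table1 FROM '/tmp/table1.batch'"
    psql mydb -c "ANALYZE table1"

    # Plain VACUUM only after the daily deletion of rows older than 7 days.
    psql mydb -c "DELETE FROM table1 WHERE ts < now() - interval '7 days'"
    psql mydb -c "VACUUM ANALYZE table1"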

Are you sure it's not possible to spend 15 mins offline to solve this?
--
  Richard Huxton
  Archonet Ltd

Re: About the relation between fragmentation of file and

From: Michael Stone
Date:
On Thu, Dec 01, 2005 at 02:50:56PM +0900, Tatsumi Abe wrote:
>Could anyone advise a solution for this fragmentation problem that does
>not require stopping PostgreSQL?

Stop doing VACUUM FULL so often. If your table size is constant anyway
you're just wasting time by compacting the table and shrinking it, and
encouraging fragmentation as each table file grows then shrinks a little
bit each day.
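
In terms of the schedule from the original post, that roughly means
(database name is a placeholder):

    # Keep the frequent, non-exclusive maintenance...
    vacuumdb --analyze mydb            # plain VACUUM ANALYZE, every 4 hours
    # ...and drop, or drastically reduce, the daily full pass:
    # vacuumdb --full --analyze mydb   # VACUUM FULL compacts and shrinks files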

Mike Stone

Re: About the relation between fragmentation of file and VACUUM

From: Bill McGonigle
Date:
On Dec 1, 2005, at 00:50, Tatsumi Abe wrote:

> However, in my current environment I can't stop PostgreSQL to
> defragment.
>
> Could anyone advise a solution for this fragmentation problem that does
> not require stopping PostgreSQL?

This is somewhat of an aside and intended just as a helpful suggestion
since I've been in this spot before: if you have this kind of uptime
requirement the first project to work on is getting the environment to
the point where you can take out at least one database server at a time
for maintenance.  You're going to be forced to do this sooner or later
- whether by disk failure, software error (Pg or OS), user error
(restore from backup) or security issues (must patch fixes).

So disk fragmentation is a great thing to worry about at some point,
but IMHO you've got your neck under the guillotine and you're worrying
about your cuticles.

I've heard the arguments before, usually around budget, and if the
company can't spend any money but needs blood-from-stone performance
tweaks, somebody isn't doing the math right (I'm assuming this isn't
running on a satellite).  Plus, your blood pressure will go down when
things are more resilient.  I've tried the superhero thing before and
it's just not worth it.

-Bill
-----
Bill McGonigle, Owner           Work: 603.448.4440
BFC Computing, LLC              Home: 603.448.1668
bill@bfccomputing.com           Mobile: 603.252.2606
http://www.bfccomputing.com/    Pager: 603.442.1833
Jabber: flowerpt@gmail.com      Text: bill+text@bfccomputing.com
Blog: http://blog.bfccomputing.com/


Re: About the relation between fragmentation of file and

From: David Lang
Date:
On Thu, 1 Dec 2005, Richard Huxton wrote:

> Tatsumi Abe wrote:
>> My question is about the relation between file fragmentation and VACUUM
>> performance.
>>
>> <Environment>
>> OS:RedHat Enterprise Linux AS Release 3(Taroon Update 6)
>>     Kernel 2.4.21-37.ELsmp on an i686
>>     Filesystem Type ext3
>>     Filesystem features:  has_journal filetype needs_recovery sparse_super large_file

Try different filesystems; ext2/3 do a very poor job when you have lots of
files in a directory (and 7000+ files is a lot). You can also try mounting
the filesystem with noatime,nodiratime to reduce the seeks when reading,
and try mounting it with oldalloc (which changes how the files are
arranged on disk when writing and extending them). I've seen drastic
speed differences between ext2 and ext3 based on this option (ext2
defaults to oldalloc, ext3 defaults to the orlov allocator, which is
faster in many cases).
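
For illustration, the mount options would end up looking something like
this (device and mount point are placeholders for wherever the data
directory actually lives):

    # /etc/fstab entry for the PostgreSQL data filesystem
    #   /dev/sda2  /var/lib/pgsql  ext3  noatime,nodiratime,oldalloc  1 2

    # noatime/nodiratime can also be applied to a mounted filesystem:
    mount -o remount,noatime,nodiratime /var/lib/pgsql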

>> CPU:Intel(R) Xeon(TM) CPU 2.80GHz stepping 01
>> Memory:2.0GB
>> HDD:80GB(S-ATA)
>>      SATA max UDMA/133
>> PostgreSQL:7.3.8
>>
>> <DB Environment>
>> 1. There are approx. 3500 tables in the DB.
>
>> When I measured insert performance in the environment described above,
>> writing 10,000 rows took six minutes by the fourth or fifth day after the
>> measurement began. Looking for the bottleneck with the iostat command, I
>> found that disk I/O was the problem: %iowait was almost 100% at that time.
>>
>> On the very first day the processing time of VACUUM is not a problem, but
>> as the days progress it keeps increasing. I then examined the fragmentation
>> of the database area (pgsql/data/base) using the following tool.
>>
>> Disk Allocation Viewer
>> http://sourceforge.net/projects/davtools/
>>
>> The fragmentation rate was 28% before defragmenting.
>
> I'd guess the root of your problem is the number of tables (3500), which
> if each has one index represents at least 7000 files. That means a lot
> of your I/O time will probably be spent moving the disk heads between
> the different files.

Depending on the size of the tables it can actually be a lot worse than
this (remember Postgres splits tables into fixed-size segments).

When Postgres adds data it will eventually spill over into additional
files. When you do a vacuum, does it rewrite the tables into a smaller
number of files, or just rewrite the individual files (making each of them
smaller, but keeping the same number of files)?

Speaking of this, the choice of segment size is a compromise between the
time needed to seek within an individual file and the number of files that
are created. Is there an easy way to tinker with this? (I am sure the
default is not right for all filesystems; filesystem handling of large
and/or many files differs drastically.)
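
(As far as I know the segment size is a compile-time constant, RELSEG_SIZE,
so tinkering with it means something along these lines; a sketch only, and
the exact file location depends on the source version:

    # Not tested: locate and edit the RELSEG_SIZE constant in the source
    # tree (it is expressed in 8kB blocks), then rebuild; an existing
    # cluster would need a dump/reload afterwards.
    grep -rn "RELSEG_SIZE" src/include/
    ./configure && make && make install

so there is no easy way to change it at runtime.)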

> You say you can't stop the server, so there's no point in thinking about
> a quick hardware upgrade to help you. Also a version-upgrade is not
> do-able for you.

There's a difference between stopping the server once for an upgrade
(hardware or software) and having to stop it every few days to defrag
things forever after.

David Lang

> I can only think of two other options:
> 1. Change the database schema to reduce the number of tables involved.
> I'm assuming that of the 3500 tables most hold the same data but for
> different clients (or something similar). This might not be practical
> either.
>
> 2. Re-order how you access the database. ANALYSE the updated tables
> regularly, but only VACUUM them after deletions. Group your inserts so
> that all the inserts for table1 go together, then all the inserts for
> table2 go together and so on. This should help with the fragmentation by
> making sure the files get extended in larger chunks.
>
> Are you sure it's not possible to spend 15 mins offline to solve this?
> --
>  Richard Huxton
>  Archonet Ltd