Discussion: how to slow down parts of Pg


how to slow down parts of Pg

From
Kevin Brannen
Date:

I have an unusual need:  I need Pg to slow down. I know, we all want our DB to go faster, but in this case its speed is working against me in one area.

 

We have systems that are geo-redundant for HA, with the redundancy being handled by DRBD to keep the disks in sync, which it does at the block level. For normal operations, it actually works out fairly well. That said, we recognize that what we really need to do is one of the forms of streaming (ch 26 of the manual) which I believe would help this problem a lot if not solve it -- but we don't have the time to do that at the moment. I plan and hope to get there by the end of the year. The part that hurts so bad is when we do maintenance operations that are DB heavy, like deleting really old records out of archives (weekly), moving older records from current tables to archive tables plus an analyze (every night), running pg_backup (every night), other archiving (weekly), and vacuum full to remove bloat (once a quarter). All of this generates a lot of disk writes, to state the obvious.

 

The local server can handle it all just fine, but the network can't handle it as it tries to sync to the other server. Sometimes we can add network bandwidth, many times we can't as it depends on others. To borrow a phrase from the current times, we need to flatten the curve. 😊

 

A few parts of our maintenance process I've tamed by doing "nice -20" on the process (e.g. log rotation); but I can't really do that for Pg because the work gets handed off to a background process that's not a direct child process … and I don't want to slow the DB as a whole because other work is going on (like handling incoming data).

 

Part of the process I've slowed down by doing the work in chunks of 10K rows at a time with a pause between each chunk to allow the network to catch up (instead of an entire table in 1 statement). This sort of works, but some work/SQL is somewhere between hard and next-to-impossible to break up like that. That also produces some hard spikes, but that's better than the alternative (next sentence). Still, large portions of the process are hard to control and just punch the network to full capacity and hold it there for far too long.
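For illustration, a minimal sketch of that chunk-and-pause pattern; the table and column names are hypothetical, and COMMIT inside a DO block requires PostgreSQL 11 or later:

-- Hypothetical names; delete in 10K-row chunks, pausing so replication can drain.
DO $$
DECLARE
    rows_deleted bigint;
BEGIN
    LOOP
        DELETE FROM archive_events
        WHERE id IN (SELECT id
                     FROM archive_events
                     WHERE created_at < now() - interval '1 year'
                     LIMIT 10000);
        GET DIAGNOSTICS rows_deleted = ROW_COUNT;
        EXIT WHEN rows_deleted = 0;
        COMMIT;               -- end the chunk's transaction
        PERFORM pg_sleep(5);  -- let DRBD / the network catch up
    END LOOP;
END $$;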

 

So, do I have any other options to help slow down some of the Pg operations? Or maybe some other short-term mitigations we can do with Pg configurations? Or is this a case where we've already done all we can do and the only answer is move to WAL streaming as fast as possible?

 

If it matters, this is being run on Linux servers. Pg 12.2 is in final testing and will be rolled out to production soon -- so feel free to offer suggestions that only apply to 12.x.

 

Thanks,

Kevin


Re: how to slow down parts of Pg

From
Michael Lewis
Date:
You say 12.2 is in testing but what are you using now? Have you tuned configs much? Would you be able to implement partitioning such that your deletes become truncates or simply a detaching of the old partition? Generally if you are doing a vacuum full, you perhaps need to tune autovacuum to be more aggressive. Consider pg_repack at least to avoid taking an exclusive lock for the entire duration. If partitioning is not an option, could you delete old records hourly rather than daily?
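A rough sketch of the partitioning idea (PostgreSQL declarative partitioning, available in the 12.x being tested; the table and partition names below are made up), where retiring old data becomes a metadata operation instead of a big DELETE:

-- Hypothetical range-partitioned table
CREATE TABLE event_log (
    created_at timestamptz NOT NULL,
    payload    text
) PARTITION BY RANGE (created_at);

CREATE TABLE event_log_2020_03 PARTITION OF event_log
    FOR VALUES FROM ('2020-03-01') TO ('2020-04-01');
CREATE TABLE event_log_2020_04 PARTITION OF event_log
    FOR VALUES FROM ('2020-04-01') TO ('2020-05-01');

-- Instead of DELETE FROM event_log WHERE created_at < '2020-04-01':
ALTER TABLE event_log DETACH PARTITION event_log_2020_03;
DROP TABLE event_log_2020_03;  -- near-instant compared to a row-by-row DELETE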

Re: how to slow down parts of Pg

From
Michael Loftis
Date:
drbdsetup allows you to control the sync rates.

--

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler

RE: how to slow down parts of Pg

From
Kevin Brannen
Date:

From: Michael Lewis <mlewis@entrata.com>

> You say 12.2 is in testing but what are you using now? Have you tuned configs much? Would you be able to implement partitioning such that your deletes become truncates or simply a detaching of the old partition? Generally if you are doing a vacuum full, you perhaps need to tune autovacuum to be more aggressive. Consider pg_repack at least to avoid taking an exclusive lock for the entire duration. If partitioning is not an option, could you delete old records hourly rather than daily?

 

Good questions, it's always hard to know how much to include. 😊

 

Current production is 9.6, so things like partitioning aren't available there, but will be in the future.

 

We've tuned the configs some and don't have any issues with Pg at the moment. This does need to be relooked at; I have a few notes of things to revisit as our hardware changes.

 

Partitioning our larger tables by time is on the ToDo list. I hadn't thought about that helping with maintenance, so thanks for bringing that up. I'll increase the priority of this work as I can see this helping with the archiving part.

 

I don't particularly like doing the vacuum full, but when it will release 20-50% of disk space for a large table, then it's something we live with. As I understand it, a normal vacuum won't release all the old pages that a "full" does, hence why we have to do that. It's painful enough that I've restricted it to once a quarter; I'd do it only once a year if I thought I could get away with it. Still, this is something I'll put on the list to go research with practical trials. I don't think the lock for the vacuuming hurts us, but I've heard of pg_repack and I'll look into that too.
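(If it helps to quantify that before resorting to a full rewrite, the pgstattuple contrib extension can report how much of a table is dead or free space; the table name below is only a placeholder.)

CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT pg_size_pretty(pg_relation_size('archive_events')) AS on_disk,
       dead_tuple_percent,
       free_percent
FROM pgstattuple('archive_events');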

 

I have considered (like they say with vacuuming) that more often might be better. Of course that would mean doing some of this during the day when the DB is busier. Hmm, maybe 1000/minute wouldn't hurt and that would shorten the nightly run significantly. I may have to try that and see if it just adds to background noise or causes problems.

 

Thanks!

Kevin


Re: how to slow down parts of Pg

From
Michael Loftis
Date:


On Tue, Apr 21, 2020 at 15:05 Kevin Brannen <KBrannen@efji.com> wrote:

From: Michael Lewis <mlewis@entrata.com>

> You say 12.2 is in testing but what are you using now? Have you tuned configs much? Would you be able to implement partitioning such that your deletes become truncates or simply a detaching of the old partition? Generally if you are doing a vacuum full, you perhaps need to tune autovacuum to be more aggressive. Consider pg_repack at least to avoid taking an exclusive lock for the entire duration. If partitioning is not an option, could you delete old records hourly rather than daily?

 

Good questions, it's always hard to know how much to include. 😊

 

Current production is 9.6, so things like partitioning aren't available there, but will be in the future.

 

We've tuned the configs some and don't having any issues with Pg at the moment. This does need to be relooked at; I have a few notes of things to revisit as our hardware changes.

 

Partitioning our larger tables by time is on the ToDo list. I hadn't thought about that helping with maintenance, so thanks for bringing that up. I'll increase the priority of this work as I can see this helping with the archiving part.

 

I don't particularly like doing the vacuum full, but when it will release 20-50% of disk space for a large table, then it's something we live with. As I understand, a normal vacuum won't release all the old pages that a "full" does, hence why we have to do that. It's painful enough I've restricted it to once quarter; I'd do it only once a year if I thought I could get away with it. Still this is something I'll put on the list to go research with practical trials. I don't think the lock for the vacuuming hurts us, but I've heard of pg_repack and I'll look into that too.



Why do vacuum full at all? A functional autovacuum will return the free pages to be reused. You just won't see the reduction in disk usage at the OS level. Since the pages are clearly going to be used, it doesn't really make sense to do a vacuum full at all. Let autovacuum do its job, or if that's not keeping up, run a normal vacuum without the full. The on-disk sizes will stabilize and you'll not be doing a ton of extra I/O to rewrite tables.

 

I have considered (like they say with vacuuming) that more often might be better. Of course that would mean doing some of this during the day when the DB is busier. Hmm, maybe 1000/minute wouldn't hurt and that would shorten the nightly run significantly. I may have to try that and see if it just adds to background noise or causes problems.

 

Thanks!

Kevin

--

"Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds."
-- Samuel Butler

RE: how to slow down parts of Pg

From
Kevin Brannen
Date:

From: Michael Loftis <mloftis@wgops.com>

 

>>From: Kevin Brannen

>>I have an unusual need:  I need Pg to slow down. I know, we all want our DB to go faster, but in this case it's speed is working against me in 1 area.

>>

>>We have systems that are geo-redundant for HA, with the redundancy being handled by DRBD to keep the disks in sync...

 

> drbdsetup allows you to control the sync rates.

 

I was hoping not to have to do that, but the more I think about this I'm realizing that it won't hurt because the network cap is effectively limiting me anyway. :)

 

I can & will do this, maybe at 90% of our bandwidth, so thanks for the suggestion. Still, this is sort of a last resort thing as I believe controlling the DB to be the ultimate need.

 

Thanks!

Kevin


RE: how to slow down parts of Pg

From
Kevin Brannen
Date:

From: Michael Loftis <mloftis@wgops.com>

 

>>From: Kevin Brannen

>> I don't particularly like doing the vacuum full, but when it will release 20-50% of disk space for a large table, then it's something we live with. As I understand, a normal vacuum won't release all the old pages that a "full" does, hence why we have to do that. It's painful enough I've restricted it to once quarter; I'd do it only once a year if I thought I could get away with it. Still this is something I'll put on the list to go research with practical trials. I don't think the lock for the vacuuming hurts us, but I've heard of pg_repack and I'll look into that too.

 

 

> Why do vacuum full at all? A functional autovacuum will return the free pages to be reused. You just won't see the reduction in disk usage at the OS level. Since the pages are clearly going to be used, it doesn't really make sense to do a vacuum full at all. Let autovacuum do its job, or if that's not keeping up, run a normal vacuum without the full. The on-disk sizes will stabilize and you'll not be doing a ton of extra I/O to rewrite tables.

 

 

Sometimes I need the disk space back. It also makes me feel better. (OK, this may not be a good reason but there is a hint of truth in this.) What this probably means is that I need to get a better understanding of vacuuming.

 

Thanks!

Kevin


Re: how to slow down parts of Pg

From
Virendra Kumar
Date:
Autovacuum does take care of dead tuples and returns that space within the table's allocated size, where it can be re-used by fresh incoming rows or updates.

Index bloat, however, is still not taken care of by the autovacuum process. You should use pg_repack to rebuild the indexes. Keep in mind that pg_repack requires double the space of the indexes, since two copies of each index exist during the rebuild.
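(For reference, per-index sizes on a table can be checked like this; the table name is just a placeholder.)

SELECT indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE relname = 'big_log_table';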

Regards,
Virendra Kumar


Re: how to slow down parts of Pg

From
"David G. Johnston"
Date:
On Tue, Apr 21, 2020 at 2:25 PM Kevin Brannen <KBrannen@efji.com> wrote:

Sometimes I need the disk space back. It also makes me feel better. (OK, this may not a good reason but there is a hint of truth in this.) What this probably means is that I need to get a better understanding of vacuuming.


Imagine you have an auto-expanding array and also that individual cells can be reused if the data in them is removed first.  Deletion marks a cell as needing to be cleaned up (i.e., data removed).  Vacuum actually performs the cleaning.  Insertion causes the system to either provide an existing, cleaned/empty, cell OR to add a new cell to the end of the array and provide that.  The longer the deleted cells go uncleaned the more cells that are added onto the end of the array and the more physical space the array takes up.  If you clean up the deleted cells more frequently they can be reused in lieu of expanding the array.

Vacuum full counts the number of non-empty cells in the array, creates a new array with that many cells, and writes the non-empty values into it - then removes the old array.

Immediately after you perform a normal vacuum you have lots of empty cells - but you know that the space is going to be filled in again soon so it doesn't normally make sense to "resize the array".

David J.

RE: how to slow down parts of Pg

From
Kevin Brannen
Date:

From: Virendra Kumar <viru_7683@yahoo.com>

 

>Autovacuum does takes care of dead tuples and return space to table's allocated size and can be re-used by fresh incoming rows or any updates.

>Index bloat is still not being taken care of by autovacuum process. You should use pg_repack to do index rebuild. Keep in mind that pg_repack requires double the space of indexes, since there will be two indexes existing during rebuild processes.

 

 

Ha! I knew there was a reason I was doing the full, I just couldn't remember that indexes were why. Pg_repack needs to move higher on the ToDo list too. I need a clone to do all of this. :)

 

 

From: David G. Johnston <david.g.johnston@gmail.com>
> Imagine you have an auto-expanding array and also that individual cells can be reused if the data in them is removed first…

 

Yes, the concepts aren't that hard, the issue is how to apply them in the most effective manner. Still, nice explanation, I'll use that when explaining the work to the group so I can pass the info along.

 

Thanks!

Kevin


Re: how to slow down parts of Pg

From
Michael Lewis
Date:
Reviewing pg_stat_user_tables will give you an idea of how often autovacuum is cleaning up those tables that "need" that vacuum full on a quarterly basis. You can tune individual tables to have a lower threshold ratio of dead tuples so the system isn't waiting until you have 20% dead rows before vacuuming a table with millions of rows that occupies a GB or more on disk. You might consider changing your nightly analyze to a nightly vacuum analyze, at least for the tables you know can be problematic. The more densely a table is packed, the better the cache hits and other such metrics will be. Like making dinner: clean up as you go.
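Something along these lines (the table name is a placeholder) shows how autovacuum is keeping up and lowers the dead-tuple threshold for one large table:

-- Biggest offenders first
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, autovacuum_count
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;

-- Vacuum this table at ~2% dead rows instead of the default 20%
ALTER TABLE big_log_table
    SET (autovacuum_vacuum_scale_factor = 0.02,
         autovacuum_vacuum_threshold    = 1000);

-- Nightly job as vacuum + analyze rather than analyze alone
VACUUM (ANALYZE) big_log_table;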

One thing that I think is interesting is that the default autovacuum_vacuum_cost_delay has been lowered in PG 12 from 20ms down to 2ms, so that, all things being equal, much more work is done by autovacuum in a given second. It may be worth taking a look at.

Another great thing coming to you in PG12 is the option to do reindex concurrently. Then there's no need for pg_repack on indexes.
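Concretely, with placeholder names, those two PG 12 items might look like:

SHOW autovacuum_vacuum_cost_delay;   -- 2ms by default on PG 12
ALTER TABLE big_log_table SET (autovacuum_vacuum_cost_delay = 2);  -- per-table override if needed

-- Rebuild bloated indexes without blocking writes (new in PG 12)
REINDEX INDEX CONCURRENTLY idx_big_log_table_created_at;
REINDEX TABLE CONCURRENTLY big_log_table;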

Good luck sir.

Re: how to slow down parts of Pg

From
Adrian Klaver
Date:
On 4/21/20 2:32 PM, Virendra Kumar wrote:
> Autovacuum does takes care of dead tuples and return space to table's 
> allocated size and can be re-used by fresh incoming rows or any updates.
> 
> Index bloat is still not being taken care of by autovacuum process. You 
> should use pg_repack to do index rebuild. Keep in mind that pg_repack 
> requires double the space of indexes, since there will be two indexes 
> existing during rebuild processes.

You sure about that? On Postgres 12:

--2020-04-21 15:47:27.452 PDT-0DEBUG:  plant1: vac: 5154 (threshold 
1081), anl: 5154 (threshold 565)
--2020-04-21 15:47:27.452 PDT-0DEBUG:  autovac_balance_cost(pid=18701 
db=25092, rel=26497, dobalance=yes cost_limit=200, cost_limit_base=200, 
cost_delay=2)
--2020-04-21 15:47:27.452 PDT-0DEBUG:  CommitTransaction(1) name: 
unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
--2020-04-21 15:47:27.452 PDT-0DEBUG:  StartTransaction(1) name: 
unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
--2020-04-21 15:47:27.452 PDT-0DEBUG:  vacuuming "public.plant1"
--2020-04-21 15:47:27.504 PDT-0DEBUG:  scanned index "p_no_pkey" to 
remove 5114 row versions
--2020-04-21 15:47:27.504 PDT-0DETAIL:  CPU: user: 0.00 s, system: 0.00 
s, elapsed: 0.00 s
--2020-04-21 15:47:27.514 PDT-0DEBUG:  scanned index "common_idx" to 
remove 5114 row versions
--2020-04-21 15:47:27.514 PDT-0DETAIL:  CPU: user: 0.00 s, system: 0.00 
s, elapsed: 0.01 s
--2020-04-21 15:47:27.515 PDT-0DEBUG:  scanned index "genus_idx" to 
remove 5114 row versions
--2020-04-21 15:47:27.515 PDT-0DETAIL:  CPU: user: 0.00 s, system: 0.00 
s, elapsed: 0.00 s
--2020-04-21 15:47:27.517 PDT-0DEBUG:  "plant1": removed 5114 row 
versions in 121 pages
--2020-04-21 15:47:27.517 PDT-0DETAIL:  CPU: user: 0.00 s, system: 0.00 
s, elapsed: 0.00 s
--2020-04-21 15:47:27.517 PDT-0DEBUG:  index "p_no_pkey" now contains 
5154 row versions in 31 pages
--2020-04-21 15:47:27.517 PDT-0DETAIL:  5114 index row versions were 
removed.
         0 index pages have been deleted, 0 are currently reusable.
         CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
--2020-04-21 15:47:27.517 PDT-0DEBUG:  index "common_idx" now contains 
5154 row versions in 60 pages
--2020-04-21 15:47:27.517 PDT-0DETAIL:  5114 index row versions were 
removed.
         0 index pages have been deleted, 0 are currently reusable.
         CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
--2020-04-21 15:47:27.517 PDT-0DEBUG:  index "genus_idx" now contains 
5154 row versions in 47 pages
--2020-04-21 15:47:27.517 PDT-0DETAIL:  5114 index row versions were 
removed.
         0 index pages have been deleted, 0 are currently reusable.
         CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
--2020-04-21 15:47:27.518 PDT-0DEBUG:  "plant1": found 5154 removable, 
5154 nonremovable row versions in 195 out of 195 pages
--2020-04-21 15:47:27.518 PDT-0DETAIL:  0 dead row versions cannot be 
removed yet, oldest xmin: 9715
         There were 256 unused item identifiers.
         Skipped 0 pages due to buffer pins, 0 frozen pages.
         0 pages are entirely empty.
         CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.06 s.
--2020-04-21 15:47:27.518 PDT-0LOG:  automatic vacuum of table 
"production.public.plant1": index scans: 1
         pages: 0 removed, 195 remain, 0 skipped due to pins, 0 skipped 
frozen
         tuples: 5154 removed, 5154 remain, 0 are dead but not yet 
removable, oldest xmin: 9715
         buffer usage: 753 hits, 0 misses, 255 dirtied
         avg read rate: 0.000 MB/s, avg write rate: 30.586 MB/s
         system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.06 s



> 
> Regards,
> Virendra Kumar
> 

-- 
Adrian Klaver
adrian.klaver@aklaver.com



Re: how to slow down parts of Pg

From
Virendra Kumar
Date:
Hi Adrian,

Here is a test case. When autovacuum ran, it did release space back to disk for the table -- presumably because there were contiguous trailing blocks that could be returned -- but the space used by the indexes was still held until I ran REINDEX on the table (I assume REINDEX on an individual index would work as well). The subsequent INSERT did not reuse the blocks in the index segments, as we can see below, so index bloat is still not addressed, or maybe I am doing something wrong:


postgres=# select version();
                                                     version                                                     
-----------------------------------------------------------------------------------------------------------------
 PostgreSQL 12.2 on x86_64-apple-darwin18.7.0, compiled by Apple LLVM version 10.0.1 (clang-1001.0.46.4), 64-bit
(1 row)

postgres=#
postgres=# CREATE TABLE validate_pg_repack
postgres-# (
postgres(#    effectivedate                                  timestamp,
postgres(#    masterentityid                                 integer not null,
postgres(#    primaryissueid                                 varchar(65535),
postgres(#    longshortindicator                             varchar(65535),
postgres(#    pg_repack_id                                   varchar(65535)
postgres(# );
CREATE TABLE
postgres=# CREATE SEQUENCE validate_pg_repack_masterentityid_seq INCREMENT 1 START 1 OWNED BY validate_pg_repack.masterentityid;
CREATE SEQUENCE
postgres=#
postgres=# CREATE unique INDEX idx_pg_repack_masterentityid ON validate_pg_repack USING btree (masterentityid);
CREATE INDEX
postgres=#
postgres=# CREATE INDEX idx_pg_repack_effectivedate ON validate_pg_repack USING btree (effectivedate);
CREATE INDEX
postgres=#
postgres=# INSERT INTO validate_pg_repack (effectivedate,masterentityid,primaryissueid,longshortindicator,pg_repack_id) SELECT
postgres-# now() + round(random() * 1000) * '1 second' :: interval,
postgres-# nextval('validate_pg_repack_masterentityid_seq'),
postgres-# 'some-phone-' || round(random() * 65000),
postgres-# 'some-phone-' || round(random() * 1000),  
postgres-# 'some-phone-' || round(random() * 1000)
postgres-# FROM
postgres-# generate_series(1, 900000);
INSERT 0 900000
postgres=#
postgres=# select pg_sleep(30);
 pg_sleep
----------
 
(1 row)

postgres=# select relname,n_tup_ins,n_tup_del,last_autoanalyze,autoanalyze_count from pg_stat_all_tables where relname ='validate_pg_repack';
      relname       | n_tup_ins | n_tup_del |       last_autoanalyze        | autoanalyze_count
--------------------+-----------+-----------+-------------------------------+-------------------
 validate_pg_repack |    900000 |         0 | 2020-04-21 19:34:09.579475-07 |                 1
(1 row)

postgres=# select pg_size_pretty(pg_relation_size('validate_pg_repack'));
 pg_size_pretty
----------------
 80 MB
(1 row)

postgres=# \di+ idx_pg_repack_*
                                          List of relations
 Schema |             Name             | Type  |  Owner   |       Table        | Size  | Description
--------+------------------------------+-------+----------+--------------------+-------+-------------
 public | idx_pg_repack_effectivedate  | index | vvikumar | validate_pg_repack | 24 MB |
 public | idx_pg_repack_masterentityid | index | vvikumar | validate_pg_repack | 19 MB |
(2 rows)

postgres=# delete from validate_pg_repack where masterentityid > 450000;
DELETE 450000
postgres=# select pg_sleep(30);
 pg_sleep
----------
 
(1 row)

postgres=# select relname,n_tup_ins,n_tup_del,last_autoanalyze,autoanalyze_count from pg_stat_all_tables where relname ='validate_pg_repack';
      relname       | n_tup_ins | n_tup_del |       last_autoanalyze        | autoanalyze_count
--------------------+-----------+-----------+-------------------------------+-------------------
 validate_pg_repack |    900000 |    450000 | 2020-04-21 19:35:11.029405-07 |                 2
(1 row)

postgres=#
postgres=# select pg_size_pretty(pg_relation_size('validate_pg_repack'));
 pg_size_pretty
----------------
 40 MB
(1 row)

postgres=# \di+ idx_pg_repack_*
                                          List of relations
 Schema |             Name             | Type  |  Owner   |       Table        | Size  | Description
--------+------------------------------+-------+----------+--------------------+-------+-------------
 public | idx_pg_repack_effectivedate  | index | vvikumar | validate_pg_repack | 24 MB |
 public | idx_pg_repack_masterentityid | index | vvikumar | validate_pg_repack | 19 MB |
(2 rows)

postgres=# INSERT INTO validate_pg_repack (effectivedate,masterentityid,primaryissueid,longshortindicator,pg_repack_id) SELECT
postgres-# now() + round(random() * 1000) * '1 second' :: interval,
postgres-# nextval('validate_pg_repack_masterentityid_seq'),
postgres-# 'some-phone-' || round(random() * 65000),
postgres-# 'some-phone-' || round(random() * 1000),  
postgres-# 'some-phone-' || round(random() * 1000)
postgres-# FROM
postgres-# generate_series(450000, 900000);
INSERT 0 450001
postgres=# select pg_sleep(120);
 pg_sleep
----------
 
(1 row)

postgres=#
postgres=# select relname,n_tup_ins,n_tup_del,last_autoanalyze,autoanalyze_count from pg_stat_all_tables where relname ='validate_pg_repack';
      relname       | n_tup_ins | n_tup_del |       last_autoanalyze        | autoanalyze_count
--------------------+-----------+-----------+-------------------------------+-------------------
 validate_pg_repack |   1350001 |    450000 | 2020-04-21 19:37:10.829261-07 |                 3
(1 row)

postgres=#
postgres=#
postgres=# select pg_size_pretty(pg_relation_size('validate_pg_repack'));
 pg_size_pretty
----------------
 80 MB
(1 row)

postgres=# \di+ idx_pg_repack_*
                                          List of relations
 Schema |             Name             | Type  |  Owner   |       Table        | Size  | Description
--------+------------------------------+-------+----------+--------------------+-------+-------------
 public | idx_pg_repack_effectivedate  | index | vvikumar | validate_pg_repack | 39 MB |
 public | idx_pg_repack_masterentityid | index | vvikumar | validate_pg_repack | 29 MB |
(2 rows)

postgres=# reindex table CONCURRENTLY validate_pg_repack;
REINDEX
postgres=#
postgres=# select pg_size_pretty(pg_relation_size('validate_pg_repack'));
 pg_size_pretty
----------------
 80 MB
(1 row)

postgres=# \di+ idx_pg_repack_*
                                          List of relations
 Schema |             Name             | Type  |  Owner   |       Table        | Size  | Description
--------+------------------------------+-------+----------+--------------------+-------+-------------
 public | idx_pg_repack_effectivedate  | index | vvikumar | validate_pg_repack | 19 MB |
 public | idx_pg_repack_masterentityid | index | vvikumar | validate_pg_repack | 19 MB |
(2 rows)

postgres=#
postgres=# drop table validate_pg_repack cascade;
DROP TABLE
postgres=#
postgres=#
postgres=#



Regards,
Virendra Kumar


Re: how to slow down parts of Pg

From
Laurenz Albe
Date:
On Tue, 2020-04-21 at 20:30 +0000, Kevin Brannen wrote:
> I have an unusual need:  I need Pg to slow down. I know, we all want our DB to go faster,
> but in this case it's speed is working against me in 1 area.
> 
> We have systems that are geo-redundant for HA, with the redundancy being handled by DRBD to keep the disks in sync,
> which it does at the block level. For normal operations, it actually works out fairly well. [...]
> The part that hurts so bad is when we do maintenance operations that are DB heavy, like deleting really old records out of
> archives (weekly), moving older records from current tables to archive tables plus an analyze (every night),
> running pg_backup (every night), other archiving (weekly), and vacuum full to remove bloat (once a quarter).
> All of this generates a lot of disk writes, to state the obvious.
> 
> The local server can handle it all just fine, but the network can't handle it as it tries to sync to the other server.

The obvious and best answer is: get a faster network, or choose a different
storage solution.

Other than that, you can try to make the maintenance operations less resource-intensive:
resource intense:

- partition the tables so that you can get rid of old data with DROP TABLE.
  The ANALYZE won't hurt, if you treat only the required tables.
- use "pg_basebackup" with the "--max-rate" option

About VACUUM, you may have a problem.  Avoid the need for VACUUM (FULL) at any price.
That usually requires tuning autovacuum to be faster, which means using more I/O.

If you cannot find a sweet spot there, you have no alternative but getting better I/O
(which, as I said in the beginning, would be the correct solution anyway).

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




RE: how to slow down parts of Pg

From
Kevin Brannen
Date:
>From: Laurenz Albe <laurenz.albe@cybertec.at>
>
>>On Tue, 2020-04-21 at 20:30 +0000, Kevin Brannen wrote:
>> I have an unusual need:  I need Pg to slow down. I know, we all want
>> our DB to go faster, but in this case it's speed is working against me in 1 area.
>>
>> [...] The part that hurts so bad is when we do maintenance operations
>> that are DB heavy, like deleting really old records out of archives (weekly), moving older records from current tables
>> to archive tables plus an analyze (every night), running pg_backup (every night), other archiving (weekly), and
>> vacuum full to remove bloat (once a quarter).
>> All of this generates a lot of disk writes, to state the obvious.
>>
>> The local server can handle it all just fine, but the network can't handle it as it tries to sync to the other server.
>
>The obvious and best answer is: get a faster network, or choose a different storage solution.

I believe I mentioned originally that the network is controlled by others (the customer). I've pointed out the results of their choice repeatedly, but their reply is always "budgets", and I reply as politely as I can, "faster network or live with the slowness as I've done all I can for now". It's a somewhat frustrating conversation, as you can imagine.


>Other than that, you can try to make the maintainance operations less resource intense:
>
>- partition the tables so that you can get rid of old data with DROP TABLE.
>  The ANALYZE won't hurt, if you treat only the required tables.
>- use "pg_basebackup" with the "--max-rate" option

Yes, this was the heart of the post, how to use fewer resources. I'd always thought of partitioning the larger tables as an optimization for running reports, but my eyes have been opened that it has other benefits too.

I'm not sure changing the backup program will help when it's sitting on top of DRBD, but I can limit DRBD's rate to create the same effect. Still, it doesn't hurt to spend a little time researching this. OTOH, you did just prompt an idea, so that's helpful too.


>About VACUUM, you may have a problem.  Avoid the need for VACUUM (FULL) at any price.
>That usually requires tuning autovacuum to be faster, which means using more I/O.

OK, I've never really liked doing a "full", but I perceived it as helpful to us. I'll see about making autovacuum more
aggressive.

Thanks!
Kevin

Re: how to slow down parts of Pg

From
Adrian Klaver
Date:
On 4/21/20 7:43 PM, Virendra Kumar wrote:
> Hi Adrian,
> 
> Here is test case, basically when autovacuum runs it did release the 
> space to disk since it had may be continuous blocks which can be 
> released to disk but the space used by index is still being held until I 
> ran the reindex on the table (I assume reindex for index would work as 
> well). Subsequent insert statement did not utilize the blocks in index 
> segment as we can see below so index bloats are still not addressed or 
> may be I am doing something wrong:
> 


Well I learned something. I replicated your commands on my 12 instance 
and got the exact same results. I should have read this:

https://www.postgresql.org/docs/12/routine-reindex.html

before.


-- 
Adrian Klaver
adrian.klaver@aklaver.com



Re: how to slow down parts of Pg

From
Ron
Date:
What you need is async replication instead of synchronous replication.


--
Angular momentum makes the world go 'round.

Re: how to slow down parts of Pg

From
"Peter J. Holzer"
Date:
On 2020-04-21 21:16:57 +0000, Kevin Brannen wrote:
> From: Michael Loftis <mloftis@wgops.com>
> > drbdsetup allows you to control the sync rates.
>
> I was hoping not to have to do that, but the more I think about this I'm
> realizing that it won't hurt because the network cap is effectively limiting me
> anyway. :)

Alternatively you might consider traffic shaping. DRBD can only set a
fixed limit (because it knows only about its own traffic). Traffic
shaping can adjust the limit depending on other traffic (it can also
prioritize traffic, etc.). However, to be effective, it needs to run on
a router as close to the bottleneck as possible - typically that means
either the border router or the firewall. So it is something the
customer's network guy should set up.

        hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"


RE: how to slow down parts of Pg

From
Kevin Brannen
Date:
>From: Peter J. Holzer <hjp-pgsql@hjp.at>
>On 2020-04-21 21:16:57 +0000, Kevin Brannen wrote:
>> From: Michael Loftis <mloftis@wgops.com>
>> > drbdsetup allows you to control the sync rates.
>>
>> I was hoping not to have to do that, but the more I think about this
>> I'm realizing that it won't hurt because the network cap is
>> effectively limiting me anyway. :)
>
>Alternatively you might consider traffic shaping. DRBD can only set a fixed limit (because it knows only about its own
>traffic). Traffic shaping can adjust the limit depending on other traffic (it can also prioritize traffic, etc.).
>However, to be effective, it needs to run on a router as close to the bottleneck as possible - typically that means
>either the border router or the firewall. So it is something the customer's network guy should set up.


Traffic shaping was actually my very first thought. :) It has its upsides
and downsides like any other solution. The biggest downside is that it's not
up to us to control, so we have to find their network person (usually harder
than it should be) and then adjust ... probably multiple times and always
manually.


For any who are wondering what this thread has done for me, other than create
a list of things to research... :)

At this point in time, I think the plan is to (roughly in this order):

0. limit DRBD's rate (I think I can script this & I probably only need to do this during the maintenance work);
1. make autovac more aggressive on the larger logging tables;
2. change the "vacuum full" to just reindexing (either with pg_repack or "reindex concurrently");
3. partition the bigger logging tables.

I'm tempted to also do the archiving in very small amounts all thru the day
(sort of like how autovac works) to spread that load and not have such a huge
hit once per day. For the moment, this is going in my back pocket to pull out
only if the above doesn't do enough.

Then we move to WAL streaming which I believe will be the biggest
help of all -- or so I hope. It will also have the largest learning curve,
but it'll be good for me to learn that.

Thanks,
Kevin



RE: how to slow down parts of Pg

From
Kevin Brannen
Date:

>From: Ron <ronljohnsonjr@gmail.com>

>What you need is async replication instead of synchronous replication.

 

 

The only way I can think of to do that in our present situation would be to buy DRBD-Proxy, which becomes a single point of failure and goes against the idea of HA (it seems like a good product for disaster recovery, but that's not the goal). In addition, since we're trying to move away from DRBD and go to WAL streaming, that doesn't seem like the best use of time and money. :)

If you'd like to expound on other ways/tools to do that, I'd love to hear about it, although this might be better off-list.

 

Thanks,

Kevin
