Thread: [PATCH] vacuumlo: print the number of large objects going to be removed

[PATCH] vacuumlo: print the number of large objects going to be removed

From:
Timur Birsh
Date:
Hello,

If tables have a lot of rows with large objects (>1,000,000) that are
removed throughout the day, it would be useful to know how many
LOs are going to be removed.

The first patch prints the number of large objects going to be removed;
the second patch prints how many LOs have been removed, as a percentage.

Could anyone please review?

Please cc, I am not subscribed to the list.

Regards,
Timur
Attachments

Re: [PATCH] vacuumlo: print the number of large objects going to be removed

From:
Timur Birsh
Date:
12.06.2019, 14:31, "Timur Birsh" <taem@linukz.org>:
> Please cc, I am not subscribed to the list.

I have subscribed to the mailing list, so there is no need to cc me.

Thank you.



Re: [PATCH] vacuumlo: print the number of large objects going to be removed

From:
Michael Paquier
Date:
Hi,

On Thu, Jun 13, 2019 at 10:49:46AM +0600, Timur Birsh wrote:
> 12.06.2019, 14:31, "Timur Birsh" <taem@linukz.org>:
>> Please cc, I am not subscribed to the list.
>
> I have subscribed to the mailing list, there is no need to cc me.

Welcome.  Nice to see that you have subscribed to the lists.

Please note that we have some guidelines regarding the way patches are
submitted:
https://wiki.postgresql.org/wiki/Submitting_a_Patch
Based on what I can see with your patch, things are in good shape on
this side.

Now, if you want to get review for your patch, you should register it
in what we call the commit fest app, which is here:
https://commitfest.postgresql.org/23/

Commit fests happen every two months for a duration of one month, and
the next one which will begin the development cycle of v13 begins on
the 1st of July.  As a basic rule, it is expected that for one patch
submitted, you should review another patch of equal difficulty to keep
some balance in the force.

Regarding the patch, there is an argument to be made for reporting a
rate as well as the actual numbers of deleted and to-delete items.

+       if (param->verbose)
+       {
+               snprintf(buf, BUFSIZE, "SELECT count(*) FROM vacuum_l");
+               res = PQexec(conn, buf);
That part is costly.

Thanks!
--
Michael

Attachments

Re: [PATCH] vacuumlo: print the number of large objects going to be removed

From:
Timur Birsh
Date:
Hello Michael,

13.06.2019, 12:11, "Michael Paquier" <michael@paquier.xyz>:
> Welcome. Nice to see that you have subscribed to the lists.

Thank you for your explanations!

> Now, if you want to get review for your patch, you should register it
> in what we call the commit fest app, which is here:
> https://commitfest.postgresql.org/23/

Done. Please see https://commitfest.postgresql.org/23/2148/

> Commit fests happen every two months for a duration of one month, and
> the next one which will begin the development cycle of v13 begins on
> the 1st of July. As a basic rule, it is expected that for one patch
> submitted, you should review another patch of equal difficulty to keep
> some balance in the force.

Ok.

> Regarding the patch, there is an argument to be made for reporting a
> rate as well as the actual numbers of deleted and to-delete items.
>
> + if (param->verbose)
> + {
> + snprintf(buf, BUFSIZE, "SELECT count(*) FROM vacuum_l");
> + res = PQexec(conn, buf);
> That part is costly.

Just to be sure, does a new command line argument need to be added for
reporting the numbers? Should it imply the --verbose argument?

Thanks,
Timur



Re: [PATCH] vacuumlo: print the number of large objects going to be removed

From:
Michael Paquier
Date:
On Thu, Jun 13, 2019 at 01:25:38PM +0600, Timur Birsh wrote:
> Just to be sure, does a new command line argument need to be added for
> reporting the numbers? Should it imply the --verbose argument?

Nope.  I mean that running a SELECT count(*) can be costly for many
items.
--
Michael

Attachments

Re: [PATCH] vacuumlo: print the number of large objects going to be removed

From:
Timur Birsh
Date:
13.06.2019, 13:57, "Michael Paquier" <michael@paquier.xyz>:
> On Thu, Jun 13, 2019 at 01:25:38PM +0600, Timur Birsh wrote:
>>  Just to be sure, does a new command line argument need to be added for
>>  reporting the numbers? Should it imply the --verbose argument?
>
> Nope. I mean that running a SELECT count(*) can be costly for many
> items.

Understood, thanks.

I found a way to get the number of LOs that will be removed without
the SELECT count(*), using PQcmdTuples(). Please find attached patch v2.
I also fixed some indentation in the variable declaration blocks.
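
For illustration, here is a minimal libpq sketch of the idea (the schema,
table, and column names are placeholders, and this is not the code of the
attached patch): the command tags of the SELECT INTO that populates the
temp table and of each subsequent DELETE already carry an affected-row
count, which PQcmdTuples() exposes, so no extra SELECT count(*) is needed.

/*
 * Sketch only: derive the number of LOs to remove from command tags via
 * PQcmdTuples(); table/column names are placeholders, not the attached patch.
 */
#include <stdio.h>
#include <stdlib.h>
#include "libpq-fe.h"

static long
cmd_tuples(PGresult *res)
{
    /* PQcmdTuples() returns the affected-row count as a string */
    return strtol(PQcmdTuples(res), NULL, 10);
}

int
main(void)
{
    PGconn     *conn = PQconnectdb("");
    PGresult   *res;
    long        to_delete;

    /* The SELECT INTO command tag reports how many OIDs were collected. */
    res = PQexec(conn, "SELECT oid AS lo INTO TEMP TABLE vacuum_l"
                       " FROM pg_largeobject_metadata");
    to_delete = cmd_tuples(res);
    PQclear(res);

    /* Each DELETE of still-referenced OIDs reports how many rows it removed. */
    res = PQexec(conn, "DELETE FROM vacuum_l WHERE lo IN"
                       " (SELECT oid_column FROM some_schema.some_table)");
    to_delete -= cmd_tuples(res);
    PQclear(res);

    /* What is left in vacuum_l is the number of LOs going to be removed. */
    printf("%ld large objects will be removed\n", to_delete);

    PQfinish(conn);
    return 0;
}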

There is a database with tables that have a lot of tuples with large objects:

# select count(*) from pg_largeobject_metadata;
  count
----------
 44707424
(1 row)

An application that uses this database from time to time deletes and adds a lot
of rows; it happens that more than 10,000,000 orphaned LOs remain in the
database. Removing such a number of items takes a long time.
I guess it would be helpful to know how many LOs are going to be removed and
to report the percentage deleted.

Thanks,
Timur
Attachments

Re: [PATCH] vacuumlo: print the number of large objects going to be removed

From:
"Daniel Verite"
Date:
    Timur Birsh wrote:

> Please find attached patch v2.
> I fixed some indentation in the variable declaration blocks.

The tab width should be 4. Please have a look at
https://www.postgresql.org/docs/current/source-format.html
It also explains why opportunistic reformatting is futile, anyway:

  "Your code will get run through pgindent before the next release, so
  there's no point in making it look nice under some other set of
  formatting conventions. A good rule of thumb for patches is “make
  the new code look like the existing code around it”."

> An application that uses this database from time to time deletes and
> adds a lot of rows, it happens that more than 10,000,000 orphaned
> LOs remain in the database. Removing such a number of items takes a
> long time.

It might be useful to display the progress report in the loop, but
it appears that even when there's nothing to remove, vacuumlo is
likely to take a long time, because of the method it uses:

#1. it builds a temp table with the OIDs of all large objects.

#2. for each non-system OID column in the db, it deletes from the temp
    table each value existing under that column, assuming that it's a
    reference to a large object (incidentally, if you have OID columns
    that don't refer to large objects in your schemas, they get
    dragged into this; also, in case of OID reuse and bad luck, they may
    permanently block the removal of some orphaned large objects).

#3. it creates a holdable cursor to iterate on the temp table.

#4. finally it calls lo_unlink() on each remaining OID in batched
    transactions.

The design with #1 and #2 dates back to the very first version,
in 1999.
Nowadays, maybe we could skip these steps by creating a cursor
directly for a generated query that would look like this:

   SELECT oid FROM pg_largeobject_metadata lo WHERE NOT EXISTS (
      SELECT 1 FROM schema1.tablename1 WHERE oid_column1 = lo.oid
       UNION ALL
      SELECT 1 FROM schema2.tablename2 WHERE oid_column2 = lo.oid
       UNION ALL
      ...
    );

That might be much faster than #1 and #2, especially in the case when
there's only one SELECT in that subquery and no UNION ALL is even
necessary.
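
For what it's worth, a rough libpq sketch of how such a query could be
assembled from the discovered OID columns and used to declare the cursor
directly (the column list is a hardcoded placeholder; real code would build
it from the catalogs, and this is not existing vacuumlo code):

#include <stdio.h>
#include <string.h>
#include "libpq-fe.h"

/*
 * Sketch only: build the generated NOT EXISTS query from a list of
 * (schema, table, OID column) triples and open a holdable cursor on it.
 * The list here is a hardcoded placeholder.
 */
static void
declare_orphan_cursor(PGconn *conn)
{
    const char *refs[][3] = {
        {"schema1", "tablename1", "oid_column1"},
        {"schema2", "tablename2", "oid_column2"},
    };
    char        query[4096];
    char        part[512];
    PGresult   *res;
    int         i;

    strcpy(query,
           "DECLARE myportal CURSOR WITH HOLD FOR "
           "SELECT oid FROM pg_largeobject_metadata lo WHERE NOT EXISTS (");
    for (i = 0; i < 2; i++)
    {
        snprintf(part, sizeof(part),
                 "%sSELECT 1 FROM %s.%s WHERE %s = lo.oid",
                 i > 0 ? " UNION ALL " : "",
                 refs[i][0], refs[i][1], refs[i][2]);
        strcat(query, part);
    }
    strcat(query, ")");

    res = PQexec(conn, query);
    PQclear(res);
}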

For #4, a more modern approach could be to move that step into a
server-side DO block or a procedure, as transaction control is
available in them since version 11. This would avoid one client-server
round-trip per LO to delete, plus the round trips for the
cursor fetches. In the mentioned case of millions of objects to
unlink, that might be significant. In this case, the progress report
would have to be done with RAISE NOTICE or some such.
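
A rough sketch of that idea follows (it assumes the vacuum_l temp table
built earlier in the session, an arbitrary batch size, and that the DO
block is sent as a single statement outside any transaction block; this is
not code from the patch):

#include <stdio.h>
#include "libpq-fe.h"

/* Forward RAISE NOTICE progress messages from the server to the user. */
static void
show_notice(void *arg, const char *message)
{
    fprintf(stderr, "%s", message);
}

/*
 * Sketch only: run step #4 server-side in a DO block, committing in
 * batches and reporting progress with RAISE NOTICE.  Assumes the
 * vacuum_l temp table already exists in this session and that the DO
 * block is not run inside a transaction block.
 */
static void
unlink_orphans_serverside(PGconn *conn)
{
    PGresult   *res;

    PQsetNoticeProcessor(conn, show_notice, NULL);

    res = PQexec(conn,
                 "DO $$\n"
                 "DECLARE\n"
                 "    r record;\n"
                 "    n bigint := 0;\n"
                 "BEGIN\n"
                 "    FOR r IN SELECT lo FROM vacuum_l LOOP\n"
                 "        PERFORM lo_unlink(r.lo);\n"
                 "        n := n + 1;\n"
                 "        IF n % 1000 = 0 THEN\n"
                 "            COMMIT;\n"
                 "            RAISE NOTICE '% large objects removed', n;\n"
                 "        END IF;\n"
                 "    END LOOP;\n"
                 "    RAISE NOTICE 'done: % large objects removed', n;\n"
                 "END\n"
                 "$$;");
    PQclear(res);
}

Batching the commits keeps the per-transaction lock accumulation bounded
while avoiding one commit per unlinked object.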

In fact, this leads to another idea: vacuumlo as a client-side app
could be obsoleted and replaced by a paragraph in the docs
with a skeleton of an implementation in a DO block,
in which a user could replace the blind search over all OID
columns with a custom subquery targeting their specific schema.
As a code block, it would be directly embeddable in a psql script or
in a procedure called by pg_cron or any equivalent tool.


Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite



Re: [PATCH] vacuumlo: print the number of large objects going to be removed

From:
Michael Paquier
Date:
On Wed, Jul 17, 2019 at 01:31:05PM +0200, Daniel Verite wrote:
> The tab width should be 4. Please have a look at
> https://www.postgresql.org/docs/current/source-format.html
> It also explains why opportunistic reformatting is futile, anyway:

-       char       *schema,
-                  *table,
-                  *field;
+       char    *schema,
+           *table,
+           *field;
The patch has some noise.  For something of this size, I don't think
that it is an issue though ;)

> It might be useful to display the progress report in the loop, but
> it appears that even when there's nothing to remove, vacuumlo is
> likely to take a long time, because of the method it uses:
>
> [stuff]
>
> That might be much faster than #1 and #2, especially in the case when
> there's only one SELECT in that subquery and no UNION ALL is even
> necessary.

Sure.  However, do we need to introduce this much complication for this
patch, whose goal is just to provide hints about the progress of the
work done by vacuumlo?  I have just looked at the latest patch, and the
thing is actually much simpler than what I recalled.

One comment I have is whether we should also report in the progress not
only the percentage, but also the raw number of deleted entries along with
the total number of entries to delete.  Timur, what do you think?
--
Michael

Attachments

Re: [PATCH] vacuumlo: print the number of large objects going to be removed

From:
"Daniel Verite"
Date:
    Michael Paquier wrote:

> Sure.  However, do we need to introduce this much complication for this
> patch, whose goal is just to provide hints about the progress of the
> work done by vacuumlo?

Yeah, I went off on a tangent when realizing that ~500 lines of C
client-side code in vacuumlo could be turned into ~50 lines of
plpgsql in a block.
That was not meant as an objection to the patch
(besides, I followed the plpgsql approach and got disappointed with the
performance of lo_unlink() in a loop compared to the client-side
equivalent, so I won't bother -hackers with this idea anymore, until I
figure out why it's not faster and if I can do something about it).

One comment about the patch:

+    long        to_delete = 0;
...
+    to_delete = strtol(PQcmdTuples(res), NULL, 10);

I believe the maximum number of large objects is almost 2^32, and as a
count above 2^31 may not fit into a signed long, shouldn't we use
an unsigned long instead? This would also apply to the preexisting
"deleted" variable.

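For example, the relevant lines of the patch could become something like
the fragment below (a sketch only; whether a plain unsigned long is wide
enough on every platform, or whether a 64-bit type would be better, is the
open question here):

    /*
     * Read the command-tag count into an unsigned type, since the number
     * of large objects can approach 2^32 and a count above 2^31 would
     * overflow a 32-bit signed long.
     */
    unsigned long    to_delete;

    to_delete = strtoul(PQcmdTuples(res), NULL, 10);
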

Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite