Re: Upgrade from PG12 to PG
| From | Jef Mortelle |
|---|---|
| Subject | Re: Upgrade from PG12 to PG |
| Date | |
| Msg-id | dc88d14d-6d0f-2d67-ecfc-c7495bf1c22b@gmail.com |
| In response to | Re: Upgrade from PG12 to PG (Ilya Kosmodemiansky <ik@dataegret.com>) |
| Responses | Re: Upgrade from PG12 to PG, Re: Upgrade from PG12 to PG, Re: Upgrade from PG12 to PG |
| List | pgsql-admin |
Hi,
Many thanks for your answer.
So: it is not possible to have very little downtime if you have a database
with a lot of rows containing text as the datatype, as pg_upgrade needs 12 hours
for 24 million rows in pg_largeobject.
Testing now with pg_dumpall and pg_restore ...
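For reference, a dump-and-restore upgrade along these lines can be sketched as below. The ports, paths, database name, and parallel job count are illustrative assumptions, not details from this thread:

```shell
# Sketch of a dump/restore upgrade from a PG12 cluster (port 5431) to a
# newer cluster (port 5432). All ports, paths, and names are assumptions.

# 1. Dump global objects (roles, tablespaces) from the old cluster:
pg_dumpall -p 5431 --globals-only -f globals.sql

# 2. Dump the database in directory format, which allows parallel jobs:
pg_dump -p 5431 -Fd -j 4 -f /backup/mydb.dir mydb

# 3. Restore globals, then restore the database into the new cluster in
#    parallel (--create makes pg_restore issue the CREATE DATABASE itself):
psql -p 5432 -f globals.sql postgres
pg_restore -p 5432 -j 4 --create -d postgres /backup/mydb.dir
```

Unlike plain-format `pg_dumpall` output, the directory format lets `pg_restore -j` load data (including large objects) with several workers, which can shorten the restore window.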
I think the PostgreSQL project should treat resolving this
problem as a high priority.
I have to make a choice in the near future: Postgres or Oracle, and that
database would contain a lot of text-type data.
The database would be about 1 TB.
It seems a little bit tricky/dangerous to me to use Postgres, given what
it takes just to be able to upgrade to a newer version.
Kind regards.
On 20/07/2023 13:43, Ilya Kosmodemiansky wrote:
> Hi Jef,
>
>
> On Thu, Jul 20, 2023 at 1:23 PM Jef Mortelle <jefmortelle@gmail.com> wrote:
>> Looking at the dump file: many, many lines like SELECT
>> pg_catalog.lo_unlink('100000');
>>
>>
>> I have the same issue with /usr/lib/postgresql15/bin/pg_upgrade -v -p
>> 5431 -P 5432 -k
>>
>>
>> What's going on?
> pg_upgrade is known to be problematic with large objects.
> Please take a look here to start with:
> https://www.postgresql.org/message-id/20210309200819.GO2021%40telsasoft.com
>
>>
>> Kind regards
>>
>>
>>