Re: [Proposal] Global temporary tables

From: Pavel Stehule
Subject: Re: [Proposal] Global temporary tables
Date:
Msg-id: CAFj8pRAvegZoQCLw3vjHZ=6_Kr7-1UAA5QgGYOshhXU6j_RuCg@mail.gmail.com
In reply to: Re: [Proposal] Global temporary tables  (Konstantin Knizhnik <k.knizhnik@postgrespro.ru>)
Responses: Re: [Proposal] Global temporary tables
List: pgsql-hackers


On Fri, 24 Jan 2020 at 14:17, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:


On 24.01.2020 15:15, Pavel Stehule wrote:
You will see the effect of DDL in the current session (where you made the change); all other sessions should carry on without any change until they reconnect or reset the connection.

Why? I find this requirement quite unnatural and contradictory to the behavior of normal tables.
Actually, one of the motivations for adding global temporary tables to Postgres is to provide compatibility with Oracle.
Although I know that Oracle's design decisions were never considered axioms by the Postgres community,
in the case of GTT design I think that we should take Oracle's approach into account.
And GTTs in Oracle behave exactly as in my implementation:

https://www.oracletutorial.com/oracle-basics/oracle-global-temporary-table/

It is not clear from this documentation whether an index created for a GTT in one session can be used in another session which already has some data in this GTT.
But I did experiment with an installed Oracle server and can confirm that it actually works this way.
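
For reference, a minimal sketch of the Oracle behavior described above (table, column, and index names are invented for illustration):

CREATE GLOBAL TEMPORARY TABLE gtt_orders (
    id      NUMBER,
    payload VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

-- Each session sees only its own rows, but the table definition,
-- including its indexes, is shared by all sessions.
CREATE INDEX gtt_orders_id_idx ON gtt_orders (id);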

So I do not understand why we need to complicate our GTT implementation in order to prohibit useful functionality and introduce an inconsistency between the behavior of normal and global temp tables.



I don't like 2 - when I create an index on a global temp table, I don't want to wait for indexing in all other sessions. These operations should be maximally independent.


Nobody suggests waiting for the index to be built in all sessions.
Indexes will be constructed on demand when a session accesses the table.
If a session never accesses the table at all, the index will never be constructed.

Once again: the logic of dealing with indexes on GTTs is very simple.
For normal tables, indexes are initialized at the time they are created.
For GTTs this is not true: we have to initialize the index on demand, when it is first accessed in a session.

So it has to be handled in any case.
The only question is whether we should allow creation of an index for a table already populated with some data.
Actually, this doesn't require any additional effort: we can use the existing build_index function, which initializes the index and populates it with data.
So the solution I propose is the most natural, convenient, and simplest one at the same time. And it is compatible with Oracle.
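
A minimal sketch of the proposed on-demand behavior (the GLOBAL TEMPORARY syntax is the one proposed in this thread, not part of stock PostgreSQL; names are invented):

-- Session 1:
CREATE GLOBAL TEMPORARY TABLE gtt (a int, b text);
INSERT INTO gtt VALUES (1, 'one');

-- Session 2 inserts its own private rows:
INSERT INTO gtt VALUES (2, 'two');

-- Session 1 creates an index; only session 1's private data is
-- indexed at this point:
CREATE INDEX gtt_a_idx ON gtt (a);

-- Session 2 is not interrupted; its copy of the index is initialized
-- and populated on its next access to the table:
SELECT * FROM gtt WHERE a = 2;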

I cannot evaluate your proposal, and I am sure that you know more about this code than I do.

There is a question of whether we can allow building a local temp index on a global temp table; that is a different situation. When I work with global properties, I personally prefer a totally asynchronous implementation of any DDL operation with respect to sessions other than the current one. If that holds, then I have no objection. For me, a good enough design for any DDL can be based on a catalog change alone, without forcing it onto live tables.
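
To make the contrast concrete, a sketch of the semantics I would prefer, reusing the hypothetical gtt table from above (purely illustrative):

-- Session A:
CREATE INDEX gtt_a_idx ON gtt (a);  -- a catalog-only change

-- Session B (already running, with its own rows in gtt):
SELECT * FROM gtt WHERE a = 1;  -- unaffected: no index build is forced
                                -- on this session

-- Session B after reconnecting (its GTT contents start empty again):
SELECT * FROM gtt WHERE a = 1;  -- the new index now exists from the start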

I see the following disadvantage in your proposal. Consider this scenario:

1. I have two sessions:

A - a small GTT with an active owner,
B - a big GTT with some active application.

Session A creates a new index - that is fast. But if the index build is forced on B on demand (when B next touches the table), then that operation in B has to wait until the index has been built.

So I am afraid of building indexes for GTTs in other sessions when the GTT in those sessions is not empty.
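
A sketch of what I am afraid of (row counts invented):

-- Session A: only a handful of rows in its gtt; this returns quickly:
CREATE INDEX gtt_a_idx ON gtt (a);

-- Session B: already holds millions of rows in its gtt. Under the
-- on-demand scheme, its next access first pays the full cost of
-- building the index over all those rows:
SELECT count(*) FROM gtt WHERE a = 42;  -- stalls until B's index is built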

Regards

Pavel

 





-- 
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company 

