Re: Decomposing xml into table

From: Surafel Temesgen
Subject: Re: Decomposing xml into table
Date:
Msg-id: CALAY4q826YiwYEn4f5oV=cZDeDMmqUkm130zBRaOqeEQk3F-fQ@mail.gmail.com
In reply to: Re: Decomposing xml into table  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Hey Tom,

On Mon, Jun 22, 2020 at 10:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Big -1 on that.  COPY is not for general-purpose data transformation.
The more unrelated features we load onto it, the slower it will get,
and probably also the more buggy and unmaintainable. 

What new format handling requires, performance-wise, is a check in a few places, and I don't think that has a noticeable performance impact. As far as I can see, COPY is extensible by design, and I don't think adding an additional format would be a huge undertaking.

There's also a
really fundamental mismatch, in that COPY is designed to do row-by-row
processing with essentially no cross-row state.  How would you square
that with the inherently nested nature of XML?


In the XML case the difference is the row delimiter. In XML mode the user specifies a row-delimiter tag name; everything from a start tag of that name up to its matching end tag is treated as a single row, and the text content of the elements in between becomes the column values.
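To make that mapping concrete, here is a minimal sketch of the same idea using the existing XMLTABLE function; this is not the proposed COPY syntax, and the element names (data, row) and columns (id, name) are only illustrative:

SELECT t.*
FROM XMLTABLE('/data/row'
              PASSING xml '<data>
                             <row><id>1</id><name>alice</name></row>
                             <row><id>2</id><name>bob</name></row>
                           </data>'
              COLUMNS id   int  PATH 'id',
                      name text PATH 'name') AS t;

-- Each <row> element (the row-delimiter tag) becomes one result row,
-- and the text content of its child elements supplies the column values:
--  id | name
-- ----+-------
--   1 | alice
--   2 | bob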

The big-picture question here, though, is why expend effort on XML at all?
It seems like JSON is where it's at these days for that problem space.

There are legacy systems, and I think XML is still popular.

Regards,
Surafel
