Re: speed up full table scan using psql

From: Adrian Klaver
Subject: Re: speed up full table scan using psql
Date:
Msg-id: 8bee33fc-9b8d-94d9-6fe3-256db822f75f@aklaver.com
In reply to: Re: speed up full table scan using psql  (Lian Jiang <jiangok2006@gmail.com>)
List: pgsql-general
On 5/31/23 22:51, Lian Jiang wrote:
> The whole command is:
> 
> psql %(pg_uri)s -c %(sql)s | %(sed)s | %(pv)s | %(split)s) 2>&1 | %(tr)s
> 
> where:
> sql is "copy (select row_to_json(x_tmp_uniq) from public.mytable 
> x_tmp_uniq) to stdout"
> sed, pv, split, tr together format and split the stdout into jsonl files.

Well, that is quite the pipeline. At this point I think you need to do 
some testing on your end. First create a table that is a subset of the 
original data to make testing a little quicker. Then break the process 
down into smaller actions. Start by comparing a COPY directly to CSV 
against one using row_to_json, to see if that makes a difference. Then 
COPY directly to a file before applying the above pipeline. There are 
more ways you can slice this depending on what those steps show you.
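To make that concrete, here is a rough sketch of what the comparison 
could look like. The table name mytable_test, the row count, the /tmp 
paths, and PG_URI are just placeholders for whatever you actually use; 
the sed/pv/split/tr stage stays whatever your existing pipeline is.

    -- in psql: build a smaller test table once from the original data
    CREATE TABLE public.mytable_test AS
        SELECT * FROM public.mytable LIMIT 100000;

    # time plain CSV COPY vs. the row_to_json COPY, discarding the output
    time psql "$PG_URI" -c "copy public.mytable_test to stdout with (format csv)" > /dev/null
    time psql "$PG_URI" -c "copy (select row_to_json(x) from public.mytable_test x) to stdout" > /dev/null

    # COPY to a file first, then time only the formatting/splitting stage on that file
    psql "$PG_URI" -c "copy (select row_to_json(x) from public.mytable_test x) to stdout" > /tmp/mytable_test.out
    time <your sed | pv | split | tr pipeline> < /tmp/mytable_test.out

Comparing those timings should tell you whether the cost is in the 
server-side COPY, the row_to_json conversion, or the downstream 
formatting and splitting.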

> 
> Hope this helps.

-- 
Adrian Klaver
adrian.klaver@aklaver.com



