Re: COPY from STDIN vs file with large CSVs
| From | Ron |
|---|---|
| Subject | Re: COPY from STDIN vs file with large CSVs |
| Date | |
| Msg-id | c2e587ae-d58d-8d21-0a21-78c323867d08@gmail.com |
| In reply to | COPY from STDIN vs file with large CSVs (Wells Oliver <wells.oliver@gmail.com>) |
| Responses | Re: COPY from STDIN vs file with large CSVs |
| List | pgsql-admin |
On 1/8/20 10:54 AM, Wells Oliver wrote:

> I have a CSV that's ~30GB. Some 400m rows. Would there be a meaningful
> performance difference to run COPY from STDIN using: cat f.csv | psql
> "COPY .. FROM STDIN WITH CSV" versus just doing "COPY ... FROM 'f.csv'
> WITH CSV"?
>
> Thanks. It took about four hours to copy one and I felt that was a
> little much.

catting the file starts another process and opens a pipe. That can't be
faster than "COPY ... FROM ... WITH CSV".

pg_bulkload (which might be in your repository) is probably what you
really want.

--
Angular momentum makes the world go 'round.
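For concreteness, a minimal sketch of the invocations being compared; the database and table names ("mydb", "mytable") are hypothetical stand-ins:

```bash
# 1. Piping through cat: spawns an extra process and a pipe, and the data
#    still has to cross the client/server connection.
cat f.csv | psql -d mydb -c "COPY mytable FROM STDIN WITH (FORMAT csv)"

# 2. Server-side COPY: the backend reads the file directly, so the path
#    must exist on the database server and the role needs superuser or
#    pg_read_server_files privileges.
psql -d mydb -c "COPY mytable FROM '/path/to/f.csv' WITH (FORMAT csv)"

# 3. Client-side \copy: streams the file from the client like option 1,
#    but without the extra cat process.
psql -d mydb -c "\copy mytable FROM 'f.csv' WITH (FORMAT csv)"
```

When the file is local to the database server, option 2 skips the client-side read entirely, which is why adding cat and a pipe in front of psql can only add overhead.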