Discussion: select * and save into a text file failed


select * and save into a text file failed

From:
"Lee Wu"
Date:

Hi,

 

When I try to save the result of "select * from a_table" into a text file, one time I got:

out of memory for query result

under psql

 

Another time it got killed:

My_db=# select * from a_table;

Killed

bash-2.05b$

 

And once it caused PG to crash.

 

My PG version is:

select version();

                                                 version

----------------------------------------------------------------------------------------------------------

 PostgreSQL 8.0.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-49)

(1 row)

 

The table has 6287585 rows, and takes 2805000 bytes.

 

From the PG log, I can only find:

could not send data to client: Connection reset by peer

could not send data to client: Broken pipe

unexpected EOF on client connection

 

How can I save PG data into a text file without using pg_dump?

 

Thanks,

 

PS: I know pg_dump, but it is not an option for me.

Re: select * and save into a text file failed

From:
Bricklen Anderson
Date:
Lee Wu wrote:
 >
> How can I save PG data into text file without using pg_dump?
>
did you try

select * from table \o '/var/tmp/textfile.txt'
;


--
_______________________________

This e-mail may be privileged and/or confidential, and the sender does
not waive any related rights and obligations. Any distribution, use or
copying of this e-mail or the information it contains by other than an
intended recipient is unauthorized. If you received this e-mail in
error, please advise me (by return e-mail or otherwise) immediately.
_______________________________

Re: select * and save into a text file failed

From:
James Herbers
Date:
From a client machine, try the program Aqua Data Studio
(http://aquafold.com/); it has a nice feature for exporting to text files.

James Herbers

Lee Wu wrote:

> How can I save PG data into text file without using pg_dump?
>


Re: select * and save into a text file failed

From:
"Lee Wu"
Date:
Same:

My_db=# select * from a_table \o 'test2.dat'
My_db-# ;
Killed

-----Original Message-----
From: Bricklen Anderson [mailto:BAnderson@PresiNET.com]
Sent: Friday, June 10, 2005 9:35 AM
To: Lee Wu
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] select * and save into a text file failed

Lee Wu wrote:
 >
> How can I save PG data into text file without using pg_dump?
>
did you try

select * from table \o '/var/tmp/textfile.txt'
;



Re: select * and save into a text file failed

From:
"Lee Wu"
Date:

Even without saving to a file, it still gets killed:

 

Welcome to psql 8.0.3, the PostgreSQL interactive terminal.

 

Type:  \copyright for distribution terms

       \h for help with SQL commands

       \? for help with psql commands

       \g or terminate with semicolon to execute query

       \q to quit

 

My_db=# select * from a_table;

Killed

bash-2.05b$

 

PG log shows:

could not send data to client: Connection reset by peer

could not send data to client: Broken pipe

unexpected EOF on client connection

 

Can anyone help?

 

Thanks,

 

PS: psql is launched on the same box as the PG server.


From: Lee Wu
Sent: Friday, June 10, 2005 9:25 AM
To: pgsql-admin@postgresql.org
Subject: select * and save into a text file failed

 

[...]

Re: select * and save into a text file failed

From:
Tom Lane
Date:
"Lee Wu" <Lwu@mxlogic.com> writes:
> My_db=# select * from a_table;
> Killed

Apparently the table is bigger than psql can hold in memory?

The simplest way to do this is certainly to use pg_dump in one-table
mode.  If you're intent on using psql, you could use a cursor or
OFFSET/LIMIT to fetch part of the table at a time.

            regards, tom lane
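
A sketch of the cursor approach Tom describes, in psql; the table name, file name, and batch size are placeholders:

```sql
-- Send output to a file, then pull the table in batches through a
-- server-side cursor so psql never buffers the whole result in memory.
\o '/var/tmp/a_table.txt'
BEGIN;
DECLARE c NO SCROLL CURSOR FOR SELECT * FROM a_table;
FETCH 10000 FROM c;
-- ...repeat the FETCH until it returns no rows...
CLOSE c;
COMMIT;
\o
```

psql's \copy (or a server-side COPY ... TO run as superuser) is another way to stream a whole table to a text file without pg_dump, e.g. \copy a_table to '/var/tmp/a_table.txt'.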

Re: select * and save into a text file failed

From:
Steve Crawford
Date:
On Friday 10 June 2005 9:33 am, Lee Wu wrote:
> Even without saving to file, it is still killed:
>...
> My_db=# select * from a_table;
> Killed
>...

The previous examples don't work for me. In psql try this:
--First set the output to a file
\o 'my_output.txt'

--Now run the query
select * from myfile;

--Quit and check your results
\q

If you have a very large table you can exhaust memory on the client
side unless you are writing the data directly to a file.

Cheers,
Steve


Re: select * and save into a text file failed

From:
"Lee Wu"
Date:
That is exactly what I did:

\o a_lot_room_to_hold_my_result
select * from a_table

either
1. out of memory for query result
2. killed
3. crash PG

"If you have a very large table you can exhaust memory on the client
side unless you are writing the data directly to a file."
How, besides "\o" and pg_dump?

We have 4G of RAM and shared_buffers = 32768; it is a dedicated test box,
while the table is about 2G.

Thanks,

-----Original Message-----
From: Steve Crawford [mailto:scrawford@pinpointresearch.com]
Sent: Friday, June 10, 2005 11:00 AM
To: Lee Wu; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] select * and save into a text file failed

[...]


Re: select * and save into a text file failed

From:
"Lee Wu"
Date:
Which memory are you referring to, Tom?
shared_buffers = 32768,
and RAM is 4G while the table is about 2G.

Why does a straight "select * from" need so much memory?
In top, I did not see the process eating a lot of memory.
In vmstat, I did not see any swapping.

It is a dedicated test box, with only PG running.

Thanks, Tom.

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Friday, June 10, 2005 10:55 AM
To: Lee Wu
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] select * and save into a text file failed

"Lee Wu" <Lwu@mxlogic.com> writes:
> My_db=# select * from a_table;
> Killed

Apparently the table is bigger than psql can hold in memory?

The simplest way to do this is certainly to use pg_dump in one-table
mode.  If you're intent on using psql, you could use a cursor or
OFFSET/LIMIT to fetch part of the table at a time.

            regards, tom lane

Re: select * and save into a text file failed

From:
Tom Lane
Date:
"Lee Wu" <Lwu@mxlogic.com> writes:
> either
> 1. out of memory for query result
> 2. killed
> 3. crash PG

Cases 2 and 3 represent the kernel out-of-memory code going nuts.
Google for "OOM kill" and "memory overcommit".  IMHO you never
want Linux's default memory overcommit behavior on a production
server.

            regards, tom lane
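
Tom's point can be checked from the shell; this is Linux-specific, and the ratio value below is a judgment call, not a recommendation from this thread:

```shell
# Show the current overcommit policy: 0 = heuristic (the kernel default,
# which allows the OOM-killer behavior seen in this thread), 2 = strict.
cat /proc/sys/vm/overcommit_memory

# As root, strict accounting makes oversized allocations fail cleanly
# with ENOMEM instead of letting the OOM killer terminate processes:
#   sysctl -w vm.overcommit_memory=2
#   sysctl -w vm.overcommit_ratio=80   # percent of RAM allocatable beyond swap
```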

Re: select * and save into a text file failed

From:
Steve Crawford
Date:
On Friday 10 June 2005 10:03 am, Lee Wu wrote:
> That is exactly what I did:
>
> \o a_lot_room_to_hold_my_result
> select * from a_table
>
> either
> 1. out of memory for query result
> 2. killed
> 3. crash PG
>
> "If you have a very large table you can exhaust memory on the
> client side unless you are writing the data directly to a file."
> How besides "\o" and pg_dump?
>
> We have 4G RAM, and shared_buffers= 32768, it is a dedicate test
> box, while the table is about 2G.

Something to try (I don't know if it will work because I don't know
the exact internals of the PG libraries, but it's something I've
observed): set the output formatting to unaligned (\pset format
unaligned).

I had a situation once where selecting a few thousand rows exhausted
my RAM. The problem was that one item in a text column was ~8k long,
which meant that every other row had 8k of padding. And this was the
case for more than one column, so the pager was having to swallow gobs
of data, most of which was blank space. I ran the output to a file and
ran some tests. Switching to unaligned output dropped the size by
orders of magnitude. Depending on the nature of your data this may
help or do nothing at all.

Of course you are going to hit RAM or disk limitations on any given
machine. Cursors are there for your use and your project may require
them.

Cheers,
Steve
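
Steve's unaligned-output suggestion, combined with \o, looks roughly like this in psql (the file name is a placeholder). Note that psql still buffers the entire result client-side before writing, so this shrinks the data but may not avoid the out-of-memory failure on its own:

```sql
-- Unaligned, tuples-only output written straight to a file
\pset format unaligned
\t
\o '/var/tmp/a_table.txt'
SELECT * FROM a_table;
\o
```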