Discussion: pg_dump --compress error

pg_dump --compress error

From: yasin malli
Hi everyone.

I ran ' pg_dump --compress=5 DBNAME > ***.sql ' and then ' psql -f ***.sql -d DBNAME ',
but I get errors because of the compression. How can I restore the compressed dump file without any errors?

thanks in advance 

--
Yasin MALLI
System & Software Development Engineer

yasin@labristeknoloji.com , yasinmalli@gmail.com , yasinmalli@yahoo.com

Re: pg_dump --compress error

From: Alban Hertroys
On 31 Aug 2010, at 6:44, yasin malli wrote:

> Hi everyone.
>
> I ran ' pg_dump --compress=5 DBNAME > ***.sql ' and then ' psql -f ***.sql -d DBNAME ',
> but I get errors because of the compression. How can I restore the compressed dump file without any errors?

By using pg_restore instead of psql.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





Re: pg_dump --compress error

From: Alban Hertroys
On 31 Aug 2010, at 8:17, yasin malli wrote:

Don't reply to just me, include the list.

> if I had taken my dump file with the 'pg_dump -Ft' command, I would use 'pg_restore', but I took my dump file in the plain-old format to compress the data ( a tar-format dump has no compress feature )
> when I tried your suggestion, I got this error: pg_restore: [archiver] input file does not appear to be a valid archive

Ah right, most people use --compress in combination with the custom format (-Fc).

> I have little space on my device so I have to compress db files.
> For example, the dump file sizes I got were:
>     'pg_dump -Ft'            : 56K
>     'pg_dump --compress=5'   : 4K

Try pg_dump -Fc --compress=5, I think you'll reach comparable sizes and you'll get much more flexibility to restore your database.
Shouldn't you be using level 9 btw, if you're worried about disk space?
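A minimal sketch of that suggestion (DBNAME and the file name dump.custom are placeholders):

```shell
# Sketch of the -Fc suggestion; DBNAME and dump.custom are placeholder names.
# Dump in the custom format with compression level 5:
pg_dump -Fc --compress=5 DBNAME > dump.custom

# The custom format is what pg_restore expects:
pg_restore -d DBNAME dump.custom
```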

> I can take a dump_file but I can't restore it. Is there any other way to restore compressed data ?

Didn't you read the man page for the --compress option? You can just pipe your dump through gunzip.

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.





Re: pg_dump --compress error

From: yasin malli
I tried it and it ran without any error, but my table wasn't created, so the problem remains.
The compress level isn't important; when I checked, levels 5 and 9 gave the same result.

Unfortunately, only the plain-old dump restores correctly.
If the command contains any compress option, it won't work.

Any suggestions?

--
Yasin MALLI
System & Software Development Engineer
yasinmalli@gmail.com , yasinmalli@yahoo.com


On Mon, Aug 30, 2010 at 11:38 PM, Alban Hertroys <dalroi@solfertje.student.utwente.nl> wrote:
On 31 Aug 2010, at 8:17, yasin malli wrote:

Don't reply to just me, include the list.

> if I had taken my dump file with the 'pg_dump -Ft' command, I would use 'pg_restore', but I took my dump file in the plain-old format to compress the data ( a tar-format dump has no compress feature )
> when I tried your suggestion, I got this error: pg_restore: [archiver] input file does not appear to be a valid archive

Ah right, most people use --compress in combination with the custom format (-Fc).

> I have little space on my device so I have to compress db files.
> For example, the dump file sizes I got were:
>     'pg_dump -Ft'            : 56K
>     'pg_dump --compress=5'   : 4K

Try pg_dump -Fc --compress=5, I think you'll reach comparable sizes and you'll get much more flexibility to restore your database.
Shouldn't you be using level 9 btw, if you're worried about disk space?

> I can take a dump_file but I can't restore it. Is there any other way to restore compressed data ?

Didn't you read the man page for the --compress option? You can just pipe your dump through gunzip.

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.





Re: pg_dump --compress error

From: Tom Lane
yasin malli <yasinmalli@gmail.com> writes:
> Unfortunately, only the plain-old dump restores correctly.
> If the command contains any compress option, it won't work.

--compress is normally used as an adjunct to -Fc.
I'm not real sure what you get if you specify it without that;
maybe a compressed plain-text-script dump?  If so, you'd have
to pass it through gunzip and then to psql to do anything useful
with it.
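That pipeline can be sketched as follows; dump.sql.gz and DBNAME are placeholders, and a stand-in file is used here only to show that the compressed output is ordinary gzip data:

```shell
# Stand-in for a dump taken with 'pg_dump --compress=5 DBNAME > dump.sql.gz':
printf "CREATE TABLE t (id integer);\n" | gzip -5 > dump.sql.gz

# The restore pipeline Tom describes (DBNAME is a placeholder):
#   gunzip -c dump.sql.gz | psql -d DBNAME
# Check that the decompressed stream is a readable SQL script:
gunzip -c dump.sql.gz
```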

            regards, tom lane

Re: pg_dump --compress error

From: "Joshua D. Drake"
On Tue, 2010-08-31 at 00:50 -0700, yasin malli wrote:
> I tried it and it ran without any error, but my table wasn't created, so
> the problem remains.
> The compress level isn't important; when I checked, levels 5 and 9 gave
> the same result.
>
> Unfortunately, only the plain-old dump restores correctly.
> If the command contains any compress option, it won't work.

Compress will not work in plaintext format for restore. You need to use
-Fc and then pg_restore.

Joshua D. Drake

--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt

Re: pg_dump --compress error

From: Alvaro Herrera
Excerpts from yasin malli's message of Tue Aug 31 00:44:36 -0400 2010:
> Hi everyone.
>
> I ran ' pg_dump --compress=5 DBNAME > ***.sql ' and then ' psql -f ***.sql -d DBNAME ',
> but I get errors because of the compression. How can I restore the compressed
> dump file without any errors?

You can restore this with

zcat ***.sql | psql -d DBNAME

(or "gunzip -c ***.sql" if you don't have zcat)

--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



optimization (can I move pgsql_tmp)?

From: Ozz Nixon
Good afternoon everyone,

    We are experiencing some performance issues on a table with 7 fields and 8,800,000 rows. During some exercises, one thing I noticed is that I need to change the configuration of the system to house pgsql_tmp on a host partition instead of the drive array... that will get me a little more speed... is this controlled via a .conf file or pgAdmin?

    Optimization questions:

    When is pgsql_tmp used? (Right now as I am creating indexes by hand, I see it grow for about 10 minutes):

du -h /mnt/data/base/
5.1M    /mnt/data/base/1
5.1M    /mnt/data/base/11563
4.0G    /mnt/data/base/11564
8.9M    /mnt/data/base/16395
586M    /mnt/data/base/pgsql_tmp

    During the create index - communications in general to the drive array is "consumed".

    Before I keep experimenting and making things worse, I will ask - what indexes should I have to make this query better, and how does one debug when they find a query is taking too long?

STRUCTURE:
   quer.SQL.Add('create table '+DBTags+' (');
   quer.SQL.Add('   pagename '+SQL_TITLE+'(100) not null,');
   quer.SQL.Add('   tagword '+SQL_TITLE+'(15) not null,');
   quer.SQL.Add('   soundex2 '+SQL_TITLE+'(4) not null,');
   quer.SQL.Add('   metaphone '+SQL_TITLE+'(15) not null,');
   quer.SQL.Add('   metaphone2 '+SQL_TITLE+'(22) not null,');
   quer.SQL.Add('   carverphone '+SQL_TITLE+'(22) not null,');
   quer.SQL.Add('   instances '+SQL_INT32+' not null,');
   if SQL_NAME_PRIMARY_KEYS then quer.SQL.Add('   constraint '+DBTags+'_PK');
   quer.SQL.Add('   primary key(pagename, tagword, instances)');


WAS 18 seconds with just the primary key, so I tried:
create index tags1 on allwikitags(tagword) -- after still 18 seconds

Then I tried:
create index tags6 on allwikitags(tagword,instances desc, pagename) -- after now 32 seconds


My Query:

select pagename,tagword,instances from allwikitags
where tagword in
('journey','journeys','journeyes','journeyd','journeyed','journeyly','journeyy','journeyth','journeydom','journeying','journeyize','journeyion','journeyism','journeyized','journeyizes','journeyizing','journeyions','journeyists','journeyfulness','journeyise','journeyish','journeyist','journeyful','journeytion','journeyless','journeyable','journeyfull','journeyious','journeylike','journeyment','journeyness')
or soundex2 in ('J650')
or metaphone in ('jrny')
or metaphone2 in ('JRN')
group by pagename,tagword,instances
order by tagword,instances desc,pagename

Thanks,
Ozz

Re: optimization (can I move pgsql_tmp)?

From: Ozz Nixon
-- create index tags1 on allwikitags(tagword) - after still 18 seconds
-- create index tags6 on allwikitags(tagword,instances desc, pagename) - after now 32 seconds
-- drop index tags6
-- create index tags2 on allwikitags(instances) - after still 18 seconds
-- create index tags3 on allwikitags(soundex2)
-- create index tags4 on allwikitags(metaphone)
-- create index tags5 on allwikitags(metaphone2) - after 3 now 1 second

select pagename,tagword,instances from allwikitags
where tagword in
('journey','journeys','journeyes','journeyd','journeyed','journeyly','journeyy','journeyth','journeydom','journeying','journeyize','journeyion','journeyism','journeyized','journeyizes','journeyizing','journeyions','journeyists','journeyfulness','journeyise','journeyish','journeyist','journeyful','journeytion','journeyless','journeyable','journeyfull','journeyious','journeylike','journeyment','journeyness')
or soundex2 in ('J650')
or metaphone in ('jrny')
or metaphone2 in ('JRN')
group by pagename,tagword,instances
order by tagword,instances desc,pagename

    Giving it indexes for each of the "or" elements got the 8.8-million-row query down to 1 second. So now I just await suggestions on how one would debug this and know he needed to hang more indexes off the table.
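On the debugging question: the usual tool is EXPLAIN ANALYZE, which shows whether the planner chose a sequential scan or one of the new indexes. A sketch, assuming the allwikitags table from this thread (DBNAME is a placeholder):

```shell
# Ask PostgreSQL for the actual plan and timings of one branch of the query.
# DBNAME is a placeholder; allwikitags/metaphone2 are from the thread.
psql -d DBNAME <<'SQL'
EXPLAIN ANALYZE
SELECT pagename, tagword, instances
FROM allwikitags
WHERE metaphone2 IN ('JRN');
SQL
# A "Seq Scan on allwikitags" node means no usable index was found;
# a "Bitmap Index Scan" or "Index Scan" node means an index was picked up.
```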

Thanks,
Ozz
On Oct 5, 2010, at 3:02 PM, Ozz Nixon wrote:

> Good afternoon everyone,
>
>     We are experiencing some performance issues on a table with 7 fields and 8,800,000 rows. During some exercises, one thing I noticed is that I need to change the configuration of the system to house pgsql_tmp on a host partition instead of the drive array... that will get me a little more speed... is this controlled via a .conf file or pgAdmin?
>
>     Optimization questions:
>
>     When is pgsql_tmp used? (Right now as I am creating indexes by hand, I see it grow for about 10 minutes):
>
> du -h /mnt/data/base/
> 5.1M    /mnt/data/base/1
> 5.1M    /mnt/data/base/11563
> 4.0G    /mnt/data/base/11564
> 8.9M    /mnt/data/base/16395
> 586M    /mnt/data/base/pgsql_tmp
>
>     During the create index - communications in general to the drive array is "consumed".
>
>     Before I keep experimenting and making things worse, I will ask - what indexes should I have to make this query better, and how does one debug when they find a query is taking too long?
>
> STRUCTURE:
>   quer.SQL.Add('create table '+DBTags+' (');
>   quer.SQL.Add('   pagename '+SQL_TITLE+'(100) not null,');
>   quer.SQL.Add('   tagword '+SQL_TITLE+'(15) not null,');
>   quer.SQL.Add('   soundex2 '+SQL_TITLE+'(4) not null,');
>   quer.SQL.Add('   metaphone '+SQL_TITLE+'(15) not null,');
>   quer.SQL.Add('   metaphone2 '+SQL_TITLE+'(22) not null,');
>   quer.SQL.Add('   carverphone '+SQL_TITLE+'(22) not null,');
>   quer.SQL.Add('   instances '+SQL_INT32+' not null,');
>   if SQL_NAME_PRIMARY_KEYS then quer.SQL.Add('   constraint '+DBTags+'_PK');
>   quer.SQL.Add('   primary key(pagename, tagword, instances)');
>
>
> WAS 18 seconds with just the primary key, so I tried:
> create index tags1 on allwikitags(tagword) -- after still 18 seconds
>
> Then I tried:
> create index tags6 on allwikitags(tagword,instances desc, pagename) -- after now 32 seconds
>
>
> My Query:
>
> select pagename,tagword,instances from allwikitags
> where tagword in
> ('journey','journeys','journeyes','journeyd','journeyed','journeyly','journeyy','journeyth','journeydom','journeying','journeyize','journeyion','journeyism','journeyized','journeyizes','journeyizing','journeyions','journeyists','journeyfulness','journeyise','journeyish','journeyist','journeyful','journeytion','journeyless','journeyable','journeyfull','journeyious','journeylike','journeyment','journeyness')
> or soundex2 in ('J650')
> or metaphone in ('jrny')
> or metaphone2 in ('JRN')
> group by pagename,tagword,instances
> order by tagword,instances desc,pagename
>
> Thanks,
> Ozz


Re: optimization (can I move pgsql_tmp)?

From: bricklen
On Tue, Oct 5, 2010 at 12:02 PM, Ozz Nixon <ozznixon@gmail.com> wrote:
> Good after everyone,
>
>        We are experiencing some performance issues on a table with 7 fields and 8,800,000 rows. During some exercises, one thing I noticed is I need to change the configuration of the system to house pgsql_tmp on a host partition instead of the drive array... that will get me a little more speed... is this controlled via a .conf file or pgAdmin?
>
You can try setting temp_tablespaces in your postgresql.conf file to
use a tablespace defined over another part of your disk subsystem.
Eg. create tablespace some_tablespace location '/some/path';

temp_tablespaces = 'some_tablespace';

Then reload your postgresql.conf file.  Not sure if it's going to help
you or not, though.
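Putting that together as a sketch (the tablespace name and path are placeholders; the directory must already exist and be owned by the postgres OS user):

```shell
# 1. Create a tablespace on the host partition (placeholder name and path):
psql -d DBNAME -c "CREATE TABLESPACE temp_space LOCATION '/mnt/host_disk/pg_temp'"

# 2. In postgresql.conf, direct temp files (sorts, index builds) to it:
#      temp_tablespaces = 'temp_space'

# 3. Reload the configuration without restarting the server:
psql -d DBNAME -c "SELECT pg_reload_conf()"
```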