Thread: Happy Anniversary

Happy Anniversary

From
Peter Eisentraut
Date:
I suppose few people have remembered that today is what could be
considered the 5th anniversary of the PostgreSQL project.  Cheers for
another five years!


http://www.ca.postgresql.org/mhonarc/pgsql-hackers/1999-10/msg00552.html

-- 
Peter Eisentraut   peter_e@gmx.net   http://funkturm.homeip.net/~peter



Re: Happy Anniversary

From
Bruce Momjian
Date:
> I suppose few people have remembered that today is what could be
> considered the 5th anniversary of the PostgreSQL project.  Cheers for
> another five years!
> 
> 
> http://www.ca.postgresql.org/mhonarc/pgsql-hackers/1999-10/msg00552.html

Good catch!  Yes, you are right.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Postgresql bulk fast loader

From
Naomi Walker
Date:
Does postgresql have any sort of fast bulk loader?
--
Naomi Walker
Chief Information Officer
Eldorado Computing, Inc.
602-604-3100  ext 242 



Re: Postgresql bulk fast loader

From
mlw
Date:
Naomi Walker wrote:
> 
> Does postgresql have any sort of fast bulk loader?

It has a very cool SQL extension called COPY. Super fast.

Command:     COPY
Description: Copies data between files and tables
Syntax:
COPY [ BINARY ] table [ WITH OIDS ]
    FROM { 'filename' | stdin }
    [ [USING] DELIMITERS 'delimiter' ]
    [ WITH NULL AS 'null string' ]

COPY [ BINARY ] table [ WITH OIDS ]
    TO { 'filename' | stdout }
    [ [USING] DELIMITERS 'delimiter' ]
    [ WITH NULL AS 'null string' ]
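For example, a bulk load from a pipe-delimited file might look like this
(a sketch only; the table name, file path, and delimiter are hypothetical):

-- Load a pipe-delimited text file into an existing table;
-- empty strings in the file are treated as NULL.
COPY mytable FROM '/tmp/mytable.dat' USING DELIMITERS '|' WITH NULL AS '';

-- Dump the table back out in the same format.
COPY mytable TO '/tmp/mytable_dump.dat' USING DELIMITERS '|' WITH NULL AS '';

Note that a 'filename' is read and written on the server side, so the
backend process must be able to access it.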


Re: Postgresql bulk fast loader

From
Bruce Momjian
Date:
> Does postgresql have any sort of fast bulk loader?

COPY command.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: Postgresql bulk fast loader

From
Mark Volpe
Date:
Avoid doing this with indexes on the table, though. I learned the hard way!
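One common workaround, if you can afford it, is to drop the indexes before
the COPY and recreate them afterwards, roughly like this (the index, table,
and column names are made up for illustration):

-- Drop the index so the load doesn't have to maintain it row by row
DROP INDEX mytable_key_idx;

-- Bulk-load the data
COPY mytable FROM '/tmp/mytable.dat' USING DELIMITERS '|';

-- Rebuild the index once, after all rows are in
CREATE INDEX mytable_key_idx ON mytable (key_column);

Building the index once at the end is usually much cheaper than updating it
for every inserted row.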

Mark

mlw wrote:
> 
> Naomi Walker wrote:
> >
> > Does postgresql have any sort of fast bulk loader?
> 
> It has a very cool SQL extension called COPY. Super fast.
> 
> Command:     COPY
> Description: Copies data between files and tables
> Syntax:
> COPY [ BINARY ] table [ WITH OIDS ]
>     FROM { 'filename' | stdin }
>     [ [USING] DELIMITERS 'delimiter' ]
>     [ WITH NULL AS 'null string' ]
> COPY [ BINARY ] table [ WITH OIDS ]
>     TO { 'filename' | stdout }
>     [ [USING] DELIMITERS 'delimiter' ]
>     [ WITH NULL AS 'null string' ]
> 


Re: Re: Postgresql bulk fast loader

From
Guy Fraser
Date:
Mark Volpe wrote:
> 
> Avoid doing this with indexes on the table, though. I learned the hard way!
> 
> Mark
> 
> mlw wrote:
> >
> > Naomi Walker wrote:
> > >
> > > Does postgresql have any sort of fast bulk loader?
> >
> > It has a very cool SQL extension called COPY. Super fast.
> >
> > Command:     COPY
> > Description: Copies data between files and tables
> > Syntax:
> > COPY [ BINARY ] table [ WITH OIDS ]
> >     FROM { 'filename' | stdin }
> >     [ [USING] DELIMITERS 'delimiter' ]
> >     [ WITH NULL AS 'null string' ]
> > COPY [ BINARY ] table [ WITH OIDS ]
> >     TO { 'filename' | stdout }
> >     [ [USING] DELIMITERS 'delimiter' ]
> >     [ WITH NULL AS 'null string' ]
> >

Hi

On a daily basis I have an automated procedure that bulk copies
information into a "holding" table. I scan for duplicates and put the
OID for the first unique record into a temporary table. Using the OID
and other information I do an INSERT with SELECT to move the unique
data into its appropriate table. Then I remove the unique records and
move the duplicates into a debugging table. After that I remove the
remaining records and drop the temporary tables. Once this is done I
vacuum the tables and regenerate the indexes.
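A rough SQL sketch of that kind of workflow (the table and column names
here are invented for illustration, not Guy's actual schema, and it relies
on the tables having OIDs, which was the default at the time):

-- 1. Bulk-load the raw feed into the holding table
COPY holding FROM '/tmp/daily_feed.dat' USING DELIMITERS '|';

-- 2. Remember the OID of the first row for each unique key
SELECT min(oid) AS keep_oid, record_key
INTO TEMP unique_rows
FROM holding
GROUP BY record_key;

-- 3. Move the unique rows into the real table
INSERT INTO main_table (record_key, payload)
SELECT h.record_key, h.payload
FROM holding h, unique_rows u
WHERE h.oid = u.keep_oid;

-- 4. Remove the unique rows, then set the leftover duplicates aside
DELETE FROM holding WHERE oid IN (SELECT keep_oid FROM unique_rows);
INSERT INTO debug_duplicates SELECT * FROM holding;

-- 5. Clear the holding table, drop the temp table, then vacuum and reindex
DELETE FROM holding;
DROP TABLE unique_rows;
VACUUM ANALYZE main_table;
REINDEX TABLE main_table;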

This sounds complicated, but by doing things in quick, simple transactions
the database is able to run continuously without disruption. I am able
to import 30+ MB of data every day with only a small disruption when
updating the summary tables.

Guy Fraser

-- 
There is a fine line between genius and lunacy, fear not, walk the
line with pride. Not all things will end up as you wanted, but you
will certainly discover things the meek and timid will miss out on.