2010/11/1 Divakar Singh <dpsmails@yahoo.com>:
> Hi,
> I am trying to tune my libpq program for insert performance.
> When I tried inserting 1M rows into a table with a Primary Key, it took
> almost 62 seconds.
> After adding a composite index of 2 columns, the performance degrades to 125
> seconds.
> I am using COPY to insert all data in 1 transaction.
>
> the table definition is
>
> CREATE TABLE ABC
> (
> event integer,
> innodeid character varying(80),
> innodename character varying(80),
> sourceid character varying(300),
> intime timestamp(3) without time zone,
> outnodeid character varying(80),
> outnodename character varying(80),
> destinationid character varying(300),
> outtime timestamp(3) without time zone,
> bytes integer,
> cdrs integer,
> tableindex integer NOT NULL,
> noofsubfilesinfile integer,
> recordsequenceintegerlist character varying(1000),
> CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)
> )
>
> the index definition is
>
>
> CREATE INDEX "PK_AT2"
> ON ABC
> USING btree
> (event, tableindex)
> TABLESPACE sample;
Indexing the same column twice is useless: tableindex is already indexed
by the primary key, so the composite index duplicates it. (Perhaps
moving your PK to the tablespace 'sample' would be good too?)
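For example, a single-column index on event avoids repeating the
primary-key column (a sketch only; whether it serves your queries
depends on your workload):

```sql
-- Hypothetical alternative: index only the non-PK column.
-- tableindex is already indexed via tableindex_pkey.
CREATE INDEX "PK_AT2"
    ON ABC
    USING btree (event)
    TABLESPACE sample;
```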
>
> Any tip to increase the insert performance in this case?
If you create or truncate the table before loading it, you should
create the index after the COPY completes, not before: maintaining the
index row by row during the load is what costs you the extra time.
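A minimal sketch of that ordering (the file path and CSV format here
are placeholders for whatever your program actually feeds to COPY):

```sql
-- Sketch: bulk-load first, build the secondary index afterwards.
TRUNCATE ABC;
COPY ABC FROM '/path/to/data.csv' WITH (FORMAT csv);  -- hypothetical source file
CREATE INDEX "PK_AT2"
    ON ABC
    USING btree (event, tableindex)
    TABLESPACE sample;
```

Building the index once over the loaded data is a single sort instead
of a million incremental btree insertions.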
>
> It would also be helpful if someone can send comprehensive libpq programming
> guide for PG 9.x. Online doc of libpq is not much helpful for a newbie like
> me.
>
>
> Best Regards,
> Divakar
>
>
--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support