Discussion: full table...


full table...

From
Felson
Date:
I have a table that stores a HUGE volume of data every
day. I am now running into a problem where, when I try
to insert data, the remote connection times out
because it takes too long... (1 minute)

The solution I was going to try is to break the table
up into one per upload site, with (tablename_siteid)
as the table name. This could result in there possibly
being 1000 or more tables in about a year's time. Can
this cause me any problems? If so, what would be a
better answer?

current structure:

Table "channeldata"
 Attribute  |   Type    |      Modifier
------------+-----------+-----------------------
 id         | integer   | not null default nextval
 cd_id      | integer   | default 0
 s_id       | integer   | default 0
 units      | smallint  | default 0
 datareal   | float8    |
 dataalt    | float8    |
 dataoffset | float8    | default 0
 tstamp     | timestamp | default now()


proposed structure:
Table "channeldata_[s_id]"
 Attribute  |   Type    |      Modifier
------------+-----------+-----------------------
 id         | integer   | not null default nextval
 cd_id      | integer   | default 0
 units      | smallint  | default 0
 datareal   | float8    |
 dataalt    | float8    |
 dataoffset | float8    | default 0
 tstamp     | timestamp | default now()



__________________________________________________
Do You Yahoo!?
HotJobs - Search Thousands of New Jobs
http://www.hotjobs.com

Re: full table...

From
Tom Lane
Date:
Felson <felson123@yahoo.com> writes:
> I have a table that stores a HUGE volume of data every
> day. I am now running into a problem where when I try
> to insert data, the remote connection times out
> because it takes too long... (1 minute)

How much is HUGE?  I'm having a really hard time believing that a simple
insert could take > 1min regardless of table size ... are there perhaps
triggers or rules or foreign-key references on this table that could be
eating the time?

            regards, tom lane

Re: full table...

From
Felson
Date:
Actually, there are 7 inserts that take place on
that table before it can talk to the unit that is
broadcasting to me again...
There is a unique constraint on (tstamp, cd_id) but
removing it didn't fix the speed issue...
I am at about 3,000,000 rows, give or take a few
thousand. My first take is that I agree with you that
3 million rows should not be an issue at insert
time, but at this point I have no clue what else it
can be... The rest of the database responds just fine,
all except this table. I also did a VACUUM
ANALYZE on the table in hopes that it would help...

On my original fix, is there any disadvantage to that
many tables? Other than \d becoming almost useless?

--- Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Felson <felson123@yahoo.com> writes:
> > I have a table that stores a HUGE volume of data every
> > day. I am now running into a problem where, when I try
> > to insert data, the remote connection times out
> > because it takes too long... (1 minute)
>
> How much is HUGE?  I'm having a really hard time believing that a simple
> insert could take > 1min regardless of table size ... are there perhaps
> triggers or rules or foreign-key references on this table that could be
> eating the time?
>
>             regards, tom lane



Re: full table...

From
Tom Lane
Date:
Felson <felson123@yahoo.com> writes:
> Actually, there are 7 inserts that take place on
> that table before it can talk to the unit that is
> broadcasting to me again...
> There is a unique constraint on (tstamp, cd_id) but
> removing it didn't fix the speed issue...
> I am at about 3,000,000 rows, give or take a few
> thousand.

Three million rows is not "huge", it's more in the
barely-enough-to-notice class.  There is no way that seven inserts should
take even a second.  I've got to think that you're firing triggers or
rules that are chewing the time.

Which version did you say you are using?  7.2's psql will list triggers
and rules when you say "\d table", but I'm not sure earlier releases will.

            regards, tom lane

Re: full table...

From
Felson
Date:
I am on 7.0. I am planning an upgrade at the end of the
week, as I am getting a new server to put it on. As for
rules, there are a number of indexes, and 2 foreign key
rules.
Here are all the indexes and triggers on the table, as
per pg_dump -Dst:

CREATE UNIQUE INDEX "channeldata_id_key" ON "channeldata"
    USING btree ("id" "int4_ops");
CREATE UNIQUE INDEX "channeldata_unique_ts_cd_id_key" ON "channeldata"
    USING btree ("cd_id" "int4_ops", "tstamp" "timestamp_ops");
CREATE INDEX "channeldata_cd_id_index" ON "channeldata"
    USING btree ("cd_id" "int4_ops");
CREATE INDEX "channeldata_tstamp_index" ON "channeldata"
    USING btree ("tstamp" "timestamp_ops");
CREATE INDEX "channeldata_s_id_index" ON "channeldata"
    USING btree ("s_id" "int4_ops");
CREATE CONSTRAINT TRIGGER "channeldata_cd_id_key"
    AFTER INSERT OR UPDATE ON "channeldata"
    NOT DEFERRABLE INITIALLY IMMEDIATE
    FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_check_ins"
    ('channeldata_cd_id_key', 'channeldata', 'channel',
     'UNSPECIFIED', 'cd_id', 'id');
CREATE CONSTRAINT TRIGGER "channeldata_s_id_key"
    AFTER INSERT OR UPDATE ON "channeldata"
    NOT DEFERRABLE INITIALLY IMMEDIATE
    FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_check_ins"
    ('channeldata_s_id_key', 'channeldata', 'site',
     'UNSPECIFIED', 's_id', 'id');

I tried to drop the 2 triggers, but it told me they
didn't exist... If I select from pg_trigger I can
see them in there though...
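[For reference, a sketch of inspecting and dropping those triggers; the table and trigger names are taken from the pg_dump output above, and the syntax shown is generic PostgreSQL, so details may differ on a 7.0 server:]

```sql
-- Look the triggers up in the system catalog (pg_trigger):
SELECT tgname
FROM pg_trigger
WHERE tgrelid = (SELECT oid FROM pg_class WHERE relname = 'channeldata');

-- Drop them using the exact names pg_dump showed:
DROP TRIGGER "channeldata_cd_id_key" ON "channeldata";
DROP TRIGGER "channeldata_s_id_key" ON "channeldata";
```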

I guess one other point that I should mention is that
the computer is 500 MHz, 256MB RAM, and an IDE hard
drive... I will have a real server this week or next,
but I need to be collecting that data in the
meantime...

--- Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Felson <felson123@yahoo.com> writes:
> > Actually, there are 7 inserts that take place on
> > that table before it can talk to the unit that is
> > broadcasting to me again...
> > There is a unique constraint on (tstamp, cd_id) but
> > removing it didn't fix the speed issue...
> > I am at about 3,000,000 rows, give or take a few
> > thousand.
>
> Three million rows is not "huge", it's more in the
> barely-enough-to-notice class.  There is no way that seven inserts should
> take even a second.  I've got to think that you're firing triggers or
> rules that are chewing the time.
>
> Which version did you say you are using?  7.2's psql will list triggers
> and rules when you say "\d table", but I'm not sure earlier releases will.
>
>             regards, tom lane



Re: full table...

From
Tom Lane
Date:
Felson <felson123@yahoo.com> writes:
> I am on 7.0. I am planning an upgrade at the end of the
> week, as I am getting a new server to put it on. As for
> rules, there are a number of indexes, and 2 foreign key
> rules.

Ah-hah.  I bet your problem is the foreign key rules.  Make sure you
have indexes on both ends of each foreign-key reference (ie, both the
referencing and referenced columns need indexes).  Also, vacuum analyze
both tables.  Then start a fresh backend and see if it's any better.
(You need a fresh backend because the plans for foreign-key check
queries are cached.)
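[Concretely, that advice could be applied like this; the index names here are illustrative, and the referenced tables "channel" and "site" come from the pg_dump output earlier in the thread:]

```sql
-- The referencing columns (channeldata.cd_id, channeldata.s_id) already
-- have indexes; make sure the referenced columns are indexed too.
CREATE UNIQUE INDEX "channel_id_key" ON "channel" ("id");
CREATE UNIQUE INDEX "site_id_key" ON "site" ("id");

-- Refresh planner statistics on both referenced tables.
VACUUM ANALYZE "channel";
VACUUM ANALYZE "site";
```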

            regards, tom lane

Re: full table...

From
Felson
Date:
Thanks for all your help Tom.
Turns out this thread really did belong on the
"novice" list... It turns out that I hadn't vacuumed
the two tables that this one was pointing at...
After I vacuumed the other 2 tables, I get about one
7-insert sample every 2-3 seconds, and that includes
the time it takes the unit to send me the data over
CDPD... Man am I glad I posted before I tried my fix!
That would have been a pain in the !@*&^ to
administer...

--- Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Felson <felson123@yahoo.com> writes:
> > I am on 7.0. I am planning an upgrade at the end of the
> > week, as I am getting a new server to put it on. As for
> > rules, there are a number of indexes, and 2 foreign key
> > rules.
>
> Ah-hah.  I bet your problem is the foreign key rules.  Make sure you
> have indexes on both ends of each foreign-key reference (ie, both the
> referencing and referenced columns need indexes).  Also, vacuum analyze
> both tables.  Then start a fresh backend and see if it's any better.
> (You need a fresh backend because the plans for foreign-key check
> queries are cached.)
>
>             regards, tom lane

