Discussion: Cluster Database


Cluster Database

From
"Al-Karim Bhamani (LCL)"
Date:
Hi


Can PostgreSQL run in a clustered environment
the way Oracle does with Parallel Server or RAC in 9i?

Thanks.

Query performance issue - too many tables?

From
"Marc Mitchell"
Date:
I am having a problem with the below SQL taking an extreme amount of time
to run.  The problem is that the explain looks great with all index scans.
But the query itself takes minutes to run.  The query contains 11 tables.
We've found that by dropping any one table, performance reverts to being
nearly instantaneous.

Here is my SQL:

SELECT ..... (20 or so simple columns from all tables)
FROM
    trip TRP,
    power_unit PWU,
    driver DRI,
    trailer TRL,
    trip_stop STP,
    transloading TXL,
    freight_group FGP,
    customer_order ORD,
    place PLC1,
    geo_location LOC1,
    place PLC2
WHERE
    TRP.trp_uid = '51972' AND
    TRP.pwu_uid=PWU.pwu_uid AND
    TRP.main_dri_uid=DRI.dri_uid AND
    TRP.trl_uid=TRL.trl_uid AND
    TRP.trp_uid=STP.trp_uid AND
    STP.stp_uid=TXL.stp_uid AND
    TXL.fgp_uid=FGP.fgp_uid AND
    FGP.ord_uid=ORD.ord_uid AND
    FGP.ship_plc_uid = PLC1.plc_uid AND
    PLC1.loc_uid = LOC1.loc_uid AND
    FGP.cons_plc_uid = PLC2.plc_uid;

Here is the EXPLAIN:

Nested Loop  (cost=0.00..1404.92 rows=8 width=552)
  ->  Nested Loop  (cost=0.00..1381.54 rows=8 width=548)
        ->  Nested Loop  (cost=0.00..1351.34 rows=8 width=490)
              ->  Nested Loop  (cost=0.00..1322.79 rows=8 width=486)
                    ->  Nested Loop  (cost=0.00..1292.60 rows=8 width=424)
                          ->  Nested Loop  (cost=0.00..1256.01 rows=12 width=342)
                                ->  Nested Loop  (cost=0.00..18.06 rows=5 width=314)
                                      ->  Nested Loop  (cost=0.00..14.74 rows=1 width=306)
                                            ->  Nested Loop  (cost=0.00..10.76 rows=1 width=248)
                                                  ->  Nested Loop  (cost=0.00..6.87 rows=1 width=190)
                                                        ->  Index Scan using trip_pkey on trip trp  (cost=0.00..3.02 rows=1 width=24)
                                                        ->  Index Scan using driver_pkey on driver dri  (cost=0.00..3.84 rows=1 width=166)
                                                  ->  Index Scan using power_unit_pkey on power_unit pwu  (cost=0.00..3.88 rows=1 width=58)
                                            ->  Index Scan using trailer_pkey on trailer trl  (cost=0.00..3.97 rows=1 width=58)
                                      ->  Index Scan using stp_trp_uid on trip_stop stp  (cost=0.00..3.17 rows=12 width=8)
                                ->  Index Scan using txl_stp_uid on transloading txl  (cost=0.00..253.05 rows=296 width=28)
                          ->  Index Scan using freight_group_pkey on freight_group fgp  (cost=0.00..3.01 rows=1 width=82)
                    ->  Index Scan using place_pkey on place plc1  (cost=0.00..3.90 rows=1 width=62)
              ->  Index Scan using geo_location_pkey on geo_location loc1  (cost=0.00..3.68 rows=1 width=4)
        ->  Index Scan using place_pkey on place plc2  (cost=0.00..3.90 rows=1 width=58)
  ->  Index Scan using customer_order_pkey on customer_order ord  (cost=0.00..3.01 rows=1 width=4)

EXPLAIN shows row estimates, but just to state it plainly: a few tables have
~300,000 rows and the rest are well under 100,000.

We are running:
PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.96

And we do a 'vacuum analyze' nightly.

The postmaster log does indicate Edge Combination Crossover, but nothing else.

To us, the key issue is that eliminating any one table, bringing the query
down to 10 tables, makes things run fine.  At 11 tables, EXPLAIN would seem
to indicate things are still good, but actual query time is really bad.  Any
help would be much appreciated.

Marc Mitchell - Senior Application Architect
Enterprise Information Solutions, Inc.
Downers Grove, IL 60515
marcm@eisolution.com



Re: Query performance issue - too many tables?

From
Robert Treat
Date:
This is really just a standard "can you post explain analyze output"
response, though I am wondering if it might help you to use explicit joins
on some of these tables.

Robert Treat

On Thu, 2002-11-21 at 11:59, Marc Mitchell wrote:
> I am having a problem with the below SQL taking an extreme amount of time
> to run.  The problem is that the explain looks great with all index scans.
> But the query itself takes minutes to run.  The query contains 11 tables.
> We've found that by dropping any one table, performance reverts to being
> nearly instantaneous.
> [...]



Re: Query performance issue - too many tables?

From
Tom Lane
Date:
"Marc Mitchell" <marcm@eisolution.com> writes:
> I am having a problem with the below SQL taking an extreme amount of time
> to run.  The problem is that the explain looks great with all index scans.
> But the query itself takes minutes to run.  The query contains 11 tables.
> We've found that by dropping any one table, performance reverts to being
> nearly instantaneous.

11 tables is the default GEQO threshold, so I'm wondering if the GEQO
planner is missing the best plan.  It's hard to tell much though without
seeing plans for *both* queries you are comparing.  EXPLAIN ANALYZE
output would be much more useful than just EXPLAIN, too.
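
For reference, EXPLAIN ANALYZE actually runs the statement and reports real
row counts and times next to the estimates; a cut-down, two-table sketch of
the form, using tables from the query above:

EXPLAIN ANALYZE
SELECT STP.stp_uid
FROM trip TRP, trip_stop STP
WHERE TRP.trp_uid = '51972'
  AND TRP.trp_uid = STP.trp_uid;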

            regards, tom lane

Re: Query performance issue - too many tables?

From
"Marc Mitchell"
Date:
Please excuse the attachment, but these EXPLAIN ANALYSE outputs were getting
so wide that it was hard to fight email word-wrap.

The attached output is from 3 consecutive EXPLAIN ANALYSEs of the exact
same query.  Each has a merge join floating somewhere within the query.
This has to be the culprit in terms of performance, as this should be a
straightforward (albeit lengthy) step ladder keyed join query.  The
question then becomes why the planner chose to do a merge join.

Based on Tom's comment about GEQO, we then did a "set geqo to false" and
ran the query again and got great results.  They too are included in the
log.

So, is GEQO broken or just misconfigured on our box?  If the latter, what
is the proper config?  We've made no changes from the default settings.  If
the former, can I simply shut it off?  And does the number of times this
comes into play equate to the number of times I see the debug message appear
in the postmaster log?

Marc

----- Original Message -----
From: "Tom Lane" <tgl@sss.pgh.pa.us>
To: "Marc Mitchell" <marcm@eisolution.com>
Cc: <pgsql-admin@postgresql.org>
Sent: Thursday, November 21, 2002 1:45 PM
Subject: Re: [ADMIN] Query performance issue - too many tables?


> "Marc Mitchell" <marcm@eisolution.com> writes:
> > I am having a problem with the below SQL taking an extreme amount of
time
> > to run.  The problem is that the explain looks great with all index
scans.
> > But the query itself takes minutes to run.  The query contains 11
tables.
> > We've found that by dropping any one table, performance reverts to
being
> > nearly instantaneous.
>
> 11 tables is the default GEQO threshold, so I'm wondering if the GEQO
> planner is missing the best plan.  It's hard to tell much though without
> seeing plans for *both* queries you are comparing.  EXPLAIN ANALYZE
> output would be much more useful than just EXPLAIN, too.
>
> regards, tom lane

Attachments

Re: Query performance issue - too many tables?

From
Tom Lane
Date:
> The attached output is from 3 consecutive EXPLAIN ANALYSEs of the exact
> same query.  Each has a merge join floating somewhere within the query.
> This has to be the culprit in terms of performance as this should be a
> straightforward (albeit lengthy) step ladder keyed join query.

It's not the merge join's fault; it's a question of a poorly chosen join
order.  I'm kinda surprised that GEQO didn't manage to find a better
one, but your example shows that its odds of doing so aren't very good
in this example.  (I wonder whether GEQO shouldn't be augmented with
some heuristics, so that it pays some attention to which relations have
WHERE-clause links to which other ones, or restriction clauses that
would make them useful to visit first.  Right now I think it's a pretty
blind search...)

> So, is GEQO broken or just misconfigured on our box?  If the latter, what
> is the proper config?  We've made no changes from the default settings.  If
> the former, can I simply shut it off?  And does the number of times this
> comes into play equate to the number of times I see the debug message appear
> in the postmaster log?

I wouldn't shut it off.  I would suggest raising the geqo_threshold a
little bit, if you do a lot of 11-table joins, and you don't find the
planning time unreasonable at 11 tables.
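
For example (session-level; the same settings can also go in postgresql.conf,
and the value 12 is only illustrative):

SET geqo_threshold TO 12;   -- with 12, an 11-table join uses the regular exhaustive planner
-- or, as tried earlier in the thread:
SET geqo TO off;            -- disable the genetic optimizer entirely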

Another possibility is to use explicit JOIN syntax to constrain the join
order into a good one; that would save you planning time as well.  See
the manual.
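
For illustration, a sketch of the same query written with explicit joins in
roughly the order of the good plan (in 7.2 the planner follows the explicit
JOIN order literally; the select list below is just a stand-in for the
original 20-odd columns):

SELECT TRP.trp_uid                              -- stand-in for the original column list
FROM trip TRP
    JOIN power_unit PWU     ON TRP.pwu_uid = PWU.pwu_uid
    JOIN driver DRI         ON TRP.main_dri_uid = DRI.dri_uid
    JOIN trailer TRL        ON TRP.trl_uid = TRL.trl_uid
    JOIN trip_stop STP      ON TRP.trp_uid = STP.trp_uid
    JOIN transloading TXL   ON STP.stp_uid = TXL.stp_uid
    JOIN freight_group FGP  ON TXL.fgp_uid = FGP.fgp_uid
    JOIN customer_order ORD ON FGP.ord_uid = ORD.ord_uid
    JOIN place PLC1         ON FGP.ship_plc_uid = PLC1.plc_uid
    JOIN geo_location LOC1  ON PLC1.loc_uid = LOC1.loc_uid
    JOIN place PLC2         ON FGP.cons_plc_uid = PLC2.plc_uid
WHERE TRP.trp_uid = '51972';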

            regards, tom lane

Re: Cluster Database

From
Chris Miles
Date:
On Thu, Nov 21, 2002 at 11:17:41AM -0500, Al-Karim Bhamani (LCL) wrote:
> Can PostgreSQL run in a clustered environment
> the way Oracle does with Parallel Server or RAC in 9i?

It depends on what you mean by 'clustered'.  We run PostgreSQL in
an HA environment (which some might call a 'cluster') with
two servers, each running its own PostgreSQL instance,
but with the ability to automatically fail over to its
partner using Heartbeat (from linux-ha.org).

The data lives on NFS servers and so is available to both
hosts.

Consider this a "poor man's HA cluster", altho it works
reasonably well, and all software elements are free.

Cheers,
CM

--
Chris Miles
chris_pg002@psychofx.com

Application data Logging

From
Jesus Sandoval
Date:
Hi, I will do this programmatically, but I wanted to post this question just to
see whether anybody has any suggestions, because I am new to server-side
functions, rules, and trigger programming.

I want to log any data change in an application, for audit reasons. I created a
table for that purpose; the table has the following fields:

id          SERIAL                          -- Primary key
tablename   VARCHAR(40)                     -- Table name that changed
fieldname   VARCHAR(40)                     -- Field name that changed in the table
user        VARCHAR(16)                     -- User that made the change
timedate    TIMESTAMP WITHOUT TIME ZONE     -- Timestamp of the change
keyvalue    TEXT                            -- Key value of the record that changed
oldcontent  TEXT                            -- The field's content before the change
newcontent  TEXT                            -- The field's new content
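
Spelled out as DDL, that might look like the following sketch (the table name
datalog is made up here, and the user column is renamed to username because
USER is a reserved word in SQL):

CREATE TABLE datalog (
    id          SERIAL PRIMARY KEY,              -- primary key
    tablename   VARCHAR(40),                     -- table that changed
    fieldname   VARCHAR(40),                     -- column that changed
    username    VARCHAR(16),                     -- application user that made the change
    timedate    TIMESTAMP WITHOUT TIME ZONE
                    DEFAULT now(),               -- when the change happened (default is optional)
    keyvalue    TEXT,                            -- key value of the changed record
    oldcontent  TEXT,                            -- value before the change
    newcontent  TEXT                             -- value after the change
);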

I'm programming in Java and using JDBC.

Notes:
1) The program knows the user id that made the change (Postgres is not aware of
this user id, because a single user connects from the application; after the
connection to PostgreSQL, the application validates the user id against a
PostgreSQL table and, if it is OK, continues).
2) In order to know which record was updated (or inserted or deleted), the key
value is saved in the log record.
3) Maybe newcontent is redundant, because I can check that value in the actual
table, so maybe it can be dropped from this table.

As I said, I plan to do this logging from the program, but I think it could be
done with triggers or rules. I don't know whether that is possible, though, or
how to do the following (a rough trigger sketch follows the list):

1) How to pass the user id from the application to Postgres.
2) How to save, in the log table, the name of the table being modified, and
likewise the field name.
3) On one update there may be 2 or more fields (columns) changing, so I need to
store 2 or more records in the log table. How can I do that?
4) How to get the key value of each changed record. One query can change many
records, each with many changes.
5) Which is the best option: triggers, rules, or application logic (or none of
them)?
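
For a feel of the trigger route, here is a minimal PL/pgSQL sketch in 7.2-era
syntax. The invoice table and its columns are hypothetical, the log table is
the datalog sketch above, and it records the database user via current_user,
so question 1 (getting the application-level user into the trigger) remains
open; a fully generic trigger that discovers table and column names by itself
is awkward in PL/pgSQL, so this version is per table and per column:

-- Hypothetical data table, used only for this example
CREATE TABLE invoice (
    inv_id  INTEGER PRIMARY KEY,
    amount  NUMERIC(12,2)
);

-- PL/pgSQL must be installed in the database first, e.g.:  createlang plpgsql mydb
-- (in 7.2 a trigger function returns "opaque" and its body is a quoted string)
CREATE FUNCTION log_invoice_change() RETURNS opaque AS '
BEGIN
    -- One datalog row per audited column; only "amount" is audited here, so
    -- auditing more columns means one IF/INSERT block per column (question 3).
    -- NULLs would need extra handling in the comparison.
    IF OLD.amount <> NEW.amount THEN
        INSERT INTO datalog (tablename, fieldname, username, timedate,
                             keyvalue, oldcontent, newcontent)
        VALUES (''invoice'', ''amount'', current_user, now(),
                OLD.inv_id::text, OLD.amount::text, NEW.amount::text);
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

-- INSERT and DELETE would need similar triggers (using only NEW resp. OLD)
CREATE TRIGGER invoice_audit
    AFTER UPDATE ON invoice
    FOR EACH ROW EXECUTE PROCEDURE log_invoice_change();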

Thanks

Jesus Sandoval



Re: Applictaion data Logging

От
Harald Krake
Дата:
Hi!

When it comes to portability, the right place to implement such
auditing is in the OR-mapper.
Most commercial mappers already provide such a feature.
However, if you don't like "black-boxed" third-party
software, writing your own mapper isn't a big deal.
It's worth the effort!

Some hints:
- writing a mapper usually ends up with methods like
  MySpecialDbObject.save(), insert(), update(), or the like.
- your mapper should provide a method to turn on/off logging
  on a per class basis.
- one solution to the auditing problem is an extra "logging table"
  per "data table". The logging table holds _all_ the columns of the
  data table _plus_ a serial number (incremented each time the
  corresponding data tuple is modified) and things like userid/name,
  timestamp and so on.
- you should store the serial-number in the data-table too.
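
For concreteness, the table layout those hints describe might look roughly
like this (table and column names are made up):

-- Hypothetical data table: the mapper bumps "version" on every modification
CREATE TABLE customer (
    id       INTEGER PRIMARY KEY,   -- object id handed out by the mapper
    version  INTEGER NOT NULL,      -- the "serial number" described above
    name     TEXT,
    address  TEXT
);

-- Matching logging table: all columns of the data table, plus who/when
CREATE TABLE customer_log (
    id          INTEGER NOT NULL,
    version     INTEGER NOT NULL,
    name        TEXT,
    address     TEXT,
    changed_by  TEXT,               -- userid/name
    changed_at  TIMESTAMP,          -- timestamp of the change
    PRIMARY KEY (id, version)
);

On each save the mapper would increment customer.version and insert the full
row, plus user and timestamp, into customer_log.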

At least, this is how we did it and it works fine.
If you don't care about portability, Postgres probably provides some
auditing support of its own, but I don't know.
Anyway, you should either use a DBMS-inherent feature _or_ do it
in your application. Mixing both levels is not a good idea, imho.

Hope it helps,
Harald.


Re: Applictaion data Logging

От
Jesus Sandoval
Дата:
Harald Krake wrote:

> When it comes to portability, the right place to implement such
> auditing is in the OR-mapper.
> [...]

Thanks for your answer.

I don't know what an OR-mapper is; I tried to find some information on the
internet with www.google.com, but had no luck.

Anyway, from the rest of your answer I can figure out what a mapper is
and how to implement such a feature.

I have some thoughts that I want to share for your opinion:

1) I already have doInsert(), doUpdate() and doDelete() methods in my
classes, so that is where I have to do the logging (the mapper action, I
think).

2) Because the logging action is in these methods, I'm not sure how
convenient it is to turn data logging on/off, but it can be done anyway.

3) In my design of data logging, I have only one "logging table" for the
whole application, but this logging table has to have the table name and the
column name of whatever is being changed, inserted, or deleted, like the
following:

id          SERIAL                          -- Primary key
tablename   VARCHAR(40)                     -- Table name that changed
fieldname   VARCHAR(40)                     -- Field name that changed in the table
user        VARCHAR(16)                     -- User that made the change
timedate    TIMESTAMP WITHOUT TIME ZONE     -- Timestamp of the change
keyvalue    TEXT                            -- Key value of the record that changed
oldcontent  TEXT                            -- The field's content before the change
newcontent  TEXT                            -- The field's new content

I'm not sure whether the primary key (id SERIAL) is used in the way you
suggested the serial number be used?

4) What is the utility of storing the serial number in the data table?

I know that my design may be more useful in my application and yours in
your application, but I would like to hear your comments.

Another design I'm aware of: every time somebody updated the tuple, a new
tuple was written to the table with a timestamp, so if I wanted to know
the current value I just got the record with the greatest timestamp, and if
I wanted to see the history I could look at all the tuples with the same key
value (not the primary key, because in this design the primary key was
always a SERIAL value).
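
For concreteness, a sketch of that versioned-rows design with made-up names
(customer_hist, keyval):

-- Hypothetical versioned table in that style: every change INSERTs a new row
CREATE TABLE customer_hist (
    id       SERIAL PRIMARY KEY,   -- surrogate key only
    keyval   INTEGER,              -- the logical record key shared by all versions
    timedate TIMESTAMP,            -- when this version was written
    name     TEXT
);

-- current value of logical record 42 = newest row for that key
SELECT * FROM customer_hist
WHERE keyval = 42
ORDER BY timedate DESC
LIMIT 1;

-- full history of the same record
SELECT * FROM customer_hist
WHERE keyval = 42
ORDER BY timedate;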

Thanks again...


Re: Application data Logging

From
Harald Krake
Date:
On Thursday 12 December 2002 01:35 am, Jesus Sandoval wrote:

> 3) In my design of data logging, I have only one "logging table" for the
> whole application, but this logging table has to have the table name and the
> column name of whatever is being changed, inserted, or deleted, like the
> following:
>
> id          SERIAL                          -- Primary key
> tablename   VARCHAR(40)                     -- Table name that changed
> fieldname   VARCHAR(40)                     -- Field name that changed in the table
> user        VARCHAR(16)                     -- User that made the change
> timedate    TIMESTAMP WITHOUT TIME ZONE     -- Timestamp of the change
> keyvalue    TEXT                            -- Key value of the record that changed
> oldcontent  TEXT                            -- The field's content before the change
> newcontent  TEXT                            -- The field's new content
>
> I'm not sure whether the primary key (id SERIAL) is used in the way you
> suggested the serial number be used?

No. In your example you're using SERIAL as a DBMS-generated object ID, whereas
the "serial" I meant (think of it as a "version number") is an integer
that gets incremented each time the tuple is modified.

> 4) What is the utility of storing the serial number in the data table?

For portability reasons you should do that at the OR level, i.e. in
your doUpdate(). Again for portability, I would suggest not using
the "SERIAL" type for "id". The same functionality can be achieved
via an extra table that holds the last valid object ID. Whenever the mapper
needs to create a new object, it simply increments the number in that
table, sets it as the id of the new object (and the serial resp. version to 1),
and inserts the record (in a transaction, of course). Nice side effect:
the object id will be unique among _all_ tables, which can be used
for other purposes as well.
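
A sketch of that id-table approach, reusing the made-up customer table from
the earlier sketch (in practice the mapper would read the new id back and
keep it in the Java object):

-- One-row table holding the last object id handed out (names are made up)
CREATE TABLE object_id (last_id INTEGER NOT NULL);
INSERT INTO object_id VALUES (0);

BEGIN;
-- The UPDATE takes a row lock, so concurrent allocations serialize cleanly
UPDATE object_id SET last_id = last_id + 1;
-- New object: id = the freshly incremented value, version starts at 1
INSERT INTO customer (id, version, name, address)
    SELECT last_id, 1, 'Example Corp', 'Somewhere'
    FROM object_id;
COMMIT;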

> Another design I'm aware of: every time somebody updated the tuple, a new
> tuple was written to the table with a timestamp, so if I wanted to know
> the current value I just got the record with the greatest timestamp,

This design doesn't work well with some DBMSs (slow joins).

> and if
> I wanted to see the history I could look at all the tuples with the same key
> value (not the primary key, because in this design the primary key
> was always a SERIAL value).

I see two problems with this design:
- there is one logging record for each modified field in a tuple,
  i.e. if 20 fields were modified, you generate 20 records!
- the original datatype is lost.

Well, it depends on your demands.

Harald.


(btw.: is this the proper list for this thread? ;-)