Discussion: best way to kill long running query?
I want to allow some queries for my users to run for a prescribed period of time and kill them if they go over time. Is there a good way to do this? Or is this a bad idea?

I've been struggling with trying to figure out the best way to allow users to browse through large tables. For example, I have one table with about 600,000 rows and growing at about 100,000/month.

I want to allow users to browse through this table, but only if their effective SELECT statement only generates 100 or maybe 1000 rows. There are several fields that can be used in the WHERE clause, such as user, date, model, etc. It will be difficult for me to predict how large a result set is a priori. So I want to allow the query to run for a prescribed period of time, then kill it.

I'll probably be using ADO --> ODBC at the client. So I could probably kill the Connection/Recordset. I just don't know the best way to do it. pgAdmin allows queries to be killed. How does it do it?

Thanks in advance,

Bill Eaton
Thousand Oaks, CA
"Bill Eaton" <EE2@aeroantenna.com> writes: > I want to allow some queries for my users to run for a prescribed period of > time and kill them if they go over time. Is there a good way to do this? set statement_timeout perhaps? regards, tom lane
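For reference, the suggestion above boils down to a single session-level setting (the value is in milliseconds by default; any statement exceeding it is cancelled with an error):

```sql
-- Cancel any statement in this session that runs longer than 5 seconds
SET statement_timeout = 5000;
```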
>> I want to allow some queries for my users to run for a prescribed period of
>> time and kill them if they go over time. Is there a good way to do this?

> set statement_timeout perhaps?

Ooh. I like that. It would be absolutely brilliant if I could figure out how to get it to work with ADO and the Windoze ODBC driver. I've tried appending statement_timeout to my connection string, i.e.

ConnString = "DRIVER={PostgreSQL Unicode};SERVER=MYSERVER;DATABASE=MYDB;UID=client;set_timeout=1"

but it has no effect on a SELECT statement that takes at least 3 or 4 seconds to execute and only returns 184 (out of 600,000) records.

I've also tried different syntaxes to pass the parameter:
set_timeout=1
set_timeout='1'
set_timeout=(1)
set_timeout=('1')
Bill Eaton wrote:
>>> I want to allow some queries for my users to run for a prescribed period of
>>> time and kill them if they go over time. Is there a good way to do this?
>
>> set statement_timeout perhaps?
>
> Ooh. I like that. It would be absolutely brilliant if I could figure out how
> to get it to work with ADO and the Windoze ODBC driver. I've tried appending
> statement_timeout to my connection string i.e.
> ConnString = "DRIVER={PostgreSQL Unicode};SERVER=MYSERVER;DATABASE=MYDB;UID=client;set_timeout=1"
> but it has no effect on a SELECT statement that takes at least 3 or 4
> seconds to execute and only returns 184 (out of 600,000) records.
>
> I've also tried different syntaxes to pass the parameter
> set_timeout=1
> set_timeout='1'
> set_timeout=(1)
> set_timeout=('1')

I don't think you can set GUC parameters from the ODBC driver. Your options are:

* postgresql.conf. Will apply to all sessions to the database.

* database. Use ALTER DATABASE foo SET statement_timeout=<something>. Will then affect all connections to that database.

* user. Use ALTER ROLE foo SET statement_timeout=<something>. Will then affect all connections from that user.

* change your application to issue a "SET statement_timeout=<something>" query before anything else it sends.

Note that statement_timeout will cancel the whole command. It won't return "as many rows as it had reached by the timeout"; it will return nothing at all.

//Magnus
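Concretely, the four options above look like this (database name "mydb" and role "client" are placeholders for your own names; the timeout value is in milliseconds):

```sql
-- 1. Server-wide, in postgresql.conf:
--      statement_timeout = 5000

-- 2. Per database:
ALTER DATABASE mydb SET statement_timeout = 5000;

-- 3. Per role/user:
ALTER ROLE client SET statement_timeout = 5000;

-- 4. Per session, issued by the application right after connecting:
SET statement_timeout = 5000;
```

The ALTER DATABASE / ALTER ROLE forms only change the default for *new* sessions; already-connected sessions keep their current setting.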
>>> I want to allow some queries for my users to run for a
>>> prescribed period of time and kill them if they go over
>>> time. Is there a good way to do this?

>> set statement_timeout perhaps?

> I don't think you can set GUC parameters from the ODBC driver. Your
> options are:
>
> * postgresql.conf. Will apply to all sessions to the database.
>
> * database. Use ALTER DATABASE foo SET statement_timeout=<something>.
> Will then affect all connections to that database.
>
> * user. Use ALTER ROLE foo SET statement_timeout=<something>. Will then
> affect all connections from that user.
>
> * change your application to issue a "SET statement_timeout=<something>"
> query before anything else it sends.

The last option is almost exactly what I wanted. It works quite nicely. The only downside to this approach is that an error is raised if the timeout is exceeded. Which, when I think about it, is probably good behavior --> it allows me to distinguish between (1) a query that completed and returned no records and (2) a query that aborted because it exceeded the timeout.

Thanks to all for the assist.

Bill Eaton
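The error behaviour described above looks like this (table name is hypothetical; PostgreSQL reports the cancellation with SQLSTATE 57014, "query_canceled", which the client can test for to tell a timeout apart from an empty result set):

```sql
SET statement_timeout = 1000;

SELECT * FROM parts WHERE model = 'X100';  -- suppose this takes > 1 second
-- ERROR:  canceling statement due to statement timeout
-- (SQLSTATE 57014, query_canceled -- not the same as zero rows returned)
```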
On Mar 21, 2007, at 3:09 PM, Bill Eaton wrote:
> Ooh. I like that. It would be absolutely brilliant if I could figure out how
> to get it to work with ADO and the Windoze ODBC driver. I've tried appending
> statement_timeout to my connection string i.e.
> ConnString = "DRIVER={PostgreSQL Unicode};SERVER=MYSERVER;DATABASE=MYDB;UID=client;set_timeout=1"
> but it has no effect on a SELECT statement that takes at least 3 or 4
> seconds to execute and only returns 184 (out of 600,000) records.
>
> I've also tried different syntaxes to pass the parameter
> set_timeout=1

That doesn't look like "statement_timeout" to me, but then my glasses might be out of date.

Try this as the postgres superuser for your user:

alter user foobar set statement_timeout=1;

where foobar is the user you connect as (note the value is in milliseconds). Then this user's default statement_timeout is set... he can override it at will, though.
Bill/Magnus/Tom,

No query should run ad infinitum. Take a look at http://euler.slu.edu/~goldwasser/courses/slu/csa341/2003_Fall/lectures/oracle_optimizer/#optimizer

99% of the queries I see I can optimise by applying these simple rules:

- do an explain plan
- understand everything that this tells you
- make the appropriate adjustments (even if it involves building an index)

Take some advice from the undisputed king of Oracle optimisation: http://www.dba-oracle.com/articles.htm

Keep me apprised,
Martin
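In PostgreSQL terms, the "do an explain plan" step above is a single statement (table, columns, and the suggested index are hypothetical examples; the plan output varies with your data and version):

```sql
-- Ask the planner how it intends to execute the query
EXPLAIN
SELECT * FROM parts
WHERE model = 'X100' AND date >= '2007-01-01';

-- If the plan shows a sequential scan over 600,000 rows for a query that
-- should match only a handful, an index on the filtered columns may help:
CREATE INDEX parts_model_date_idx ON parts (model, date);
```

EXPLAIN ANALYZE (which actually runs the query and reports real timings) is the stronger diagnostic, at the cost of executing the statement.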
On Mar 21, 11:36 am, E...@aeroantenna.com ("Bill Eaton") wrote:
> I want to allow some queries for my users to run for a prescribed period of
> time and kill them if they go over time. Is there a good way to do this? Or
> is this a bad idea?
>
> I want to allow users to browse through this table, but only if their
> effective SELECT statement only generates 100 or maybe 1000 rows.

You could use "limit" to cap the size of the returned result set when you put together the query.

Travis
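The LIMIT suggestion can be combined with the row-count requirement stated earlier in the thread: fetch one row more than the cap, and the client can tell "exactly at the cap" apart from "over the cap" (table name is hypothetical):

```sql
-- Cap at 1000 rows, but fetch 1001 so the client can detect overflow:
SELECT * FROM parts
WHERE model = 'X100'
LIMIT 1001;
-- If 1001 rows come back, the true result set exceeds 1000 and the
-- application can ask the user to narrow the WHERE clause instead of
-- relying on a timeout to cut the query short.
```

Unlike statement_timeout, LIMIT is deterministic: it bounds the result size directly rather than depending on how fast the server happens to be.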