Discussion: PostgreSQL + IDS load/latency help


PostgreSQL + IDS load/latency help

From: "Loki"
Date:
Lists:

We have a PostgreSQL database with over 500,000 events. When the
database held only a small number of events, retrieving a specific
event took just a few seconds.

However, as the database has grown to its current size, it now takes
15-20 seconds to get information back from a query. We have tried
everything, including vacuum, which someone recommended to me.
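By "vacuum" I mean roughly the following (quoting from memory, since I
cannot log into the box right now, so the exact form may have differed):

    -- Reclaim dead row versions and refresh the planner's statistics
    VACUUM ANALYZE;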

I noticed when running top on the system that the PostgreSQL process
eventually eats up ALL available RAM (half a gigabyte) and forces the
system into swap. After rebooting the machine, query times drop back
to something reasonable (obviously because it hasn't eaten up all the
memory yet).
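As far as I recall, our postgresql.conf is close to the stock settings;
the memory-related lines look roughly like this (values from memory, so
treat them as approximate):

    # postgresql.conf -- memory-related settings (approximate)
    shared_buffers = 64            # shared buffer cache, in 8 KB pages
    sort_mem = 1024                # per-sort working memory, in KB
    effective_cache_size = 1000    # planner's guess at OS cache, in 8 KB pages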

Am I correct in saying that 500,000 events shouldn't be a problem? Is
there anyone out there with more events in their DB than the 500,000
we currently have?

Can anyone recommend anything to try to rectify the situation?
Waiting 15-20 seconds for the information queried from the database
is unacceptable and cannot work. Please advise.

* We are running Snort 1.9.1, storing events to a custom PostgreSQL
database. Dumping the database and creating a fresh one every so often
is also not an option. Can anyone provide assistance?

For those needing more details on our setup: I am not currently able
to log into the system, but please email me and I can provide more
details offline.

Sincerely,

Eric Hines
Internet Warfare and Intelligence
Fate Research Labs
http://www.fatelabs.com


Re: PostgreSQL + IDS load/latency help

From: Tim Ellis
Date:
On Thu, 2003-03-27 at 07:12, Loki wrote:
> We have a PostgreSQL database with over 500,000 events. When the
> database held only a small number of events, retrieving a specific
> event took just a few seconds.
>
> However, as the database has grown to its current size, it now takes
> 15-20 seconds to get information back from a query. We have tried
> everything, including vacuum, which someone recommended to me.

Sometimes it's the simple things. Did you do an explain on the query to
see its access plan? I don't see anything in your post about indexes.
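Something as simple as the following will show whether you are getting
an index scan or a sequential scan (substitute your real query; the
table and column names here are only guesses, since we haven't seen
your schema):

    -- Show the planner's access plan without executing the query
    EXPLAIN
    SELECT * FROM event WHERE timestamp > '2003-03-26';

    -- Or execute it for real and get actual row counts and timings
    EXPLAIN ANALYZE
    SELECT * FROM event WHERE timestamp > '2003-03-26';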

> Am I correct in saying that 500,000 events shouldn't be a problem? Is
> there anyone out there with more events in their DB than the 500,000
> we currently have?

PostgreSQL can give you speedy access into billions of rows of data
given proper indexing and maintenance. I'd like to see your query's
access plan.
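If the plan comes back with "Seq Scan" on your big table, an index on
the column your query filters by is the usual fix. Guessing at names
again:

    -- Index the column the slow query filters on
    CREATE INDEX event_timestamp_idx ON event (timestamp);

    -- Refresh statistics so the planner knows about the new index
    ANALYZE event;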

--
Tim Ellis
Senior Database Architect and author, tedia2sql (http://tedia2sql.tigris.org)
If this helped you, http://svcs.affero.net/rm.php?r=philovivero


Re: PostgreSQL + IDS load/latency help

From: philo vivero
Date:
> We have a PostgreSQL database with over 500,000 events. When the
> database held only a small number of events, retrieving a specific
> event took just a few seconds.
>
> However, as the database has grown to its current size, it now takes
> 15-20 seconds to get information back from a query. We have tried
> everything, including vacuum, which someone recommended to me.

Sometimes it's the simple things. Did you do an explain on the query to
see its access plan? I see nothing in your post about indexes.

> Am I correct in saying that 500,000 events shouldn't be a problem? Is
> there anyone out there with more events in their DB than the 500,000
> we currently have?

With proper indexes, 500,000,000 "events" shouldn't be a problem.

--
pv
If this helped you, http://svcs.affero.net/rm.php?r=philovivero


Re: [GENERAL] PostgreSQL + IDS load/latency help

From: Dennis Gearon
Date:
What do you mean by "events"?

Loki wrote:
> We have a PostgreSQL database with over 500,000 events. When the
> database held only a small number of events, retrieving a specific
> event took just a few seconds.
>
> However, as the database has grown to its current size, it now takes
> 15-20 seconds to get information back from a query. [snip]