Re: Performance for relative large DB

From: Chris Browne
Subject: Re: Performance for relative large DB
Date:
Msg-id: 60wtmcptzg.fsf@dba2.int.libertyrms.com
List: pgsql-performance
"tobbe" <tobbe@tripnet.se> writes:
> The company that I'm working for is surveying the jungle of DBMSes,
> since we are due to implement the next generation of our system.
>
> The company's business is utilizing the DBMS to store data that is
> accessed through the web in the daytime (only SELECTs, sometimes with
> joins, etc.). The data is a collection of objects that are for sale;
> it consists of basic text information about these, together with some
> group information, etc.
>
> The data is updated once every night.

How much data is updated per night?  The whole 4M "posts"?  Or just
some subset?

> There are about 4 M posts in the database (one table), and it is
> expected to grow by at least 50% over a reasonably long time.

So you're expecting to have ~6M entries in the 'posts' table?

> How well would PostgreSQL fit our needs?
>
> We are using Pervasive SQL today and suspect that it is much too
> small.  We have some problems with latency, especially when updating
> information, with complicated conditions in selects, and under
> concurrent usage.

If you're truly updating all 4M/6M rows each night, *that* would turn
out to be something of a bottleneck, as every time you update a tuple,
this creates a new copy, leaving the old one to be later cleaned away
via VACUUM.

That strikes me as unlikely: I expect instead that you update a few
thousand or a few tens of thousands of entries per day, in which case
the "vacuum pathology" won't be a problem.
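To illustrate the point, here is a minimal sketch (table and column
names are hypothetical): each UPDATE writes a new row version, and the
old version remains as dead space until VACUUM reclaims it.

```sql
-- Each UPDATE creates a new copy of every affected row; the old
-- copies become dead tuples.
UPDATE posts SET price = price * 1.05 WHERE group_id = 42;

-- VACUUM reclaims the dead row versions; ANALYZE refreshes the
-- planner statistics after the nightly change.
VACUUM ANALYZE posts;
```

If the nightly job really rewrote all 4M/6M rows, the table would
briefly hold twice that many row versions, which is why vacuuming
matters for this workload.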

I wouldn't expect PostgreSQL to be "too small;" it can and does cope
well with complex queries.

And the use of MVCC keeps the amount of locking relatively minimal
even when there are many concurrent users; the particular merit is
that you can essentially eliminate most read locks.  That is, you can
get consistent reports without having to lock rows or tables.
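As a sketch of what that buys you (names again hypothetical): a
long-running report sees one consistent snapshot for its whole
duration, without taking row locks and without blocking the nightly
update.

```sql
-- Readers work from a snapshot rather than taking read locks, so
-- both SELECTs below see the same consistent state of the data.
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT count(*), sum(price) FROM posts;
SELECT group_id, count(*) FROM posts GROUP BY group_id;
COMMIT;
```

Concurrent writers simply create new row versions in the meantime;
neither side waits on the other.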

One table with millions of rows isn't that complex a scenario :-).
--
output = ("cbbrowne" "@" "cbbrowne.com")
http://cbbrowne.com/info/spiritual.html
Appendium to  the Rules  of the  Evil Overlord #1:  "I will  not build
excessively integrated  security-and-HVAC systems. They  may be Really
Cool, but are far too vulnerable to breakdowns."
