Re: Slow transfer speeds

From: Scott Marlowe
Subject: Re: Slow transfer speeds
Date:
Msg-id: 1154973945.20252.18.camel@state.g2switchworks.com
In response to: Slow transfer speeds  (hansell baran <hansellb@yahoo.com>)
List: pgsql-performance
On Mon, 2006-08-07 at 12:26, hansell baran wrote:
> Hi. I'm new at using PostgreSQL. I have found posts related to this
> one but there is not a definite answer or solution. Here it goes.
> Where I work, all databases were built with MS Access. The Access
> files are hosted by computers with Windows 2000 and Windows XP. A new
> server is on its way and only Open Source Software is going to be
> installed. The OS is going to be SUSE Linux 10.1 and we are making
> comparisons between MySQL, PostgreSQL and MS Access. We installed
> MySQL and PostgreSQL on both SUSE and Windows XP (MySQL & PostgreSQL
> DO NOT run at the same time)(There is one HDD for Windows and one for
> Linux)
> The "Test Server" in which we install the DBMS has the following
> characteristics:
>
> CPU speed = 1.3 GHz
> RAM = 512 MB
> HDD = 40 GB

Just FYI, that's not much in terms of a server; it's not even much in
terms of a workstation.  My laptop is about on par with that.

Just sayin.

OK, just so you know, you're comparing apples and oranges.  A client-side
application like Access has little or none of the overhead that a
real database server has.

The advantage PostgreSQL has is that many people can read AND write to
the same data store simultaneously and the database server will make
sure that the underlying data in the files never gets corrupted.
Further, with proper constraints in place, it can make sure that the
data stays coherent (i.e. that data dependencies are honored.)
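
For what it's worth, here's a minimal sketch of that, with made-up
table names just to illustrate; the server will flat-out refuse writes
that would break the declared dependencies:

    -- hypothetical schema, only to show the idea
    CREATE TABLE customers (
        customer_id integer PRIMARY KEY,
        name        text NOT NULL
    );

    CREATE TABLE orders (
        order_id    integer PRIMARY KEY,
        customer_id integer NOT NULL REFERENCES customers (customer_id),
        amount      numeric CHECK (amount > 0)
    );

    -- this INSERT is rejected because customer 42 doesn't exist
    INSERT INTO orders (order_id, customer_id, amount)
        VALUES (1, 42, 9.99);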

As you can imagine, there's gonna be some overhead there.  And it's
wholly unfair to compare a database's ability to stream out data in a
single read to Access.  That's the worst-case scenario for a database
server.

Try having 30 employees connect to the SAME access database and start
updating lots and lots of records.  Have someone read out the data while
that's going on.  Repeat on PostgreSQL.
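
To make that concrete, this is roughly what each of those 30 sessions
would be doing (same made-up table as above).  A reader running at the
same time just gets a consistent snapshot; it neither blocks nor sees
half-finished changes:

    -- each writer session, over and over
    BEGIN;
    UPDATE orders
       SET amount = amount * 1.05
     WHERE customer_id = 7;   -- each session hits some slice of the data
    COMMIT;

    -- meanwhile, a reader session can still do this safely
    SELECT count(*), sum(amount) FROM orders;

If you don't feel like writing that harness yourself, the contrib
pgbench tool will drive a bunch of concurrent clients for you.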

If you're mostly going to be reading data, then maybe some intermediate
system is needed, something to "harvest" the data into some flat files.
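
If you go that route, COPY will dump a table to a flat file in one
shot; the path here is just an example, and the file lands on the
database server's filesystem:

    -- export the whole (hypothetical) orders table as CSV
    -- (needs superuser for a server-side file; psql's \copy writes
    --  to a client-side file instead)
    COPY orders TO '/tmp/orders.csv' WITH CSV HEADER;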

But if your users need to read out 500,000 rows, change a few, and write
the whole thing back, your business process is likely not currently
suited to a database and needs to be rethought.
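
The database-friendly version of that workflow, for what it's worth,
is to send back only the rows that actually changed, something like
(made-up names again):

    -- touch just the changed row instead of rewriting the whole set
    UPDATE orders
       SET amount = 12.50
     WHERE order_id = 1234;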
