Re: Isolation of multiple database instances provided by a single Postgres server

On 11/21/19 5:55 AM, stan wrote:
On Wed, Nov 20, 2019 at 04:24:40PM -0600, Ron wrote:
On 11/20/19 4:03 PM, stan wrote:
I am working on a fairly small application to use for managing a company's
business.

I have a "production" instance hosted by one of the cloud providers, and 2
other instances. This is fairly new to me. In the past, I have created
applications by keeping a set of scripts that can be used to rebuild the
database, and pg_dump to restore the data. Based on some recommendations I
am using pg_basebackup to backup the production instance nightly. My
background is primarily Oracle. I realize looking at the way pg_basebackup
works that multiple database instances, provided by one server are actually
stored in the same physical OS files.


We have traditionally (in the Postgres world) had a sandbox, that we used
for upgrades, and testing development methodologies, and this seems to be
supported pretty well by pg_dump.

Now that I know "too much" I am concerned about hosting the sandbox on the
same Postgres instance.
What specifically do you mean by "instance"??? (I know what it means in the
SQL Server world, and in Postgres all the databases accessible via a single
$PGDATA are called a *cluster*.)
Sorry for my incorrect terminology. I am probably confused about the
technology here. Let me try to explain what I think I understand.

It seems to me that I can have one Postgres "server" running listening on a
single port on a single machine. It appears that the data files for this
"server" are managed internally by the Postgres server instance, and I
have no control of what is stored where in them. In an Oracle world, I can
create tablespaces, which have a relationship to OS files, and I can
explicitly control what objects are stored in which tablespaces (OS file),

Same in Postgres.

https://www.postgresql.org/docs/9.6/sql-createtablespace.html

CREATE TABLESPACE tablespace_name
    [ OWNER { new_owner | CURRENT_USER | SESSION_USER } ]
    LOCATION 'directory'
    [ WITH ( tablespace_option = value [, ... ] ) ]
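A concrete usage sketch (the tablespace name, directory, and table name below are made up for illustration; the directory must already exist and be owned by the postgres OS user):

```sql
-- Create a tablespace backed by a specific directory.
CREATE TABLESPACE fastspace LOCATION '/mnt/ssd/pg_fastspace';

-- Objects are assigned to a tablespace at creation time:
CREATE TABLE orders (id integer) TABLESPACE fastspace;
```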


thus, for example, when I do a hot backup, I put specific tablespaces in
backup mode, and can then safely copy those OS files (yes, I have to
properly deal with archive logs). Thus I would be somewhat comfortable
having two distinct "instances" provided by that one Oracle "server".

There's no ability to backup a single tablespace in Postgres, because the purpose behind them is not the same in Oracle and Postgres.


It appears to me that, within this one Postgres "instance", there are 2
levels of "isolation", which are database, and schemas. Is this correct? 

Yes, but ... schemas in Postgres are different from Oracle schemas.  In Postgres, the CREATE SCHEMA command creates a schema, whereas in Oracle it is CREATE USER that creates a schema.

If
so, how does this correspond to physical on-disk storage?

It corresponds not at all.

The directory that a table's files go in depends solely on the tablespace it lives in (typically "pg_default", whose location is $PGDATA).
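You can ask the server itself for that mapping; a minimal sketch (pg_default and pg_global report an empty location, meaning they live inside $PGDATA):

```sql
-- List each tablespace and the directory backing it.
SELECT spcname, pg_tablespace_location(oid) AS location
FROM pg_tablespace;
```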

Recognizing that this is a fairly small application, what are the
recommendations of folks wiser than I?

Should I run the sandbox from a different Postgres server, possibly even on a
different machine? Is pg_dump still a good way to move the production
instance to the sandbox, and perhaps even the other way around?
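For the record, a pg_dump-based sandbox refresh might look like this (host names, user, and database name are made-up placeholders; the custom format -Fc lets pg_restore do selective or parallel restores):

```shell
# Dump production in custom format, then rebuild the sandbox from it.
pg_dump   -h prod.example.com    -U appuser -Fc -f prod.dump appdb
dropdb    -h sandbox.example.com -U appuser appdb
createdb  -h sandbox.example.com -U appuser appdb
pg_restore -h sandbox.example.com -U appuser -d appdb prod.dump
```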
Running CAT, STG, UAT, DEV, etc on different VMs is certainly one solution,
isolating them from each other.
 Makes sense.
OTOH, you can initdb multiple clusters on the same host, accessing them via
different $PGDATA variables and port numbers.
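Sketched out, with the data directories and port numbers being arbitrary choices:

```shell
# Two independent clusters on one host, isolated by data directory
# and listen port. Each has its own catalogs, WAL, and backups.
initdb -D /srv/pg/prod
initdb -D /srv/pg/sandbox

pg_ctl -D /srv/pg/prod    -o "-p 5432" -l /srv/pg/prod.log    start
pg_ctl -D /srv/pg/sandbox -o "-p 5433" -l /srv/pg/sandbox.log start

# pg_basebackup then targets one cluster at a time:
pg_basebackup -p 5432 -D /backup/prod -Ft -z
```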
That is consistent with what I thought I understood.

Thanks for taking time to educate me.

--
Angular momentum makes the world go 'round.

In the pgsql-general list, by date sent:

Previous
From: stan
Subject: Re: Help with authentication on Debian/Ubuntu installation
Next
From: "Jason L. Amerson"
Subject: Remote Connection Help