Re: Internal key management system

From: Masahiko Sawada
Subject: Re: Internal key management system
Date:
Msg-id: CA+fd4k5o+PXCZLsJ5jmPfX2azkWQX7aNrrqY2jyX08k+TRpdWw@mail.gmail.com
In reply to: Re: Internal key management system  (Fabien COELHO <coelho@cri.ensmp.fr>)
List: pgsql-hackers
On Wed, 3 Jun 2020 at 16:16, Fabien COELHO <coelho@cri.ensmp.fr> wrote:
>
>
> Hello Masahiko-san,
>
> > This key manager is aimed to manage cryptographic keys used for
> > transparent data encryption. As a result of the discussion, we
> > concluded it's safer to use multiple keys to encrypt database data
> > rather than using one key to encrypt the whole thing, for example, in
> > order to make sure different data is not encrypted with the same key
> > and IV. Therefore, in terms of TDE, the minimum requirement is that
> > PostgreSQL can use multiple keys.
> >
> > Using multiple keys in PG, there are roughly two possible designs:
> >
> > 1. Store all keys for TDE into the external KMS, and PG gets them from
> > it as needed.
>
> +1

In this approach, the encryption keys obtained from the external KMS
are used directly to encrypt/decrypt data. Which keys are the KEK and
the DEK you are referring to in this approach?

>
> > 2. PG manages all keys for TDE inside and protect these keys on disk
> > by the key (i.e. KEK) stored in the external KMS.
>
> -1, this is the one where you would need arguing.
>
> > There are pros and cons to each design. If I take one cons of #1 as an
> > example, the operation between PG and the external KMS could be
> > complex. The operations could be creating, removing and rotate key and
> > so on.
>
> ISTM that only create (delete?) are really needed. Rotating is the problem
> of the KMS itself, thus does not need to be managed by pg under #1.

With your idea, how is key rotation going to be performed? After
invoking key rotation on the external KMS, do we need to re-encrypt
all data encrypted with the old keys? Or do you assume that the
external KMS employs something like a 2-tier key hierarchy?
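
To make the question concrete, here is a rough sketch of the difference
I have in mind; none of these names exist in the patch or in any real
KMS API, and the XOR "wrapping" only stands in for real AEAD encryption:

/*
 * Hypothetical illustration: with a 2-tier hierarchy, rotating the KEK
 * only re-wraps the handful of DEKs the database keeps; with directly
 * used keys, rotation would mean rewriting every block encrypted with
 * the old key.
 */
#include <stdio.h>

#define KEY_LEN 32

typedef struct { unsigned char bytes[KEY_LEN]; } Key;

/* Stand-in for "decrypt DEK with old KEK, re-encrypt it with new KEK". */
static void
rewrap_one_dek(const Key *old_kek, const Key *new_kek, Key *wrapped_dek)
{
    for (int i = 0; i < KEY_LEN; i++)
        wrapped_dek->bytes[i] ^= old_kek->bytes[i] ^ new_kek->bytes[i];
}

int
main(void)
{
    Key     old_kek = {{0}};
    Key     new_kek = {{1}};
    Key     wrapped_deks[4] = {{{0}}};

    /* 2-tier hierarchy: KEK rotation touches only the wrapped DEKs. */
    for (int i = 0; i < 4; i++)
        rewrap_one_dek(&old_kek, &new_kek, &wrapped_deks[i]);

    /*
     * Flat design: rotating a directly-used key in the KMS would instead
     * require re-encrypting all relation data written with it.
     */
    printf("re-wrapped %d DEKs without touching table data\n", 4);
    return 0;
}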

>
> > We can implement these operations in an extension to interact
> > with different kinds of external KMS, and perhaps we can use KMIP.
>
> I would even put that (KMIP protocol stuff) outside pg core.
>
> Even under #2, if some KMS is implemented and managed by pg, I would put
> the stuff in a separate process which I would probably run with a
> different uid, so that the KEK is not accessible directly by pg, ever.
>
> Once KMS interactions are managed with an outside process, then what this
> process does becomes an interface, and whether this process actually
> manages the keys or discuss with some external KMS with some KMIP or
> whatever is irrelevant to pg. Providing an interface means that anyone
> could implement their KMS fitting their requirements if they comply with
> the interface/protocol.

Just to be clear, we don't keep the KEK in shared memory or on disk.
The postmaster, or a backend that executes
pg_rotate_cluster_passphrase(), gets the KEK and uses it to
(re-)encrypt the internal keys, but immediately frees it afterwards.
The encryption keys we need to store inside PostgreSQL are the DEKs.
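
In (simplified) code, the lifetime of the KEK during
pg_rotate_cluster_passphrase() looks roughly like the following;
get_new_passphrase() and rewrap_internal_keys() are invented names, only
the kmgr_* and pg_*_aead_ctx functions come from the patch:

/* Sketch only, not the actual patch code. */
static void
rotate_cluster_passphrase_sketch(void)
{
    char        passphrase[1024];
    int         passlen;
    uint8       enckey[PG_AEAD_ENC_KEY_LEN];
    uint8       hmackey[PG_AEAD_MAC_KEY_LEN];
    PgAeadCtx  *ctx;

    /* 1. Run cluster_passphrase_command to obtain the new passphrase. */
    passlen = get_new_passphrase(passphrase, sizeof(passphrase));

    /* 2. Derive the new KEK; it exists only in this process's local memory. */
    kmgr_derive_keys(passphrase, passlen, enckey, hmackey);
    ctx = pg_create_aead_ctx(enckey, hmackey);

    /* 3. Re-encrypt (re-wrap) the internal DEKs with the new KEK. */
    rewrap_internal_keys(ctx);

    /* 4. Free the KEK immediately; it never reaches shared memory or disk. */
    pg_free_aead_ctx(ctx);
    explicit_bzero(passphrase, sizeof(passphrase));
}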

>
> Note that I'd be fine with having the current implementation somehow
> wrapped up as an example KMS.
>
> > But the development cost could become high because we might need
> > different extensions for each key management solutions/services.
>
> Yes and no. What I suggest is, I think, pretty simple, and I think I can
> implement it in a few lines of script, so the cost is not high, and having
> a separate process looks, to me, like a security win and an extensibility
> win (i.e. another implementation can be provided).

How can we get multiple keys from the external KMS? I think we would
need to store something like an identifier for each encryption key
Postgres needs in core, and ask the external KMS for the key by that
identifier via an extension. Is that right?
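
For instance, I imagine something along the following lines; this is
entirely hypothetical (KmsRoutine, dummy_get_key and so on do not exist
anywhere), just to check that we are talking about the same kind of
interface:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Core would persist only key identifiers and call out through this. */
typedef struct KmsRoutine
{
    /* Fetch the key named keyid into buf (buflen bytes); false on error. */
    bool        (*get_key) (const char *keyid, unsigned char *buf, int buflen);
} KmsRoutine;

/* Dummy stand-in for what a KMIP (or other) client extension would provide. */
static bool
dummy_get_key(const char *keyid, unsigned char *buf, int buflen)
{
    (void) keyid;               /* a real module would query the KMS here */
    memset(buf, 0, buflen);
    return true;
}

static KmsRoutine dummy_kms = {dummy_get_key};
static KmsRoutine *kms_routine = &dummy_kms;    /* installed from _PG_init() */

int
main(void)
{
    unsigned char key[32];

    if (kms_routine->get_key("tde-wal-key", key, sizeof(key)))
        printf("fetched key for identifier \"tde-wal-key\"\n");
    return 0;
}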

>
> > #2 is better at that point; the interaction between PG and KMS is only
> > GET.
>
> I think that it could be the same with #1. I think that having a separate
> process is a reasonable security requirement, and if you do that #1 and #2
> are more or less the same.
>
> > Other databases that employ a similar approach are SQL Server and DB2.
>
> Too bad for them:-) I'd still disagree with having the master key inside
> the database process, even if Microsoft, IBM and Oracle think it is a good
> idea.
>
> > In terms of the necessity of introducing the key manager into PG core,
> > I think at least TDE needs to be implemented in PG core. And as this
> > key manager is for managing keys for TDE, I think the key manager also
> > needs to be introduced into the core so that TDE functionality doesn't
> > depend on external modules.
>
> Hmmm.
>
> My point is that only interactions should be in core.
>
> The implementation could be in core, but as a separate process.
>
> I agree that pg needs to be able to manage the DEK, so it needs to store
> data keys.
>
> I still do not understand why an extension, possibly distributed with pg,
> would not be ok. There may be good arguments for that, but I do not think
> you provided any yet.

Hmm, I don't think I fully understand your idea yet. With the current
patch, the KEK can be obtained by either the postmaster or a backend
process that executes pg_rotate_cluster_passphrase(), and the KEK
isn't stored anywhere in shared memory or on disk. With your idea, the
KEK is always obtained by a particular separate process, in a way
provided by an extension. Is my understanding right?

>
> >> Also, I'm not at fully at ease with some of the underlying principles
> >> behind this proposal. Are we re-inventing/re-implementing kerberos or
> >> whatever? Are we re-implementing a brand new KMS inside pg? Why having
> >> our own?
> >
> > As I explained above, this key manager is for managing internal keys
> > used by TDE. It's not an alternative to existing key management
> > solutions/services.
>
> Hmmm. This seems to suggest that interacting with something outside should
> be an option.
>
> > The requirements of this key manager are generating internal keys,
> > letting other PG components use them, protecting them by KEK when
> > persisting,
>
> If you want that, I'd still argue that you should have a separate process.
>
> > and support KEK rotation. It doesn’t have a feature like
> > allowing users to store arbitrary keys into this key manager, like
> > other key management solutions/services have.
>
> Hmmm.
>
> > I agree that the key used to encrypt data must not be placed in the
> > same host. But it's true only when the key is not protected, right?
>
> The DEK is needed when encrypting and decrypting, obviously, so it would
> be there once obtained, it cannot be helped. My concern is about the KEK,
> which AFAICS in your code is somewhere in memory accessible by the
> postgres process, which is a no go for me.

No. In the current patch, we don't save the KEK anywhere in shared
memory or on disk. Once a process (postmaster or backend) has used the
KEK stored in a PgAeadCtx, it frees that context. We put the internal
keys, the DEKs, into shared memory during startup.

>
> The definition of "protected" is fuzzy, it would depend on what the user
> requires. Maybe protected for someone is "in a file which is only readable
> by postgres", and for someone else it means "inside an external hardware
> components activated by the fingerprint of the CEO".
>
> > In
> > this key manager, since we protect all internal keys by KEK it's no
> > problem unless KEK is leaked. KEK can be obtained from outside key
> > management solutions/services through cluster_passphrase_command.
>
> Again, I do not think that the KEK should be in postgres process, ever.
>
> >>
> >> Also, implementing a crash-safe key rotation algorithm does not look like
> >> it belongs inside the pg backend, that is not its job.
> >
> > The key rotation this key manager has is KEK rotation, which is very
> > important. Without KEK rotation, when KEK is leaked an attacker can
> > get database data by disk theft. Since KEK is responsible for
> > encrypting all internal keys it's necessary to re-encrypt the internal
> > keys when KEK is rotated. I think PG is the only role that can do that
> > job.
>
> I'm not claiming that KEK rotation is a bad thing, I'm saying that it
> should not be postgres problem. My issue is where you put the thing, not
> about the thing itself.
>
> > I think this key manager satisfies the first point by
> > cluster_passphrase_command. For the second point, the key manager
> > stores local keys inside PG while protecting them by KEK managed
> > outside of PG.
>
> I do not understand. From what I understood from the code, the KEK is
> loaded into postgres process. That is what I'm disagreeing with, only
> needed DEK should be there.

Please refer to kmgr_verify_passphrase(), which is responsible for
deriving the KEK from the passphrase, checking whether the given
passphrase is correct by unwrapping the internal keys, and handing
back the unwrapped internal keys, which are then stored in shared
memory:

+bool
+kmgr_verify_passphrase(char *passphrase, int passlen,
+                       CryptoKey *keys_in, CryptoKey *keys_out, int nkeys)
+{
+    PgAeadCtx  *tmpctx;
+    uint8       user_enckey[PG_AEAD_ENC_KEY_LEN];
+    uint8       user_hmackey[PG_AEAD_MAC_KEY_LEN];
+
+    /*
+     * Create temporary wrap context with encryption key and HMAC key
+     * extracted from the passphrase.
+     */
+    kmgr_derive_keys(passphrase, passlen, user_enckey, user_hmackey);
+    tmpctx = pg_create_aead_ctx(user_enckey, user_hmackey);
+
+    for (int i = 0; i < nkeys; i++)
+    {
+        if (!kmgr_unwrap_key(tmpctx, &(keys_in[i]), &(keys_out[i])))
+        {
+            /* The passphrase is not correct */
+            pg_free_aead_ctx(tmpctx);
+            return false;
+        }
+    }
+
+    /* The passphrase is correct, free the cipher context */
+    pg_free_aead_ctx(tmpctx);
+
+    return true;
+}

We free tmpctx, which holds the KEK, immediately after use. Or is your
argument that we should not put the KEK even into a postgres process's
local memory?
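
For completeness, the caller at startup is intended to look roughly
like the following (simplified; run_cluster_passphrase_command() is my
shorthand for however the passphrase is actually obtained, not
necessarily the real name in the patch):

static void
initialize_internal_keys(CryptoKey *wrapped_keys, CryptoKey *keys, int nkeys)
{
    char    passphrase[1024];
    int     passlen;

    /* 1. Obtain the passphrase via cluster_passphrase_command. */
    passlen = run_cluster_passphrase_command(passphrase, sizeof(passphrase));

    /*
     * 2. Derive the KEK and verify it by unwrapping the internal keys; the
     *    KEK lives only inside this call and is freed before it returns.
     */
    if (!kmgr_verify_passphrase(passphrase, passlen, wrapped_keys, keys, nkeys))
        elog(ERROR, "cluster passphrase does not match");

    /* 3. Scrub the passphrase; only the unwrapped DEKs go to shared memory. */
    explicit_bzero(passphrase, sizeof(passphrase));
}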

Regards,

--
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


