Re: backup manifests
From: Robert Haas
Subject: Re: backup manifests
Date:
Msg-id: CA+TgmoYtrdoMSaG0fxP+i8XK4jZhuVgoG74Pge8ARhnHLC4F_Q@mail.gmail.com
In reply to: Re: backup manifests (David Fetter <david@fetter.org>)
List: pgsql-hackers
On Thu, Jan 2, 2020 at 1:03 PM David Fetter <david@fetter.org> wrote:
> I believe jq has an excellent one that's available under a suitable
> license.
>
> Making jq a dependency seems like a separate discussion, though. At
> the moment, we don't use git tools like submodule/subtree, and deciding
> which (or whether) seems like a gigantic discussion all on its own.

Yep. And it doesn't seem worth it for a relatively small feature like
this. If we already had it, it might be worth using for a relatively
small feature like this, but that's a different issue.

> > (b) introducing such a dependency for a minor feature like this
> > seems fairly unpalatable to me, and it'd probably still be more code
> > than just using a tab-separated file. I'd be willing to do (3) if
> > somebody could explain to me how to solve the problems with porting
> > that code to work on the frontend side, but the only suggestion so
> > far as to how to do that is to port memory contexts, elog/ereport,
> > and presumably encoding handling to work on the frontend side.
>
> This port has come up several times recently in different contexts.
> How big a chunk of work would it be? Just so we're clear, I'm not
> suggesting that this port should gate this feature.

I don't really know. It's more of a research project than a coding
project, at least initially, I think. For instance, psql has its own
non-local-transfer-of-control mechanism using sigsetjmp(). If you
wanted to introduce elog/ereport on the frontend, would you make psql
use it? Or just let psql continue to do what it does now, and introduce
the new mechanism as an option for code going forward? Or try to make
the two mechanisms work together somehow? Will you start using the same
error codes that we use in the backend on the frontend side, and if so,
what will they do, given that what the backend does is just embed them
in a protocol message that any particular client may or may not
display?
Similarly, should frontend errors support reporting a hint, detail,
statement, or query? Will it be confusing if backend and frontend errors
are too similar? If you make memory contexts available in the frontend,
what if any code will you adapt to use them? There's a lot of stuff in
src/bin. If you want the encoding machinery on the frontend, what will
you use in place of the backend's idea of the "database encoding"? What
will you do about dependencies on Datum in frontend code?

Somebody would need to study all this stuff, come up with a tentative
set of decisions, write patches, get it all working, and then quite
possibly have the choices they made get second-guessed by other people
who have different ideas. If you come up with a really good, clean
proposal that doesn't provoke any major disagreements, you might be able
to get this done in a couple of months. If you can't come up with
something people agree is good, or if you're the only one who thinks
what you come up with is good, it might take years.

It seems to me that in a perfect world, a lot of the code we have in the
backend that is usefully reusable in other contexts would be structured
so that it doesn't have random dependencies on backend-only machinery
like memory contexts and elog/ereport. For example, if you write a
function that returns an error message rather than throwing an error,
then you can arrange to call that from either frontend or backend code,
and the caller can do whatever it wishes with that error text. However,
once you've written your code so that an error gets thrown six layers
down in the call stack, it's really hard to rearrange that so that the
error is returned instead, and if you are populating not only the
primary error message but also the error code, detail, hint, etc., it's
impractical to think that you can rearrange things that way anyway. And
generally you do want to be populating those things, as a best practice
for backend code.
So while in theory I kind of like the idea of adapting the JSON parser
we've already got to just not depend so heavily on a backend
environment, it's not really very clear how to actually make that
happen. At least not to me.

> > That seems to me to be an unreasonably large lift, especially given
> > that we have lots of other files that use ad-hoc formats already,
> > and if somebody ever gets around to converting all of those to JSON,
> > they can certainly convert this one at the same time.
>
> Would that require some kind of file converter program, or just a
> really loud notice in the release notes?

Maybe neither. I don't see why it wouldn't be possible to be
backward-compatible just by keeping the old code around and having it
parse as far as the version number. Then it could decide to continue on
with the old code or call the new code, depending.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company