Thread: BLOBs
The following statement is excerpted from the
"Oracle to Postgres Conversion" wiki page,
regarding Postgres 8.4.
BLOBs
Binary large object support in Postgres is very poor and unsuitable for use in a 24/7 environment, because you can't dump them with pg_dump. Backing up a database that makes use of Postgres large objects requires one to knock down the RDBMS and dump the files in the database directory.
Don Baccus put together a hack that extends AOLserver's postgres driver with BLOB-like support, by uuencoding/decoding binary files before stuffing them into or extracting them from the database. The resulting objects can be consistently dumped by "pg_dump" while the RDBMS is up and running. There is no need to interrupt service while making your backup.
To get around the one-block limit on the size of a tuple imposed by Postgres, the driver segments the encoded data into 8K chunks.
Postgres large objects are scheduled for a major overhaul in summer 2000. Because of this, only the BLOB functionality used by the ACS was implemented.
Now, for Postgres 9.4.5, is backup of BLOBs still an issue as described above, or does it now work equivalently to Oracle?
Thanks
Eugene
Eugene Yin <eugeneymail@ymail.com> writes:
> The following statement is excerpted from the
> "Oracle to Postgres Conversion"
> https://wiki.postgresql.org/wiki/Oracle_to_Postgres_Conversion#Grammar_Differences

My goodness, that's out of date (as you should have guessed from the
reference to "scheduled for an overhaul in summer 2000"). pg_dump
has been able to dump large objects just fine since 8.1 or so.

I don't know what else an Oracle user might be expecting that we don't
have, though, so I'm hesitant to change the text.

regards, tom lane
I use a BLOB column in an Oracle table to store image and document files. Now, for Postgres (9.4.5), I have two options: BYTEA or OID.
Consider how the parameters are passed (for saving) from the Java side:
DiskFileItemDeepy file = myFile;
InputStream is = null;
long fileSize = 0;
if (file != null && file.getFileSize() > 0) {
    is = file.getInputStream();
    fileSize = file.getFileSize();
    call.setBinaryStream(1, is, fileSize);
}
...
call.execute();

// When retrieving the data, use:
java.sql.Blob blob = (Blob) resultSet.getBlob(tableColumnName);
For the purpose mentioned above, which Postgres data type, BYTEA or OID, is the better candidate for replacing the BLOB?
Thanks
Eugene
On Saturday, January 9, 2016 11:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Eugene Yin <eugeneymail@ymail.com> writes:
> The following statement is excerpted from the
> "Oracle to Postgres Conversion"
> https://wiki.postgresql.org/wiki/Oracle_to_Postgres_Conversion#Grammar_Differences
My goodness, that's out of date (as you should have guessed from the
reference to "scheduled for an overhaul in summer 2000"). pg_dump
has been able to dump large objects just fine since 8.1 or so.
I don't know what else an Oracle user might be expecting that we don't
have, though, so I'm hesitant to change the text.
regards, tom lane
--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
På mandag 11. januar 2016 kl. 01:37:52, skrev Eugene Yin <eugeneymail@ymail.com>:
From my experience, always use OID for BLOBs, and use the pgjdbc-ng JDBC driver, available here: https://github.com/impossibl/pgjdbc-ng
Maven config:

<properties>
    <version.pgjdbc-ng>0.6</version.pgjdbc-ng>
</properties>

<dependency>
    <groupId>com.impossibl.pgjdbc-ng</groupId>
    <artifactId>pgjdbc-ng</artifactId>
    <version>${version.pgjdbc-ng}</version>
    <classifier>complete</classifier>
</dependency>
In the connection URL, use blob.type=oid:
datasource.url=jdbc:pgsql://localhost:5432/andreak?blob.type=oid
This is the only combination (as far as I know) which lets you work with true streams all the way down to PG. This way you can work with very large images/movies/documents without sacrificing memory.
The official JDBC driver for PG doesn't support BLOBs properly; there is no getBlob/createBlob (among other things, like custom type mappings).
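To illustrate the streaming approach, here is a minimal sketch using only the standard JDBC Blob API (which pgjdbc-ng implements). The table and column names (my_stuff, data) and the helper class are assumptions for illustration, not part of any particular API:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.sql.Blob;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BlobStore {

    // Builds a pgjdbc-ng connection URL with blob.type=oid, as described above.
    public static String pgjdbcNgUrl(String host, int port, String database) {
        return String.format("jdbc:pgsql://%s:%d/%s?blob.type=oid",
                host, port, database);
    }

    // Streams 'data' into an OID (large object) column without buffering
    // the whole file in memory. Assumes a table my_stuff(id BIGINT, data OID).
    public static void storeBlob(Connection conn, long id, InputStream data)
            throws SQLException, IOException {
        Blob blob = conn.createBlob();               // standard JDBC entry point
        try (OutputStream out = blob.setBinaryStream(1)) {
            data.transferTo(out);                    // chunked copy (Java 9+)
        }
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO my_stuff (id, data) VALUES (?, ?)")) {
            ps.setLong(1, id);
            ps.setBlob(2, blob);
            ps.executeUpdate();
        }
        blob.free();                                 // release the handle
    }
}
```

With a driver that maps Blob onto large objects, the data flows from the input stream to the server in chunks, which is the point of choosing OID over BYTEA for very large files.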
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
QUOTE:
Maven config:

<properties>
    <version.pgjdbc-ng>0.6</version.pgjdbc-ng>
</properties>

<dependency>
    <groupId>com.impossibl.pgjdbc-ng</groupId>
    <artifactId>pgjdbc-ng</artifactId>
    <version>${version.pgjdbc-ng}</version>
    <classifier>complete</classifier>
</dependency>
I do not use Maven.
I use web.xml and standalone-ha.xml of JBoss AS 7.1.1 to configure JDBC, like this:
[web.xml]
<resource-ref>
<description>Resource reference to my database</description>
<res-ref-name>jdbc/web</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
<res-auth>Application</res-auth>
<res-sharing-scope>Shareable</res-sharing-scope>
</resource-ref>
[standalone-ha.xml]
<datasource jta="false" jndi-name="java:/jdbc/web" pool-name="OracleDS" enabled="true" use-ccm="false">
<connection-url>jdbc:oracle:thin:@192.168.1.20:1521:deepy</connection-url>
<driver-class>oracle.jdbc.OracleDriver</driver-class>
<driver>OracleJDBCDriver</driver>
<security>
<security-domain>mysecuritydomain</security-domain>
</security>
<validation>
<validate-on-match>false</validate-on-match>
<background-validation>false</background-validation>
</validation>
<statement>
<share-prepared-statements>false</share-prepared-statements>
</statement>
</datasource>
What corresponding changes do I need to make to use Postgres?
Thanks
Eugene
På mandag 11. januar 2016 kl. 16:37:50, skrev Eugene Yin <eugeneymail@ymail.com>:
Using Maven is only for getting the right dependencies in place for your app; it has nothing to do with configuration.
The URL should be of the form:
jdbc:pgsql://<host>:<port>/<database_name>?blob.type=oid
Other parameters are available here: http://impossibl.github.io/pgjdbc-ng/
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
I did some searching on the OID data type. Here is something I found regarding deletion of OID data.
QUOTE:
"The Large Object method for storing binary data is better suited to storing very large values, but it has its own limitations. Specifically deleting a row that contains a Large Object reference does not delete the Large Object.
Deleting the Large Object is a separate operation that needs to be performed.
Large Objects also have some security issues since anyone connected to the database can view and/or modify any Large Object, even if they don't have permissions to view/update the row containing the Large Object reference."
So I have two questions:
1) If it is true that "Deleting the Large Object is a separate operation that needs to be performed", what operation do I need to perform after the row deletion in order to delete the OID data? Possibly put it into an AFTER trigger?
2) "Large Objects also have some security issues since anyone connected to the database can view and/or modify any Large Object". Does this pose a real security risk, or is it just a theoretical concern?
På tirsdag 12. januar 2016 kl. 02:32:46, skrev Eugene Yin <eugeneymail@ymail.com>:
1) You don't need to perform any "after delete" operation as a developer, but the DBA (or someone else) has to execute vacuumlo (see "man vacuumlo" for more info) using cron or some other periodic scheduling tool.
2) Your mileage may vary, but for our app this isn't an issue.
PS: 8.4 is EOL, use a more current version, preferably 9.5.
I also recommend the -ng driver as it's the only one with proper BLOB-support, as mentioned earlier in this thread.
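The vacuumlo scheduling mentioned in point 1 might look like the following crontab entry (database name and user are assumptions for illustration):

```shell
# Run vacuumlo nightly at 03:00 to remove large objects that are no
# longer referenced by any OID column; -n instead would only report
# what it would remove (dry run).
0 3 * * *  vacuumlo -U postgres mydb
```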
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
BLOB (binary large object): see Large Object Support
- Minuses
- must use different interface from what is normally used to access BLOBs.
- Need to track OID. Normally a separate table with additional meta data is used to describe what each OID is.
- (8.4 and <8.4) No access controls in database.
- Sometimes advised against (basically you only need them if your entry is so large you need/want to seek and read bits and pieces of it at a time).
Does one really need to track the OID, with a separate table of additional metadata describing what each OID is?
Thanks
Eugene
På søndag 17. januar 2016 kl. 20:26:34, skrev Eugene Yin <eugeneymail@ymail.com>:
Using BLOBs and PG's OIDs is really simple if you use the pgjdbc-ng driver.
Your column has to be defined with type OID, like this:
CREATE TABLE my_stuff(
id SERIAL PRIMARY KEY,
data OID
);
Then use the JDBC BLOB-interface as defined.
You don't need to do anything else except running the 'vacuumlo' system command (not an SQL command) once in a while, preferably using 'cron' or some other scheduling tool.
Note that "everything" you read about PG and JDBC and BLOBs will describe the official JDBC driver, which doesn't even support BLOBs. So if you just get past that and use the pgjdbc-ng driver, your life will be much simpler.
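Reading the BLOB back with the standard JDBC Blob interface might then look like this sketch (the table and column come from the CREATE TABLE above; the chunked-copy helper is a hypothetical utility, not a library function):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.sql.Blob;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BlobFetch {

    // Copies a stream in fixed-size chunks so only one buffer is ever
    // held in memory; returns the number of bytes copied.
    public static long copyChunked(InputStream in, OutputStream out, int bufSize)
            throws IOException {
        byte[] buf = new byte[bufSize];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    // Streams the large object for the given row id into 'out'
    // without materializing it as a byte[].
    public static void fetchBlob(Connection conn, long id, OutputStream out)
            throws SQLException, IOException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT data FROM my_stuff WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    Blob blob = rs.getBlob("data");
                    try (InputStream in = blob.getBinaryStream()) {
                        copyChunked(in, out, 8192);
                    }
                    blob.free();
                }
            }
        }
    }
}
```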
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
Andreas Joseph Krogh schrieb am 17.01.2016 um 23:08:
> will describe the official JDBC-driver which doesn't even support BLOBs.

This is not true. Using set/getBinaryStream() or set/getBytes() works just fine.
På søndag 17. januar 2016 kl. 23:17:29, skrev Thomas Kellerer <spam_eater@gmx.net>:
Andreas Joseph Krogh schrieb am 17.01.2016 um 23:08:
> will describe the official JDBC-driver which doesn't even support BLOBs.
This is not true.
using set/getBinaryStream() or set/getBytes() works just fine
Just fine for you, maybe, but it doesn't work for people working with large data (think GB) who want streams. See my previous reply.
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963