Discussion: PostgreSQL Columnar Store for Analytic Workloads
Dear Hackers,

We at Citus Data have been developing a columnar store extension for PostgreSQL. Today we are excited to open source it under the Apache v2.0 license.

This columnar store extension uses the Optimized Row Columnar (ORC) format for its data layout, which improves upon the RCFile format developed at Facebook, and brings the following benefits:

* Compression: Reduces in-memory and on-disk data size by 2-4x. Can be extended to support different codecs. We used the functions in pg_lzcompress.h for compression and decompression.
* Column projections: Only reads column data relevant to the query. Improves performance for I/O-bound queries.
* Skip indexes: Stores min/max statistics for row groups, and uses them to skip over unrelated rows.

We used the PostgreSQL FDW APIs to make this work.
The extension doesn't implement the writable FDW API, but it uses the process utility hook to enable the COPY command for columnar tables.

This extension uses PostgreSQL's internal data type representation to store data in the table, so this columnar store should support all data types that PostgreSQL supports.

We tried the extension on the TPC-H benchmark with a 4GB scale factor on an m1.xlarge Amazon EC2 instance, and query performance improved by 2x-3x compared to regular PostgreSQL tables. Note that we flushed the page cache before each test to see the impact on disk I/O.

When data is cached in memory, the performance of cstore_fdw tables was close to the performance of regular PostgreSQL tables.

For more information, please visit:
 * our blog post: http://citusdata.com/blog/76-postgresql-columnar-store-for-analytics
 * our GitHub page: https://github.com/citusdata/cstore_fdw

Your feedback is really appreciated.
Thanks,
 -- Hadi
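As a concrete starting point, usage follows the standard FDW pattern of creating a server and a foreign table, then loading data via COPY. The sketch below is illustrative only: the table, columns, and file path are hypothetical, and the exact option names should be checked against the GitHub README.

```sql
-- Load the extension and create a server for it (standard FDW setup).
CREATE EXTENSION cstore_fdw;
CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

-- A hypothetical events table; 'compression' refers to the pglz-based
-- codec mentioned above.
CREATE FOREIGN TABLE events (
    event_time  timestamptz,
    user_id     bigint,
    event_type  text
)
SERVER cstore_server
OPTIONS (compression 'pglz');

-- Loading goes through COPY, since the writable FDW API is not implemented.
COPY events FROM '/path/to/events.csv' WITH CSV;
```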
Hi Hadi
Do you think that cstore_fdw is also well suited for storing and retrieving linked data (RDF)?
-S.
2014-04-03 18:43 GMT+02:00 Hadi Moshayedi <hadi@citusdata.com>:
Hi Stefan,
On Tue, Apr 8, 2014 at 9:28 AM, Stefan Keller <sfkeller@gmail.com> wrote:
Hi Hadi
Do you think that cstore_fdw is also well suited for storing and retrieving linked data (RDF)?
I am not very familiar with RDF. Note that cstore_fdw doesn't change the query language of PostgreSQL, so if your queries are expressible in SQL, they can be answered using cstore_fdw too. If your data is huge and doesn't fit in memory, then using cstore_fdw can be beneficial for you.
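To illustrate the point that the query language is unchanged: if RDF triples were stored in a cstore_fdw foreign table (a hypothetical schema, not something from the extension), queries stay plain SQL:

```sql
-- Hypothetical subject/predicate/object table backed by cstore_fdw.
-- Column projection means only the referenced columns are read from
-- disk, and skip indexes may prune row groups via the WHERE filter.
SELECT subject, object
FROM triples
WHERE predicate = 'rdf:type';
```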
Can you give some more information about your use case? For example, what are some of your queries? Do you have sample data? How much memory do you have? How large is the data?
-- Hadi