Thread: ANN: EMS PostgreSQL Utils released!


ANN: EMS PostgreSQL Utils released!

From:
dkovt@ems-hitech.com (Danya Kovtunowitch)
Date:
Dear Sir or Madam,

EMS HiTech Company is very glad to announce its new software – EMS
PostgreSQL Utils – powerful data management utilities for PostgreSQL
Server.

Currently PostgreSQL Utils include PostgreSQL Export - a powerful tool
for PostgreSQL data export, and PostgreSQL Import - a utility for
quickly importing data into PostgreSQL tables.

EMS PostgreSQL Export is a cross-platform (Windows and Linux) product
to export your data quickly from PostgreSQL databases to any of 12
available formats, including MS Excel, MS Word, HTML, TXT, and more.
PostgreSQL Export includes a wizard, which allows you to set export
options for each table visually (destination filename, exported
fields, data formats, and more) and a command-line utility to
export data from tables and queries in one touch.

EMS PostgreSQL Import is a powerful cross-platform (Windows and Linux)
tool to import your data quickly from MS Excel, MS Access, DBF, TXT
and CSV files to PostgreSQL tables. PostgreSQL Import includes a
wizard, which allows you to set all the import options for different
files visually, and a command-line utility to execute the import in
one touch.

You can download these products (both Windows and Linux versions) from
http://www.ems-hitech.com/pgsqlutils/download.phtml.

Main PostgreSQL Export features (for both platforms):
- Export to the 12 most popular formats: MS Excel, MS Word (RTF), HTML,
XML, TXT, DBF, CSV, SYLK, DIF, LaTeX, SQL and Clipboard
- All PostgreSQL data types supported
- Export from tables selected from different databases on one host 
- Adjustable parameters for each exported table, including: type and
name of the result file, fields for export, data formats for all
exported fields or for certain fields only, header and footer of the
result file, column captions, width and alignment, and specific
parameters for export to MS Excel, MS Word (RTF), HTML, XML, TXT, CSV
and SQL
- Possibility of saving/restoring export parameters to/from a template
file
- Command line utility to export data using the configuration file,
which contains connection options, tables and queries, and template
files with all the export options
- And more. Please find details and screenshots at 
http://www.ems-hitech.com/pgsqlutils

Main PostgreSQL Import features:
- Import from most popular data formats: MS Excel, MS Access, DBF, TXT
and CSV.
- Windows and Linux support
- Importing into tables selected from different databases on one host
- Easy-to-use wizard interface
- Adjustable import parameters, including: source data formats for all
the fields and destination data formats for the selected fields,
commit options, number of records to skip, etc.
- Possibility of saving all the import parameters used in the current
wizard session to a template file, for future use with the wizard
or with the command-line utility
- Command-line utility to import data quickly using the template file
- And more. Please find details and screenshots at 
http://www.ems-hitech.com/pgsqlutils

We hope you will like our new software. Thanks for your attention.

Best Regards,
EMS HiTech Development Team


Handling data in libpq

From:
am@fx.ro
Date:
Hello everyone!

I have written a C program that:
- displays in a tabular form the data in a table
- allows inserts, deletes, updates, and user-defined filters ('enter a query')
- uses libpq and ncurses.

The program is rather inefficient right now:
- In order to fetch the data, i call:
      RecordSet = PQexec(conn, "select FIELD1,...,FIELDn,OID from TBL");
      nRecords = PQntuples(RecordSet);
- when i need a specific tuple, i call, for each field:
      PQgetvalue(RecordSet, crtRecord, field);


I'm looking for a new approach regarding fetching the data.
Usage of cursors is probably the path to follow. I would be deeply 
grateful to anyone who would share a bit of his/her experience:

- i assume the table has 50000 tuples
- what is better:  to fetch one tuple at one time, or to fetch a certain number of tuples (let's say 100).
- if i fetch 100 tuples, PQntuples returns 100? how do i know how many rows the table has?
- i have the impression that keeping track of the tuple's index inside
the RecordSet and the index inside the whole table is quite tricky,
but manageable. How do you guys handle this kind of situation?
 
- are there more examples that use cursors, besides the ones in the docs (in the libpq chapter)?
Thanks,
Adrian Maier
(am@fx.ro)                                   


Re: Handling data in libpq

From:
"Jeroen T. Vermeulen"
Date:
On Sat, Oct 19, 2002 at 01:23:52PM +0300, am@fx.ro wrote:

> I'm looking for a new approach regarding fetching the data.
> Usage of cursors is probably the path to follow. I would be deeply 
> grateful to anyone who would share a bit of his/her experience:
> 
> - i assume the table has 50000 tuples
> - what is better:  to fetch one tuple at one time, or to fetch
>   a certain number of tuples (let's say 100).

Obviously there is some query and communications overhead associated with
fetching anything, regardless of size, and the odds are it's going to be 
larger than the cost of sending a tuple of actual data across.  So if
you're going to pay that overhead anyway, you'll want to get some decent
amount of data for it.

However, that also implies that fetching 500 blocks of 100 tuples each is 
always going to be slower than getting 50000 tuples in one go.  The two
advantages of fetching smaller blocks are: (1) you can start processing
some data right away, which might be useful for e.g. user interaction
(or you can use asynchronous communication so you can process your last
block of data while fetching the next); and (2) you may be running out of
memory when fetching and processing 50000 tuples at once.  Swapping can
be a real drag on performance.
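A cursor-based read loop along these lines might look like the following. This is only a minimal sketch of the technique, not code from the thread: the connection string, cursor name, and table/column names are hypothetical, and error handling is kept to the bare minimum.

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Cursors only live inside a transaction block. */
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE c CURSOR FOR "
                         "SELECT field1, oid FROM tbl"));

    long seen = 0;
    for (;;) {
        /* Grab the next block of 100 tuples. */
        PGresult *res = PQexec(conn, "FETCH 100 FROM c");
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        int n = PQntuples(res);     /* 100, or fewer on the last block */
        if (n == 0) {
            PQclear(res);
            break;                  /* no more rows */
        }
        for (int i = 0; i < n; i++)
            printf("%ld: %s\n", seen + i, PQgetvalue(res, i, 0));
        seen += n;
        PQclear(res);
    }

    PQclear(PQexec(conn, "CLOSE c"));
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}
```

Only one block of 100 tuples is held in memory at a time, which is exactly the trade-off described above: more round trips, but bounded memory use.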


> - if i fetch 100 tuples, PQntuples returns 100? how do i know how
>   many rows has the table?  

Well, if you assume your table has 50000 rows, why do you need to ask?

Seriously though, you don't get this information.  But if an estimate
is good enough, you can "select count(*) from table" before you start
fetching data.


> - i have the impression that keeping track of the tuple's index
>   inside the RecordSet and the index inside the whole table is quite 
>   tricky, but manageable. How do you guys handle this kind of situations?

It's not so hard.  Just an extra nested loop.
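That extra bookkeeping boils down to simple arithmetic between the absolute row index in the table and the (block start, row-within-block) pair. A minimal sketch, assuming a fixed FETCH size of 100:

```c
#include <assert.h>

/* Tuples are fetched in fixed-size blocks; BLOCK is the FETCH count. */
enum { BLOCK = 100 };

/* Absolute index in the whole result, given where the currently
   loaded block starts and the row index inside the current PGresult
   (the index PQgetvalue expects). */
static long absolute_index(long block_start, int row_in_block)
{
    return block_start + row_in_block;
}

/* Inverse mapping: the row inside its block... */
static int row_within_block(long abs_index)
{
    return (int)(abs_index % BLOCK);
}

/* ...and where the block containing a given absolute row starts. */
static long block_start_of(long abs_index)
{
    return abs_index - abs_index % BLOCK;
}
```

When the user scrolls to an absolute row whose `block_start_of` differs from the block currently loaded, fetch that block; otherwise just index into the existing result set.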


Jeroen



Re: Handling data in libpq

From:
am@fx.ro
Date:
First of all, thanks for your answer.

On Sat, Oct 19, 2002 at 01:01:36PM +0200, Jeroen T. Vermeulen wrote:
> On Sat, Oct 19, 2002 at 01:23:52PM +0300, am@fx.ro wrote:
> 
> > I'm looking for a new approach regarding fetching the data.
> > Usage of cursors is probably the path to follow. I would be deeply 
> > grateful to anyone who would share a bit of his/her experience:
> > 
> > - i assume the table has 50000 tuples
> > - what is better:  to fetch one tuple at one time, or to fetch
> >   a certain number of tuples (let's say 100).
> 
> Obviously there is some query and communications overhead associated with
> fetching anything, regardless of size, and the odds are it's going to be 
> larger than the cost of sending a tuple of actual data across.  So if
> you're going to pay that overhead anyway, you'll want to get some decent
> amount of data for it.

Hmm.. Good point.
So, you're saying that fetching one tuple at a time is out of the question.

> However, that also implies that fetching 500 blocks of 100 tuples each is 
> always going to be slower than getting 50000 tuples in one go. The two
> advantages of fetching smaller blocks are: (1) you can start processing
> some data right away, which might be useful for e.g. user interaction
> (or you can use asynchronous communication so you can process your last
> block of data while fetching the next); 

Well, the program doesn't do any automatic data processing:
it only allows the user to see and modify the data.

>and (2) you may be running out of
> memory when fetching and processing 50000 tuples at once.  Swapping can
> be a real drag on performance.

There is also another aspect: usually the user will not
browse all of those tuples (the program allows the user to specify
custom WHERE and ORDER BY clauses, so that he/she can select only
the interesting rows; only sometimes will he/she scroll through all tuples).

The conclusion is that the number of rows fetched at one time 
should be chosen carefully. 

> > - if i fetch 100 tuples, PQntuples returns 100? how do i know how
> >   many rows has the table?  
> 
> Well, if you assume your table has 50000 rows, why do you need to ask?

This assumption is only for that table. I want my program to
work well even for larger tables (most of the program's classes
and functions are independent of the actual table, and will be
moved to a library for later use).

> Seriously though, you don't get this information.  But if an estimate
> is good enough, you can "select count(*) from table" before you start
> fetching data.

I see: the returned number is exact as long as another user doesn't
add/delete tuples after the 'select count(*)' and before the fetch.
I thought there was a way to find out how many tuples a FETCH ALL
would return, without executing that command.

> > - i have the impression that keeping track of the tuple's index
> >   inside the RecordSet and the index inside the whole table is quite 
> >   tricky, but manageable. How do you guys handle this kind of situations?
> 
> It's not so hard.  Just an extra nested loop.

Adrian Maier
(am@fx.ro)


Re: Handling data in libpq

From:
"John L. Turner"
Date:
On Saturday 19 October 2002 11:47, am@fx.ro wrote:
> First of all, thanks for your answer.
>
> On Sat, Oct 19, 2002 at 01:01:36PM +0200, Jeroen T. Vermeulen wrote:
> > On Sat, Oct 19, 2002 at 01:23:52PM +0300, am@fx.ro wrote:
> > > I'm looking for a new approach regarding fetching the data.
> > > Usage of cursors is probably the path to follow. I would be deeply
> > > grateful to anyone who would share a bit of his/her experience:
> > >
> > > - i assume the table has 50000 tuples
> > > - what is better:  to fetch one tuple at one time, or to fetch
> > >   a certain number of tuples (let's say 100).
> >
> > Obviously there is some query and communications overhead associated with
> > fetching anything, regardless of size, and the odds are it's going to be
> > larger than the cost of sending a tuple of actual data across.  So if
> > you're going to pay that overhead anyway, you'll want to get some decent
> > amount of data for it.
>
> Hmm.. Good point.
> So, you're saying that fetching one tuple at one time is out of discussion.
>
> > However, that also implies that fetching 500 blocks of 100 tuples each is
> > always going to be slower than getting 50000 tuples in one go. The two
> > advantages of fetching smaller blocks are: (1) you can start processing
> > some data right away, which might be useful for e.g. user interaction
> > (or you can use asynchronous communication so you can process your last
> > block of data while fetching the next);
>
> Well, the program doesn't do automatically data processing:
> it only allows the user do see and modify the data.
>
> >and (2) you may be running out of
> > memory when fetching and processing 50000 tuples at once.  Swapping can
> > be a real drag on performance.
>
> There is also another aspect: usually the user will not
> browse all of those tuples ( the program allows the user to specify
> custom WHERE and ORDER BY clauses, so that he/she can select only
> the interesting rows ; and only sometimes will scroll all tuples ).
>
> The conclusion is that the number of rows fetched at one time
> should be chosen carefully.
>
> > > - if i fetch 100 tuples, PQntuples returns 100? how do i know how
> > >   many rows has the table?
> >
> > Well, if you assume your table has 50000 rows, why do you need to ask?
>
> This assumption is only for that table. I want that my program
> works well even for larger tables ( most of the program's classes
> and functions are independent from the actual table, and will be
> moved to a library for later use).
>
> > Seriously though, you don't get this information.  But if an estimate
> > is good enough, you can "select count(*) from table" before you start
> > fetching data.
>
> I see: the returned number is exact as long as another user doesn't
> add/delete tuples after the 'select count(*)' and before the fetch.
>
> I thought there was a way to find out how many tuples would return
> a FETCH ALL, without executing that command.
>
> > > - i have the impression that keeping track of the tuple's index
> > >   inside the RecordSet and the index inside the whole table is quite
> > >   tricky, but manageable. How do you guys handle this kind of
> > > situations?
> >
> > It's not so hard.  Just an extra nested loop.
>
> Adrian Maier

Another way to look at it is to create a TreeView showing only part of the
data.

We do that for both our Customers and Products tables.

First, select all the first letters in the Customers table's Last Name field
and show the user that list of characters [ A ... Z ].

Then, when the user clicks (selects) a character, only those last names with
the selected character are fetched and added to the TreeView display.

Ditto for Products, only they are grouped by a category, then any number of
sub-categories, until a small group of products is shown.

After the TreeView item is selected, the focus is changed to the second page
of a tab form to allow each field to be edited.
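The two steps of that drill-down can be sketched in libpq as follows. This is an illustration only; the connection string and the `customers`/`last_name` names are hypothetical, and the selected initial is hard-coded:

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* Step 1: the [ A ... Z ] list shown to the user. */
    PGresult *res = PQexec(conn,
        "SELECT DISTINCT substr(last_name, 1, 1) AS initial "
        "FROM customers ORDER BY initial");
    for (int i = 0; i < PQntuples(res); i++)
        printf("%s\n", PQgetvalue(res, i, 0));
    PQclear(res);

    /* Step 2: fetch only the names under the initial the user picked.
       (In real code, pass the initial via PQexecParams instead of
       pasting user input into the query string.) */
    res = PQexec(conn,
        "SELECT last_name FROM customers "
        "WHERE last_name LIKE 'A%' ORDER BY last_name");
    for (int i = 0; i < PQntuples(res); i++)
        printf("%s\n", PQgetvalue(res, i, 0));
    PQclear(res);

    PQfinish(conn);
    return 0;
}
```

Each query returns a small, user-driven slice of the table, so no single result set ever approaches the full 50000 rows.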
--
John Turner
JCI Inc.
http://home.ntelos.net/~JLT
"Just because you do not know the answer
does not mean that someone else does"
Stephen J. Gould, {rip}


Re: Handling data in libpq

From:
am@fx.ro
Date:
On Sat, Oct 19, 2002 at 10:48:04AM -0400, John L. Turner wrote:
> Another way to look at it, is to create a TreeView, showing only part of the 
> data
> 
> We do that for both our Customer's and  Products Tables
> First, select all the First Letters in a Customers Table's Last Name Field
> Show the User That list of Characters [ A ... Z ] 
> 
> Then when the users Clicks (Selects) the Character, only those Last Names With 
> the Selected Character are fetched, and added to Treeview display
> Ditto for Products, only they are Grouped by a Category, then any number of  
> Sub Categories, until the small group of products are shown
> 
> After the TreeView item is selected, the focus is changed to the second page 
> of a Tab Form to allow each field to be edited

This seems to be a useful strategy. But my program is ncurses-based,
so i have no TreeViews. I hope i'll find a way to adapt
your idea of first selecting a group of tuples based on some criterion
like the first letter.

Thanks,
Adrian Maier
(am@fx.ro)