Discussion: ODBC Driver Failure - MS Access - Large record volume


ODBC Driver Failure- MS Access- Large record volume

From: Robin Weber
Date:
Hello Community,

I'm new to this list.

I am a user of the Postgres ODBC driver, mainly under MS Access and
Matlab.  We deal in very large tabular datasets.  (My largest table is
currently 72 million records, and it will grow several times that size
before the year is out.)

The driver fails after several million record inserts (under Access).  Once
failed (with nothing more helpful than an "ODBC call failed" message box),
it will process no further requests.  (I believe it will also fail after
receiving that much data, as well as sending.)

(I would think of a memory leak of some sort, but in my 27 years of
professional programming I have never dealt with the inner workings of an
ODBC driver, and I don't know how to get driver error information.  This is
also my first open source experience, so please forgive my naivete.)

I would like to figure out how to fix this problem.  I'm willing to help
debug, but I need help getting started.  How do I go about trying to
understand this problem?

One thing that might be helpful:  Is there a programmatic way to get Access
to close and unload, then reopen the ODBC driver without actually quitting
Access?  I notice that when I start Access "fresh" I can pump another
several million records through before failing.  If I can make my programs
unload then reload the driver every few million records, it might make a
good workaround for my problem.  Any clues out there?

With thanks,
Robin



--------------------------------------------------
Robin Weber
Data Manager and Application Developer for
Marine Mammals and Birds
Tagging of Pacific Pelagics Project
University of California Santa Cruz
Long Marine Lab - Center for Ocean Health
100 Shaffer Road
Santa Cruz, CA 95060-5730
(831) 459-2691




Re: ODBC Driver Failure- MS Access- Large record volume

From: "Greg Campbell"
Date:
Hello and welcome Robin,
I find the key to getting help is giving enough information.

You might answer a few questions.
1. What version of the ODBC driver are you using? (I found my version using REGEDIT under the key
HKEY_CURRENT_USER\Software\Microsoft\Installer\Products.)
2. What version of Access are you using?
3. What Access library are you using, DAO or ADO?
4. Since Access is typically interactive, how are you transacting these millions of records: reading a
file, or is some external device driving Access?
5. Are you connecting through the Access objects (normal queries and tables), or using pass-through?
6. What version of the Postgres server are you using? Linux or Windows?
7. Is logging turned on at the server? (e.g. pg_ctl start -l my_pg.log)


By the way, other list members: after installation, is there a recommended way to determine the pgODBC
driver version?

Robin,
Before knowing any of these answers I can say that the ADO Connection object is your means of "opening and
closing" the ODBC (or OLE DB) driver. Opening and closing often is the rule of thumb for interacting with a
server database. There is typically caching going on for recent connections, so opening a new connection is
not the resource hog it might seem. If all that is OK, you may indeed have a pgODBC driver version with a
leak.
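
Something along these lines is the pattern I mean. The DSN name, credentials, and batch count below are
only placeholders, and it assumes a reference to the Microsoft ActiveX Data Objects library is set:

' Sketch only: open a fresh ADO connection per batch and close it afterwards.
Sub LoadBatches()
    Dim cn As ADODB.Connection
    Dim i As Long

    For i = 1 To 10                      ' one pass per batch or input file
        Set cn = New ADODB.Connection
        cn.Open "DSN=mypgdsn;UID=myuser;PWD=mypassword;"

        ' ... run the INSERTs for this batch here, e.g.:
        ' cn.Execute "INSERT INTO mytable (col1) VALUES (1)", , adExecuteNoRecords

        cn.Close                         ' give the driver a chance to release per-connection resources
        Set cn = Nothing
    Next i
End Sub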

And as a complete aside, because it has bitten me too many times: set the Jet ConnectionTimeout under
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Jet 4.0\ to 0 (never time out). A Jet connection
(linked table) will lose connectivity after a default of 60 minutes of inactivity unless this key is
set to some other value (longer or shorter) or 0.

Refreshing linked tables is also doable. You should be able to Google it, or write back and someone will
no doubt expound upon it. The Access (version) Handbooks from Sybex are excellent references for the
mundane to the arcane.




Attachments

Re: ODBC Driver Failure- MS Access- Large record volume

From: "Dave Page"
Date:
-----Original Message-----
From: "Greg Campbell"<greg.campbell@us.michelin.com>
Sent: 08/09/05 22:36:16
To: "Robin Weber"<rweber@pmc.ucsc.edu>
Cc: "pgsql-odbc@postgresql.org"<pgsql-odbc@postgresql.org>
Subject: Re: [ODBC] ODBC Driver Failure- MS Access- Large record volume

> By the way, other list members: after
> installation is there a recommended way to
> determine the pgODBC driver version?

Look at the version column on the Drivers tab in the driver manager.

FWIW, there have been a number of leaks fixed in the recent libpq snapshots. It would be worth trying 08.01.0003 (which
has a temporary name, so DSN changes will be required from non-libpq versions).

Regards, Dave


Re: ODBC Driver Failure- MS Access- Large record volume

From: Robin Weber
Date:
Hi Greg,

Thanks so much for your response.  It's nice to get a dialog going.  To
answer your questions:

>1. What version of the ODBC driver are you using? (I found my  version
>using REGEDIT to key
>HKEY_CURRENT_USER\Software\Microsoft\Installer\Products)

08.00.0102
I had the same problem running a couple of different versions in the 7.0
series.

>2. What version of Access are you using?

MS Access 2003 SP1

>3. What Access library are you using -DAO or ADO?

Usually DAO, but I've had the same problem using ODBCDirect, repeatedly
executing SQL lines (such as inserts) several million times.

>4. Since Access is typically interactive, how are you transacting these
>millions of records, reading a file or is some external device driving Access?

I'm reading text files, parsing the lines, loading variables, and either
processing AddNew calls on a linked table, or doing INSERTs in an ODBCDirect
workspace.
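
Roughly, the linked-table variant looks like the sketch below; the table and field names here are just
stand-ins, not my real schema:

' Sketch of the DAO linked-table path: parse each line, AddNew/Update on the linked table.
Sub ImportFileDAO(ByVal sFile As String)
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim nFile As Integer
    Dim sLine As String
    Dim vParts As Variant

    Set db = CurrentDb()
    Set rs = db.OpenRecordset("tblReadings", dbOpenDynaset)   ' linked Postgres table (name is made up)

    nFile = FreeFile
    Open sFile For Input As #nFile
    Do While Not EOF(nFile)
        Line Input #nFile, sLine
        vParts = Split(sLine, ",")        ' load variables from the parsed line
        rs.AddNew
        rs!tag_id = vParts(0)
        rs!reading = vParts(1)
        rs.Update
    Loop
    Close #nFile

    rs.Close
    Set rs = Nothing
    Set db = Nothing
End Sub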

>5. Are you connecting through the Access objects (normal queries and
>tables), or using pass-through?

Both, as above.

>6. What version of Postgres server are you using? Linux or Windows?

8.0.3 under Windows XP, unfortunately.  (Our lab is Linux-deficient)

>7. Is logging turned on at the server? (e.g. pg_ctl start -l my_pg.log)

Not yet.  I will activate it.  Can you give me a hint on what to look for
in the log?

I will experiment with getting hold of the connection object and closing
and re-opening it to start a new driver instance.  It takes a long time of
stuffing data through the driver to re-create the crash, so it's hard to
know whether I've gotten around the bug or not.

Please let me know if my answers invite any new information.

Oh, also, the link from the Postgres web site has changed, and I'm not sure
where to get new versions of the driver.  Any help with my confusion?

With thanks,
Robin


--------------------------------------------------
Robin Weber
Data Manager and Application Developer for
Marine Mammals and Birds
Tagging of Pacific Pelagics Project
University of California Santa Cruz
Long Marine Lab - Center for Ocean Health
100 Shaffer Road
Santa Cruz, CA 95060-5730
(831) 459-2691




Re: ODBC Driver Failure- MS Access- Large record volume

From: "Dave Page"
Date:

> -----Original Message-----
> From: pgsql-odbc-owner@postgresql.org
> [mailto:pgsql-odbc-owner@postgresql.org] On Behalf Of Robin Weber
> Sent: 09 September 2005 01:44
> To: Greg Campbell
> Cc: pgsql-odbc@postgresql.org
> Subject: Re: [ODBC] ODBC Driver Failure- MS Access- Large
> record volume
>
> 08.00.0102
> I had the same problem running a couple of different versions
> in the 7.0
> series.

Please try 08.01.0003.

> Oh, also, the link from the Postgres web site has changed,
> and I'm not sure
> where to get new versions of the driver.  Any help with my confusion?

Err, the download pages haven't changed for months. You can get the
08.01.0003 version from
http://www.postgresql.org/ftp/odbc/versions/snapshots/. Yes it is a
snapshot, but it is far more stable than the earlier 08.00 versions.

Regards, Dave.

Re: ODBC Driver Failure- MS Access- Large record volume

From: "Greg Campbell"
Date:
I hope using the driver version Dave P. suggests solves your problem.

It sounds difficult to troubleshoot. I would say use care when turning logging on at the server. Not so
much because of resources, but because you need to configure PostgreSQL for what you want to log. That is,
edit the postgresql.conf file. Note the log_statement parameter. You could start with just logging
connections to see if any fail. It seems like it would be difficult to log millions of transactions to find
one error. That's a heck of a log file to look through. And turning on ODBC logging (client side via the
ODBC Administrator) could (would) be even more taxing.
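
As a rough starting point, assuming 8.0-series parameter names (check your postgresql.conf for the exact
ones), something like this logs connections and failing statements without logging every INSERT:

# log connections and errors, but not every statement
log_connections = true
log_disconnections = true
log_statement = 'none'           # 'all' would log every statement (enormous with millions of INSERTs)
log_min_error_statement = error  # log the text of any statement that raises an error
log_line_prefix = '%t %p '       # timestamp and backend PID on each line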

I would expect ODBCDirect, or INSERTs in general, to provide much better performance than using DAO
AddNew.

In general, the thing you want is as much information as possible when an error is thrown. I haven't used
ODBCDirect extensively, so I am not sure of the best way to determine whether the PostgreSQL server threw a
native error, or whether Access or some "container" is throwing the error. With your "ODBC call failed", it
would be nice to know whether there are err.Number and err.Source values, and whether the ODBCDirect
Workspace contains an actual collection of errors instead of just one.
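
One way to get at that, assuming DAO's DBEngine.Errors collection is populated after the failure (I have
not verified this for ODBCDirect against pgODBC), is an error handler that dumps every entry rather than
just the top-level message:

' Sketch only: dump the whole DAO Errors collection instead of the single "ODBC call failed" text.
Sub RunInsert(ByVal con As DAO.Connection, ByVal sSQL As String)
    Dim e As DAO.Error

    On Error GoTo HandleErr
    con.Execute sSQL
    Exit Sub

HandleErr:
    Debug.Print "VBA error "; Err.Number; ": "; Err.Description
    For Each e In DBEngine.Errors          ' the full ODBC diagnostic chain, not just the last message
        Debug.Print e.Number; e.Source; e.Description
    Next e
End Sub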

Some others might know if ADO provides a better object model for what you are trying to do.

Also note basic observations, like: do failures happen at the beginning or in the middle of processing
thousands of transactions? Is Access constantly open, or does it get opened afresh periodically? Do you
process data in sessions (one now and another later)? How far apart are the sessions? Is there a pattern
relative to the failures?







Attachments

Re: ODBC Driver Failure- MS Access- Large record volume

From: Marko Ristola
Date:
Hi.

I haven't read all your emails.
If you use psqlodbc, you need the following:

On 32-bit operating systems, the psqlodbc driver must be configured to
support several million records.

The ODBC data source (or connection) must have:
UseDeclareFetch=1
Fetch=32

This enables a feature that uses a roughly constant amount of memory
during a SELECT, so you are able to query tens of millions of rows without
a memory allocation failure. That seems to be your problem: a memory
allocation failure.
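
These can be set in the DSN's advanced options. Assuming your driver build also accepts them as
connect-string keywords (I have not verified this for 08.00.0102), an ODBCDirect connection might look
like this; the DSN name and credentials are placeholders:

' Sketch only: pass UseDeclareFetch/Fetch in the connect string of an ODBCDirect connection.
Dim ws As DAO.Workspace
Dim con As DAO.Connection

Set ws = DBEngine.CreateWorkspace("odbcws", "admin", "", dbUseODBC)
Set con = ws.OpenConnection("", dbDriverNoPrompt, False, _
    "ODBC;DSN=mypgdsn;UID=myuser;PWD=mypassword;UseDeclareFetch=1;Fetch=32")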

Without the above configuration, the crash point might come at around 8
million rows. The crash point depends on the operating system and its
version, as well as the average size in memory of one query result row.

The above configuration option also affects query plans, but at least
there is no crash.

I don't know whether the application itself supports so many rows;
psqlodbc should be fine with the correct options. It would be nice if you
could verify that your psqlodbc driver version works with a constant
amount of memory.

Regards, Marko Ristola




Re: ODBC Driver Failure- MS Access- Large record volume

From: Robin Weber
Date:
Hello,

I'm finally getting back to this.  Thanks all for your responses.

I have found that by using an ODBCDirect workspace to process discrete SQL
INSERTs or UPDATEs, and periodically closing and re-opening the workspace,
I can continue to process my millions of records without crashing (at
least with driver 08.00.0102).  The files that I am merging into tables
are 1-3 million records each, and it is convenient and natural to close and
re-open the workspace between these files.
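
In rough outline the pattern is as follows (the connect string details are placeholders):

' Sketch of the workaround: a fresh ODBCDirect workspace and connection per input file.
Sub MergeFiles(ByVal vFiles As Variant)
    Dim ws As DAO.Workspace
    Dim con As DAO.Connection
    Dim i As Long

    For i = LBound(vFiles) To UBound(vFiles)
        Set ws = DBEngine.CreateWorkspace("odbcws", "admin", "", dbUseODBC)
        Set con = ws.OpenConnection("", dbDriverNoPrompt, False, _
            "ODBC;DSN=mypgdsn;UID=myuser;PWD=mypassword;")

        ' ... read vFiles(i) line by line and con.Execute each INSERT/UPDATE here ...

        con.Close                        ' closing these between files is what avoids the failure for me
        ws.Close
        Set con = Nothing
        Set ws = Nothing
    Next i
End Sub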

This is an acceptable work-around for my problem, and it sounds like
performance may be better anyway than using DAO recordsets for entering
these large volumes of data.

If any of the developers are interested enough, I can be convinced to
develop a simple scenario to re-create the problem.  Please let me know.

Thanks to all,
Robin


--------------------------------------------------
Robin Weber
Data Manager and Application Developer for
Marine Mammals and Birds
Tagging of Pacific Pelagics Project
University of California Santa Cruz
Long Marine Lab - Center for Ocean Health
100 Shaffer Road
Santa Cruz, CA 95060-5730
(831) 459-2691