Discussion: transaction limits?
Are there known limits to how many rows can be inserted by one transaction, or does that just reflect the already documented row, table and database limits?
jeff sacksteder wrote:
> Are there known limits to how many rows can be inserted by one transaction,
> or does that just reflect the already documented row, table and database
> limits?

Well, the system will need to be able to roll back the transaction, so at
some point your system will grind to a halt. I shouldn't be surprised if
there was some counter that couldn't cope beyond 2^31 rows too, but no-one's
found it yet.

--
Richard Huxton
Archonet Ltd
On 10/21/05, Richard Huxton <dev@archonet.com> wrote:
> jeff sacksteder wrote:
> > Are there known limits to how many rows can be inserted by one transaction,
> > or does that just reflect the already documented row, table and database
> > limits?
>
> Well, the system will need to be able to roll back the transaction, so
> at some point your system will grind to a halt. I shouldn't be surprised
> if there was some counter that couldn't cope beyond 2^31 rows too but
> no-one's found it yet.

Just by not indicating that a transaction did commit, others will keep
ignoring its rows. There is nothing to roll back here, thanks to MVCC.
Of course, those rows will still be physically present until the next
VACUUM.

--
Nicolas Barbier
http://www.gnu.org/philosophy/no-word-attachments.html
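[The visible effect Nicolas describes — an aborted transaction's rows are simply never seen by anyone else — can be illustrated with any transactional store. A minimal sketch using Python's stdlib sqlite3 (SQLite uses rollback journaling rather than PostgreSQL-style MVCC, so the mechanism differs, but the observable behavior is the same):]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.commit()

# The sqlite3 module opens a transaction implicitly before the INSERTs.
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
conn.rollback()  # abort: the rows were never marked committed

print(conn.execute("SELECT count(*) FROM t").fetchone()[0])  # 0
```

[In PostgreSQL the aborted rows would additionally remain on disk until VACUUM reclaims them, as Nicolas notes.]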
Nicolas Barbier wrote:
> On 10/21/05, Richard Huxton <dev@archonet.com> wrote:
>> jeff sacksteder wrote:
>>> Are there known limits to how many rows can be inserted by one transaction,
>>> or does that just reflect the already documented row, table and database
>>> limits?
>>
>> Well, the system will need to be able to roll back the transaction, so
>> at some point your system will grind to a halt. I shouldn't be surprised
>> if there was some counter that couldn't cope beyond 2^31 rows too but
>> no-one's found it yet.
>
> Just by not indicating that a transaction did commit, others will keep
> ignoring its rows. There is nothing to rollback here, thanks to MVCC.
> Of course, those rows will still be physically present until the next
> VACUUM.

D'oh - thanks Nicolas. Due to brain malfunction I'd typed "roll back"
instead of "commit". What I was trying to get at was that if you commit a
zillion rows on your laptop, you can sit there all day with your I/O
saturated while the WAL writes it all out.

--
Richard Huxton
Archonet Ltd
On Fri, Oct 21, 2005 at 12:25:36PM +0200, Nicolas Barbier wrote:
> On 10/21/05, Richard Huxton <dev@archonet.com> wrote:
> > jeff sacksteder wrote:
> > > Are there known limits to how many rows can be inserted by one transaction,
> >
> > Well, the system will need to be able to roll back the transaction, ...
>
> Just by not indicating that a transaction did commit, others will keep
> ignoring its rows. There is nothing to rollback here, thanks to MVCC.

Also, I've just done 300 million row INSERTs, from a 'mysqldump', inside a
single transaction. So there don't appear to be any implementation problems
that would stop you from reaching the theoretical table limits.

-Mark
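[The pattern Mark describes — wrapping an entire bulk load in a single transaction so nothing becomes visible until the final commit — can be sketched at much smaller scale with Python's stdlib sqlite3 (100,000 rows instead of 300 million; the table name and payload are made up for the example):]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (id INTEGER, payload TEXT)")
conn.commit()

# One transaction around the whole load: the sqlite3 module begins it
# implicitly at the first INSERT, and the rows only become visible to
# other connections at commit.
rows = ((i, "x" * 10) for i in range(100_000))
conn.executemany("INSERT INTO big VALUES (?, ?)", rows)
conn.commit()

print(conn.execute("SELECT count(*) FROM big").fetchone()[0])  # 100000
```

[For a real PostgreSQL load of this size, COPY (or batched multi-row INSERTs) inside one transaction is the usual approach; the single-transaction structure is the same.]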
jeff sacksteder wrote:
> Are there known limits to how many rows can be inserted by one
> transaction, or does that just reflect the already documented row, table
> and database limits?

The source code contains this error string:
"cannot have more than 2^32-1 commands in a transaction"
(#: access/transam/xact.c:510)

HTH,
--
MaXX
jeff sacksteder <jsacksteder@gmail.com> writes:
> Are there known limits to how many rows can be inserted by one transaction,
> or does that just reflect the already documented row, table and database
> limits?

Offhand I think the only limit that might concern you is the CID limit of
2^32 SQL commands per transaction. This does not directly stop a single
command from processing any number of rows --- but, for example, if you
have a per-row trigger that contains 10 SQL commands, you could only fire
it about 400 million times within one transaction.

			regards, tom lane
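[The arithmetic behind the "about 400 million" figure is just the 32-bit CID budget divided by the number of commands each row costs. A quick back-of-the-envelope check (the 10-commands-per-trigger figure is the example from the message above, not a fixed constant):]

```python
# CommandIds are 32-bit, so a transaction has roughly 2**32 command slots.
cid_limit = 2**32
commands_per_trigger = 10  # example figure: SQL commands run per row

max_trigger_firings = cid_limit // commands_per_trigger
print(max_trigger_firings)  # 429496729, i.e. "about 400 million"
```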
MaXX <bs139412@skynet.be> writes:
> jeff sacksteder wrote:
>> Are there known limits to how many rows can be inserted by one
>> transaction, or does that just reflect the already documented row, table
>> and database limits?
>
> An error string in the source code stands:
> "cannot have more than 2^32-1 commands in a transaction"
> (#: access/transam/xact.c:510)

That means you couldn't do more than 2^32-1 INSERT statements. But that
wouldn't (in principle) prevent having each of the 2^32-1 commands use
COPY to insert 2^31 rows, which would presumably mean that the limit on
the number of rows insertable could be nearly 2^63.

--
let name="cbbrowne" and tld="cbbrowne.com" in String.concat "@" [name;tld];;
http://www.ntlug.org/~cbbrowne/x.html
"I am aware of the benefits of a micro kernel approach. However, the fact
remains that Linux is here, and GNU isn't --- and people have been working
on Hurd for a lot longer than Linus has been working on Linux."
-- Ted T'so, 1992.
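[Chris's ceiling is easy to sanity-check: 2^32-1 commands times 2^31 rows per command lands just below 2^63 (exactly 2^63 - 2^31, not 2^63-1, though the order of magnitude is the point):]

```python
commands = 2**32 - 1   # CID limit: commands per transaction
rows_per_copy = 2**31  # hypothetical rows inserted by each COPY

total = commands * rows_per_copy
print(total == 2**63 - 2**31)  # True: just under 2**63 rows
```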
Chris Browne <cbbrowne@acm.org> writes:
> MaXX <bs139412@skynet.be> writes:
>> An error string in the source code stands:
>> "cannot have more than 2^32-1 commands in a transaction"

> That means you couldn't do more than 2^32-1 INSERT statements.

Right.

> But that wouldn't (in principle) prevent having each of the 2^32-1 commands
> use COPY to insert 2^31 rows, which would presumably mean that the
> limit on the number of rows insertable could be 2^63-1.

In theory a single COPY command can insert an unlimited number of rows, or
anyway up till you run up against the N-terabyte table size limit. (Its
internal row counter would wrap around of course, but I'm pretty sure
that's only used for error reporting.)

The case where the CID limit would affect a COPY is if you have triggers
on the table being inserted into, and the triggers themselves perform SQL
commands --- then you'd run into the CID limit in some number of rows
depending on how many commands per trigger function.

			regards, tom lane