Re: Fastest way to insert/update many rows
| From | Daniele Varrazzo |
|---|---|
| Subject | Re: Fastest way to insert/update many rows |
| Date | |
| Msg-id | CA+mi_8bGfzhEr0+t2FjZGDRnQP45MC1E3C_djdBem_xZQXsD8A@mail.gmail.com |
| In response to | Fastest way to insert/update many rows (pascal@ensieve.org) |
| List | psycopg |
On Tue, Aug 12, 2014 at 2:46 PM, <pascal@ensieve.org> wrote:

> Hi,
>
> I'd like psycopg2 to fetch a large number of rows (hundreds of
> millions), perform some computations and put them back into the
> database.
>
> I can fetch about 130k rows/sec with
>
>     cur.execute('select * from stuff')
>     keyvals = list(cur)
>
> and 100k/sec with
>
>     f = io.StringIO()
>     cur.copy_to(f, 'stuff')
>     f.seek(0)
>     keyvals = list(tuple(map(int, l.split('\t'))) for l in f)
>
> but inserting using
>
>     cur.executemany('insert into stuff values (%s, %s)', keyvals)
>
> only has a throughput of 23k/sec, with ca. 20% CPU used by Python and
> 80% by Postgres, while
>
>     cur.copy_from(io.StringIO('\n'.join('{}\t{}'.format(*r) for r in keyvals)), 'stuff')
>
> manages to insert 1.8M/sec.
>
> I can't quite believe that generating a string should be the fastest
> method; am I missing something?

No, COPY is by far the fastest method of loading data into Postgres, but psycopg doesn't currently offer adaptation support for copy: in the current version, composing a string in a file-like object is what it takes.

> What I'd really like to do is
>
>     cur.executemany('update stuff set value = %s where key = %s', ...)
>
> but that was orders of magnitude slower still, probably because the
> order is random, so it performs an index lookup for each key.
> Populating a temporary table and using 'update stuff ... from
> temptable ...' is quicker.

Yes: using COPY to populate a temp table and then updating via a query is the fastest way to bulk-update in Postgres, regardless of the psycopg usage.

> I have to set one column in each row; is there a way to update through
> a cursor, like PL/pgSQL's
>
>     update <table> set ... where current of <cursor>
>
> i.e. to iterate through the rows in the most efficient way for the
> database?
>
> Or would it be wiser to use PL/Python for this kind of task instead?

Well, if you can do everything server-side only, using a server-side language would save a double roundtrip for the whole dataset and should be a win. You are probably I/O bound anyway, so PL/pgSQL or PL/Python shouldn't make much difference performance-wise: you may choose on other language features (Python is more expressive, but it's an untrusted language, and from the docs I don't think it supports cursors for update).

--
Daniele
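For reference, a minimal sketch of the copy-into-temp-table bulk update endorsed above. Table and column names (`stuff`, `key`, `value`) follow the original post; the DSN and the sample data are placeholder assumptions:

```python
import io
import psycopg2

conn = psycopg2.connect("dbname=test")  # assumed DSN
cur = conn.cursor()

# (key, value) pairs computed client-side; placeholder data
keyvals = [(1, 10), (2, 20), (3, 30)]

# 1. Create a temp table matching the target's layout; it is dropped
#    automatically when the transaction commits.
cur.execute("create temp table stuff_tmp (key int, value int) on commit drop")

# 2. Bulk-load it with COPY: compose the tab-separated text client-side,
#    as discussed above (copy_from splits on tab by default).
buf = io.StringIO('\n'.join('{}\t{}'.format(k, v) for k, v in keyvals))
cur.copy_from(buf, 'stuff_tmp')

# Optional: give the planner statistics for the freshly loaded table.
cur.execute("analyze stuff_tmp")

# 3. Apply all changes with a single set-based UPDATE ... FROM.
cur.execute("""
    update stuff
       set value = t.value
      from stuff_tmp t
     where stuff.key = t.key
""")
conn.commit()
```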
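And a sketch of the server-side alternative discussed at the end: a PL/pgSQL function that walks a FOR UPDATE cursor and updates each row WHERE CURRENT OF, saving the client roundtrip for the whole dataset. The function name and the doubling logic are illustrative assumptions:

```python
# Create and run a hypothetical PL/pgSQL function from psycopg2.
cur.execute("""
    create or replace function bump_values() returns void as $$
    declare
        c cursor for select key, value from stuff for update;
    begin
        for r in c loop
            -- the cursor is positioned on the row just fetched
            update stuff set value = r.value * 2 where current of c;
        end loop;
    end;
    $$ language plpgsql
""")
cur.execute("select bump_values()")
conn.commit()
```

Whether this beats the temp-table approach depends on the workload; as noted above, the job is probably I/O bound either way.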