Re: Duplicate key insert question
From | Dann Corbit |
---|---|
Subject | Re: Duplicate key insert question |
Date | |
Msg-id | D90A5A6C612A39408103E6ECDD77B829408B50@voyager.corporate.connx.com |
In reply to | Duplicate key insert question (Jean-Christian Imbeault <jc@mega-bucks.co.jp>) |
List | pgsql-general |
> -----Original Message-----
> From: Jean-Christian Imbeault [mailto:jc@mega-bucks.co.jp]
> Sent: Tuesday, July 01, 2003 5:10 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Duplicate key insert question
>
> I have a table with a primary field and a few other fields. What is the
> fastest way to do an insert into that table, assuming that sometimes I
> might try to insert a record with a duplicate primary key and want that
> to fail?
>
> I know that if I try a plain insert this will work, but in the case
> where I am trying to insert a duplicate key, the insert fails (as it
> should) and an error is logged.
>
> I could first do a check to see if there is already an entry with the
> same key as the one I am trying to insert, but that would mean quite a
> few extra operations.
>
> Is there a quick and clean way of doing something like:
>
> INSERT INTO table VALUES (...) IF there isn't already a row with pkey=x
>
> Thanks,
>
> Jean-Christian Imbeault
>
> PS The reason I am asking is that an open source project is using MySQL
> as their DB and they have a port to PG that isn't very clean, b/c the DB
> code keeps trying to insert duplicate primary keys. According to them,
> MySQL doesn't complain and just drops the insert, whereas PG (as is
> right) complains. I've offered to clean up their PG insertion code, but
> they say that they don't want too many extra checks, as their app writes
> to the DB a *lot* and any extra check is going to slow down the
> application noticeably ...

SQL*Server has a nice feature for this. When you create an index, you can
ignore duplicate key attempts (nothing happens, and an informational-level
warning is raised). I find it an extremely useful feature for things like
creating dictionaries or domains.
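A minimal sketch of the single-statement conditional insert Jean-Christian is asking for, in PostgreSQL. The table name `accounts` and its columns are hypothetical, chosen only for illustration. The INSERT ... SELECT ... WHERE NOT EXISTS form avoids the duplicate-key error in the common case, though two concurrent sessions can still race, in which case one of them will hit the unique-constraint error anyway:

```sql
-- Hypothetical table used only to illustrate the technique.
CREATE TABLE accounts (
    id      integer PRIMARY KEY,
    balance numeric NOT NULL
);

-- Insert the row only if no row with id = 42 exists yet;
-- if one does, the SELECT produces zero rows and nothing is inserted.
INSERT INTO accounts (id, balance)
SELECT 42, 100.0
WHERE NOT EXISTS (SELECT 1 FROM accounts WHERE id = 42);
```

In PostgreSQL releases from 9.5 onward, `INSERT ... ON CONFLICT (id) DO NOTHING` expresses the same intent without the race, but that clause did not exist at the time of this thread.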
============================================================================

Syntax:

    CREATE [UNIQUE] [CLUSTERED | NONCLUSTERED] INDEX index_name
        ON table (column [,...n])
    [WITH
        [PAD_INDEX]
        [[,] FILLFACTOR = fillfactor]
        [[,] IGNORE_DUP_KEY]
        [[,] DROP_EXISTING]
        [[,] STATISTICS_NORECOMPUTE]
    ]
    [ON filegroup]

IGNORE_DUP_KEY

Controls what happens when an attempt is made to insert a duplicate key
value into a column that is part of a unique clustered index. If
IGNORE_DUP_KEY was specified for the index and an INSERT statement that
creates a duplicate key is executed, SQL Server issues a warning message
and ignores (does not insert) the duplicate row. If IGNORE_DUP_KEY was not
specified for the index, SQL Server issues an error message and rolls back
the entire INSERT statement.

A unique index cannot be created on a column that already includes
duplicate values, whether or not IGNORE_DUP_KEY is set. If attempted, SQL
Server displays an error message and lists the duplicate values. Eliminate
the duplicate values before creating a unique index on the column.

E. Use the IGNORE_DUP_KEY option

This example creates a unique clustered index on the emp_pay table. If a
duplicate key is entered, the INSERT or UPDATE statement is ignored.

    SET NOCOUNT ON
    USE pubs
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
               WHERE TABLE_NAME = 'emp_pay')
        DROP TABLE emp_pay
    GO
    USE pubs
    IF EXISTS (SELECT name FROM sysindexes
               WHERE name = 'employeeID_ind')
        DROP INDEX emp_pay.employeeID_ind
    GO
    USE pubs
    GO
    CREATE TABLE emp_pay
    (
        employeeID int NOT NULL,
        base_pay   money NOT NULL,
        commission decimal(2, 2) NOT NULL
    )
    INSERT emp_pay VALUES (1, 500, .10)
    INSERT emp_pay VALUES (2, 1000, .05)
    INSERT emp_pay VALUES (3, 800, .07)
    INSERT emp_pay VALUES (5, 1500, .03)
    INSERT emp_pay VALUES (9, 750, .06)
    GO
    SET NOCOUNT OFF
    GO
    CREATE UNIQUE CLUSTERED INDEX employeeID_ind
        ON emp_pay(employeeID)
        WITH IGNORE_DUP_KEY

============================================================================
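For comparison, one way to approximate the IGNORE_DUP_KEY behavior in PostgreSQL is a rewrite rule that turns a duplicate-key INSERT into a no-op. This is a sketch only: the table mirrors the emp_pay example above, and rules have their own caveats (the EXISTS check and the insert are not atomic under concurrent writers):

```sql
-- Table mirroring the SQL Server example, with a primary key on employeeID.
CREATE TABLE emp_pay (
    employeeID int PRIMARY KEY,
    base_pay   numeric NOT NULL,
    commission numeric(3, 2) NOT NULL
);

-- When an incoming row's key already exists, do nothing instead of
-- inserting, so no duplicate-key error is raised.
CREATE RULE ignore_dup_emp AS
    ON INSERT TO emp_pay
    WHERE EXISTS (SELECT 1 FROM emp_pay
                  WHERE employeeID = NEW.employeeID)
    DO INSTEAD NOTHING;

INSERT INTO emp_pay VALUES (1, 500, 0.10);
INSERT INTO emp_pay VALUES (1, 999, 0.20);  -- silently dropped by the rule
```

Unlike SQL Server's index option, this emits no warning at all; whether that silence is acceptable depends on the application.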