I've got a table with 36+ million rows. I've been asked to modify the
table and add in an identity column. The code I used caused SQL to
lock up and it maxed out the log files. :)
The code I used is:
Begin Transaction
Alter Table ODS_DAILY_SALES_POS
ADD ODS_DAILY_SALES_POS_ID BigInt NOT NULL IDENTITY (1,1)
Commit
Is there a way to break up the code? Maybe only do a few million
records at a time? Or is there a way to do this without locking
anything up?
Thanks,
Jennifer

|||

>> I've been asked to modify the table and add in an identity column.
The code I used caused SQL to lock up and it maxed out the log files.
<<
Do you have any idea why anyone would want to do this in the first
place? The idiot does not seem to understand that IDENTITY has no
meaning in a data model.

|||

[posted and mailed, please reply in news]
Jennifer (jennifer1970@hotmail.com) writes:
> I've got a table with 36+ million rows. I've been asked to modify the
> table and add in an identity column. The code I used caused SQL to
> lock up and it maxed out the log files. :)
> The code I used is:
> Begin Transaction
> Alter Table ODS_DAILY_SALES_POS
> ADD ODS_DAILY_SALES_POS_ID BigInt NOT NULL IDENTITY (1,1)
> Commit
> Is there a way to break up the code? Maybe only do a few million
> records at a time? Or is there a way to do this without locking
> anything up?
The alternative is to rename the table and all its constraints,
create the new table with constraints, triggers and indexes, and
insert the data into that table. You can then do a loop which takes
a reasonable number of rows at a time. That requires, however, that
you can somehow identify which rows you have copied and which you
have not. My advice is to perform the loop on the clustered index of
the table. Once the data has been copied, move referencing foreign
keys to point to the new table, and then drop the old table.
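Something like this, assuming a hypothetical clustered key column
SaleID on the old table (all names here are placeholders; adjust to
your schema and add the remaining columns):

CREATE TABLE ODS_DAILY_SALES_POS_NEW (
   ODS_DAILY_SALES_POS_ID bigint IDENTITY(1,1) NOT NULL,
   SaleID                 int    NOT NULL,  -- hypothetical clustered key
   SaleAmount             money  NOT NULL   -- ... remaining columns ...
)

DECLARE @lastid int
SELECT @lastid = 0

WHILE EXISTS (SELECT * FROM ODS_DAILY_SALES_POS WHERE SaleID > @lastid)
BEGIN
   -- Each batch is its own short transaction, so the log gets a
   -- chance to truncate between batches.
   INSERT ODS_DAILY_SALES_POS_NEW (SaleID, SaleAmount)
      SELECT TOP 1000000 SaleID, SaleAmount
      FROM   ODS_DAILY_SALES_POS
      WHERE  SaleID > @lastid
      ORDER  BY SaleID

   SELECT @lastid = MAX(SaleID) FROM ODS_DAILY_SALES_POS_NEW
END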
The advantage of this approach is that the strain on the log is
smaller, particularly if you permit yourself to switch to simple
recovery while you are running the move.
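For instance (the database name is a placeholder; switching breaks
the log-backup chain, so take a full backup when you switch back):

ALTER DATABASE MyODS SET RECOVERY SIMPLE
-- ... run the copy loop here ...
ALTER DATABASE MyODS SET RECOVERY FULL
BACKUP DATABASE MyODS TO DISK = 'D:\Backup\MyODS.bak'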
Note: above I said that you should recreate triggers and indexes. It
may be a good idea to do that after the copying is completed, but
just don't forget it. (You should package everything in a script and
first test it in a database where the table is smaller.)
A variation is to bulk the data out, and then bulk it back in while
the table has no indexes. If you have bulk_logged recovery, this load
will be very fast. Personally, I prefer to create the clustered index
first, before I load, since building the index takes its time too.
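From the command line it could look like this (database, file path
and the -b batch size are placeholders; -n is native format, -T a
trusted connection, and the TABLOCK hint is one of the conditions
for a minimally logged load):

bcp MyODS..ODS_DAILY_SALES_POS out D:\temp\sales.bcp -n -T
bcp MyODS..ODS_DAILY_SALES_POS_NEW in D:\temp\sales.bcp -n -T -b 100000 -h "TABLOCK"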
Yet another variation is to use SELECT INTO (with which you can use
the IDENTITY function). SELECT INTO is also minimally logged when
you have bulk_logged recovery.
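For instance:

SELECT IDENTITY(bigint, 1, 1) AS ODS_DAILY_SALES_POS_ID, *
INTO   ODS_DAILY_SALES_POS_NEW
FROM   ODS_DAILY_SALES_POS

(You still have to recreate indexes, constraints and triggers on the
new table afterwards.)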
Finally, you have set your column to bigint. With 36 million rows,
you have a long way to go before the 31 bits of a regular int become
too few for you.
--
Erland Sommarskog, SQL Server MVP, sommar@algonet.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp