large updates and deletes
on 17.01.2006 17:33:07 by sigzero
I have been handed a couple of requests to change some data. One is an
update on 3.6 million records and the other is two different deletions of
over 16 million rows of data. I need to be able to do these in batches as I
am told that Oracle will "blow up" if I do them in one shot.
I would like to do them in increments of 50,000 then a commit...then the
next 50,000 and then another commit.
What is the best way to do this?
Robert
Re: large updates and deletes
on 17.01.2006 19:49:45 by Martin.Hall
Robert Hicks wrote:
>I have been handed a couple of requests to change some data. One is an
>update on 3.6 million records and the other is two different deletions of
>over 16 million rows of data. I need to be able to do these in batches as I
>am told that Oracle will "blow up" if I do them in one shot.
>
>I would like to do them in increments of 50,000 then a commit...then the
>next 50,000 and then another commit.
>
>What is the best way to do this?
>
>Robert
>
>
perhaps not strictly dbi, but I would bundle the delete statements into
a pl/sql block, which means you don't get a lot of back and forth. Off
the top of my head, *without testing* (or even a whole lot of thought),
it could be something like:
declare
    rows_processed number;
    batch_size     number := 50000;
begin
    rows_processed := batch_size;
    while rows_processed = batch_size loop
        delete from table where rownum < batch_size + 1;
        rows_processed := SQL%ROWCOUNT;
        commit;
    end loop;
end;
The alternative is to go ask your DBA if he or she has a rollback
segment they use for large transactions.
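If there is one, you can point your session at it right before the big
statement. A minimal, untested sketch (BIG_RBS is a made-up segment name,
and the DSN, table, and predicate are placeholders too):

use DBI;

my $dbh = DBI->connect( 'dbi:Oracle:orcl', $user, $pass,
                        { AutoCommit => 0, RaiseError => 1 } );

# must be the first statement of the transaction
$dbh->do( 'set transaction use rollback segment BIG_RBS' );
$dbh->do( q{delete from big_table where archived = 'Y'} );
$dbh->commit;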
Martin
Re: Re: large updates and deletes
on 18.01.2006 17:38:30 by murkyd
>>I have been handed a couple of requests to change some data. One is an
>>update on 3.6 million records and the other is two different deletions of
>>over 16 million rows of data. I need to be able to do these in batches as I
>>am told that Oracle will "blow up" if I do them in one shot.
>>
>>I would like to do them in increments of 50,000 then a commit...then the
>>next 50,000 and then another commit.
>>
>>What is the best way to do this?
>>
>>Robert
>>
>>
>perhaps not strictly dbi, but I would bundle the delete statements into
>a pl/sql block, which means you don't get a lot of back and forth. Off
>the top of my head, *without testing* (or even a whole lot of thought),
>it could be something like:
>
>declare
>    rows_processed number;
>    batch_size     number := 50000;
>begin
>    rows_processed := batch_size;
>    while rows_processed = batch_size loop
>        delete from table where rownum < batch_size + 1;
>        rows_processed := SQL%ROWCOUNT;
>        commit;
>    end loop;
>end;
This reminds one of Knuth's famous quip, "Premature optimization is
the root of all evil."
This optimization might have made some sense decades ago over a serial line, but
not today. It is saving 320 single-packet round trips, which is an
infinitesimal amount of traffic.
Mark
Re: Re: large updates and deletes
on 18.01.2006 19:38:35 by amonotod
---- mark d wrote:
> This optimization might have made some sense decades ago over a serial line, but
> not today. It is saving 320 single-packet round trips, which is an
> infinitesimal amount of traffic.
Well, traffic wise, you're right. But, it also saves parsing all those rows out of the DB and through the DBI; this way the only process that has to work with the data is the oracle engine. Use DBI where appropriate... Right tool for the job, and all that...
> Mark
v/r,
amonotod
--
`\|||/ amonotod@ | sun|perl|windows
(@@) charter.net | sysadmin|dba
ooO_(_)_Ooo____________________________________
_____|_____|_____|_____|_____|_____|_____|_____|
Re: Re: large updates and deletes
on 18.01.2006 20:04:52 by murkyd
On 1/18/06, amonotod@charter.net wrote:
> ---- mark d wrote:
> > This optimization might have made some sense decades ago over a serial line, but
> > not today. It is saving 320 single-packet round trips, which is an
> > infinitesimal amount of traffic.
>
> Well, traffic wise, you're right. But, it also saves parsing all those rows out of the
> DB and through the DBI; this way the only process that has to work with the data
> is the oracle engine. Use DBI where appropriate... Right tool for the job, and all that...
I have no idea what "parsing all those rows out of the DB and through the DBI"
means, though it sounds like you think row data is copied from the DB to
the DBI in the course of a sql DELETE statement (that would be a fundamental
misunderstanding of SQL and DBI.)
In any case, you're mistaken that the PL/SQL saves anything but a few packets.
To be explicit, the proposed PL/SQL was:
>declare
>    rows_processed number;
>    batch_size     number := 50000;
>begin
>    rows_processed := batch_size;
>    while rows_processed = batch_size loop
>        delete from table where rownum < batch_size + 1;
>        rows_processed := SQL%ROWCOUNT;
>        commit;
>    end loop;
>end;
The equivalent DBI code is:
$sth = $dbh->prepare("delete from TABLE where rownum < 50001");
while ( ($deleted = $sth->execute) > 0) {
    printf "deleted $deleted\n";
}
The only difference between the two is a few network packets.
Re: large updates and deletes
on 18.01.2006 21:54:56 by sigzero
mark d wrote:
> On 1/18/06, amonotod@charter.net wrote:
>> ---- mark d wrote:
>>> This optimization might have made some sense decades ago over a serial line, but
>>> not today. It is saving 320 single-packet round trips, which is an
>>> infinitesimal amount of traffic.
>> Well, traffic wise, you're right. But, it also saves parsing all those rows out of the
>> DB and through the DBI; this way the only process that has to work with the data
>> is the oracle engine. Use DBI where appropriate... Right tool for the job, and all that...
>
> I have no idea what "parsing all those rows out of the DB and through the DBI"
> means, though it sounds like you think row data is copied from the DB to
> the DBI in the course of a sql DELETE statement (that would be a fundamental
> misunderstanding of SQL and DBI.)
>
> In any case, you're mistaken that the PL/SQL saves anything but a few packets.
> To be explicit, the proposed PL/SQL was:
>
>> declare
>> rows_processed number;
>> batch_size number := 50000;
>> begin
>> rows_processed := batch_size;
>> while rows_processed = batch_size loop
>> delete from table where rownum < batch_size + 1;
>> rows_processed := SQL%ROWCOUNT;
>> commit;
>> end loop;
>> end;
>
> The equivalent DBI code is:
>
> $sth = $dbh->prepare("delete from TABLE where rownum < 50001");
> while ( ($deleted = $sth->execute) > 0) {
>     printf "deleted $deleted\n";
> }
>
> The only difference between the two is a few network packets.
Is that the only difference? I read it, and I could be wrong, as saying
that your SQL will stop when rownum hits 50001, while his will keep going
in batches of 50000 until there isn't any more to delete given the delete
statement.
Yours is a simple delete. I need to break up my deletes into manageable
sizes, do the delete, do the next batch, etc.
On the delete side I have about 16 million records that will be deleted.
I have two of those to do. I have one update that changes a few values
on about 3.1 million records (not the whole table).
Robert
Re: large updates and deletes
on 18.01.2006 23:32:23 by murkyd
On 1/18/06, Robert Hicks wrote:
> mark d wrote:
> > On 1/18/06, amonotod@charter.net wrote:
> >> ---- mark d wrote:
> >>> This optimization might have made some sense decades ago over a serial line, but
> >>> not today. It is saving 320 single-packet round trips, which is an
> >>> infinitesimal amount of traffic.
> >> Well, traffic wise, you're right. But, it also saves parsing all those rows out of the
> >> DB and through the DBI; this way the only process that has to work with the data
> >> is the oracle engine. Use DBI where appropriate... Right tool for the job, and all that...
> >
> > I have no idea what "parsing all those rows out of the DB and through the DBI"
> > means, though it sounds like you think row data is copied from the DB to
> > the DBI in the course of a sql DELETE statement (that would be a fundamental
> > misunderstanding of SQL and DBI.)
> >
> > In any case, you're mistaken that the PL/SQL saves anything but a few packets.
> > To be explicit, the proposed PL/SQL was:
> >
> >> declare
> >>     rows_processed number;
> >>     batch_size     number := 50000;
> >> begin
> >>     rows_processed := batch_size;
> >>     while rows_processed = batch_size loop
> >>         delete from table where rownum < batch_size + 1;
> >>         rows_processed := SQL%ROWCOUNT;
> >>         commit;
> >>     end loop;
> >> end;
> >
> > The equivalent DBI code is:
> >
> > $sth = $dbh->prepare("delete from TABLE where rownum < 50001");
> > while ( ($deleted = $sth->execute) > 0) {
> >     printf "deleted $deleted\n";
> > }
> >
> > The only difference between the two is a few network packets.
>
> Is that the only difference? I read it, and I could be wrong, as saying
> that your SQL will stop when rownum hits 50001, while his will keep going
> in batches of 50000 until there isn't any more to delete given the delete
> statement.
No. Look more closely. The execute is inside a while loop.
The execute() method returns the number of "affected" rows. For a DELETE,
this means the number of deleted rows. The conditional test inside the loop
checks to see when more than zero rows have been deleted, so it repeats
the delete until zero rows are left.
Here's a real-life sample:
code:
use Time::HiRes qw(time);   # needed for the fractional seconds below

$batch = 50001;

$sth = $dbh->prepare("delete from SNAFU where rownum < $batch");

printf "Rows in table = %d\n",
    $dbh->selectall_arrayref("select count(*) from SNAFU")->[0]->[0];

$t = time();
while (($deleted = $sth->execute) > 0) {

    printf "batch %d: deleted %d rows in %f secs\n",
        ++$i, $deleted, time()-$t;

    $t = time();
}
Output:
Rows in table = 700000
batch 1: deleted 50000 rows in 0.850589 secs
batch 2: deleted 50000 rows in 0.840419 secs
batch 3: deleted 50000 rows in 0.931219 secs
batch 4: deleted 50000 rows in 0.835611 secs
batch 5: deleted 50000 rows in 0.841162 secs
batch 6: deleted 50000 rows in 0.834864 secs
batch 7: deleted 50000 rows in 3.877935 secs
batch 8: deleted 50000 rows in 0.871310 secs
batch 9: deleted 50000 rows in 0.826238 secs
batch 10: deleted 50000 rows in 0.825013 secs
batch 11: deleted 50000 rows in 0.829521 secs
batch 12: deleted 50000 rows in 0.838740 secs
batch 13: deleted 50000 rows in 0.830472 secs
batch 14: deleted 50000 rows in 0.881580 secs
Done
> Yours is a simple delete. I need to break up my deletes into manageable
> sizes, do the delete, do the next batch, etc.
Note that unless you have a meaningful basis for, and definition of,
"manageable", this is all a complete waste of time anyway. (I originally
only chimed in to note the pointless "optimization", but I guess I'll
comment on the original question too :)
"large" is highly relative. 16 million is "small" to me. It might be
large for an MS Access rolodex, but it's certainly not large by any
objective measure.
> On the delete side I have about 16 million records that will be deleted.
> I have two of those to do. I have one update that changes a few values
> on about 3.1 million records (not the whole table).
Unless you have good reason to believe it would be a problem, just issue
a single SQL statement and don't waste your time fooling around with
these hacks.
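For reference, the one-shot version really is just one statement from DBI;
an untested sketch (DSN, credentials, table, and predicate are all made up):

use DBI;

my $dbh = DBI->connect( 'dbi:Oracle:orcl', $user, $pass,
                        { AutoCommit => 0, RaiseError => 1 } );

my $deleted = $dbh->do( q{delete from big_table where archived = 'Y'} );
printf "deleted %d rows\n", $deleted;
$dbh->commit;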
Mark
RE: large updates and deletes
on 19.01.2006 01:29:07 by Andy
> I have been handed a couple of requests to change some data. One is an
> update on 3.6 million records and the other is two different deletions of
> over 16 million rows of data. I need to be able to do these in batches as I
> am told that Oracle will "blow up" if I do them in one shot.
"Blow up" in what way? Multi-million row deletes and updates are entirely
possible if Oracle is properly configured. More important than the number of
rows is the actual byte size of updates you're making - how much are you
processing? Unless you're pushing into the several gigabyte range, or
there's significant concurrent activity on the same data by other users
whilst you're processing it, or it's a very resource-constrained database,
it's usually possible to do it in one shot. The main thing to consider is
undo space (or rollback space if on an older version of Oracle, or 9i
configured the "old" way).
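If you want a rough idea of how much undo room there is before you try it,
an untested check along these lines is a starting point (UNDOTBS1 is only a
guess at the undo tablespace name; it varies per database, so ask your DBA):

# assumes automatic undo management, a guessed tablespace name, and
# select privilege on the DBA views
my ($undo_mb) = $dbh->selectrow_array( q{
    select sum(bytes)/1024/1024
    from   dba_data_files
    where  tablespace_name = 'UNDOTBS1'
} );
printf "undo tablespace size: %.0f MB\n", $undo_mb;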
> I would like to do them in increments of 50,000 then a commit...then the
> next 50,000 and then another commit.
Incremental commits carry their own risk - the infamous "snapshot too old",
where undo information required to reconstruct the read-consistent view from
the start of the query is overwritten; assuming you're using a select
statement to work out which rows to update or delete.
> What is the best way to do this?
As a single statement if possible, only resorting to more complicated
measures if you actually know you need them.
If you're sure you actually do then it depends on how you're determining
which rows to update or delete - whether you have a select query identifying
the rows, or whether you can limit it with "rownum". If you're driving from
a select statement, you'll want to use bulk features - such as BULK COLLECT
in PL/SQL.
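For example, a BULK COLLECT driven delete could look something like the
following untested sketch, run as an anonymous PL/SQL block from DBI
(big_table, id, and the status predicate are invented). Note that the
per-batch commits are exactly where "snapshot too old" can bite, since the
driving cursor keeps reading the table it is deleting from:

$dbh->do( q{
    declare
        type t_ids is table of big_table.id%type;
        l_ids t_ids;
        cursor c is select id from big_table where status = 'DEAD';
    begin
        open c;
        loop
            fetch c bulk collect into l_ids limit 50000;
            exit when l_ids.count = 0;
            forall i in 1 .. l_ids.count
                delete from big_table where id = l_ids(i);
            commit;  -- committing across fetches risks ORA-01555
        end loop;
        close c;
    end;
} );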
--
Andy Hassall :: andy@andyh.co.uk :: http://www.andyh.co.uk
http://www.andyhsoftware.co.uk/space :: disk and FTP usage analysis tool
Re: large updates and deletes
on 19.01.2006 01:58:39 by kevindotcar
Hi all,
I'm not an Oracle person by ANY stretch of any imagination, but with SQL
Server you can insert the "new" records into a new table, drop the old
table, and rename the new table, thereby eliminating stuff in the rollback
transaction log.... Dunno if this is at all possible w/ Oracle, just
wondering if it's a valid course of action.
KC
On 1/18/06, Andy Hassall wrote:
>
> > I have been handed a couple of requests to change some data. One is an
> > update on 3.6 million records and the other is two different deletions of
> > over 16 million rows of data. I need to be able to do these in batches as I
> > am told that Oracle will "blow up" if I do them in one shot.
>
> "Blow up" in what way? Multi-million row deletes and updates are entirely
> possible if Oracle is properly configured. More important than the number of
> rows is the actual byte size of updates you're making - how much are you
> processing? Unless you're pushing into the several gigabyte range, or
> there's significant concurrent activity on the same data by other users
> whilst you're processing it, or it's a very resource-constrained database,
> it's usually possible to do it in one shot. The main thing to consider is
> undo space (or rollback space if on an older version of Oracle, or 9i
> configured the "old" way).
>
> > I would like to do them in increments of 50,000 then a commit...then the
> > next 50,000 and then another commit.
>
> Incremental commits carry their own risk - the infamous "snapshot too old",
> where undo information required to reconstruct the read-consistent view from
> the start of the query is overwritten; assuming you're using a select
> statement to work out which rows to update or delete.
>
> > What is the best way to do this?
>
> As a single statement if possible, only resorting to more complicated
> measures if you actually know you need them.
>
> If you're sure you actually do then it depends on how you're determining
> which rows to update or delete - whether you have a select query identifying
> the rows, or whether you can limit it with "rownum". If you're driving from
> a select statement, you'll want to use bulk features - such as BULK COLLECT
> in PL/SQL.
>
> --
> Andy Hassall :: andy@andyh.co.uk :: http://www.andyh.co.uk
> http://www.andyhsoftware.co.uk/space :: disk and FTP usage analysis tool
>
>
Re: large updates and deletes
on 19.01.2006 02:04:23 by murkyd
On 1/18/06, Kevin Carothers wrote:
> Hi all,
>
> I'm not an Oracle person by ANY stretch of any imagination, but with SQL
> Server, you can insert the "new" records into a new table, drop the old
> table, and rename the new table, thereby eliminating stuff in the rollback
> transaction log.... Dunno if this is at all possible w/ Oracle, just
> wondering if it's a valid course of action.
Nothing platform-specific I don't think:
create table FUBAR as select * from SNAFU where <rows to keep>;
drop table SNAFU;
rename FUBAR to SNAFU;
But this only works this easily with trivial data models (no
referential constraints, etc.)
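A DBI rendering of the same idea, with the cleanup hinted at (untested;
every name here is invented). Remember that create-table-as-select copies
only the rows, so indexes, grants, triggers, and constraints all have to be
recreated on the new table afterwards:

$dbh->do( q{create table FUBAR as select * from SNAFU where keep_flag = 'Y'} );
$dbh->do( q{drop table SNAFU} );
$dbh->do( q{rename FUBAR to SNAFU} );
$dbh->do( q{create index snafu_ix on SNAFU (id)} );   # ...and so on, per index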
Mark
Re: large updates and deletes
on 19.01.2006 03:15:00 by sigzero
Andy Hassall wrote:
>> I have been handed a couple of requests to change some data. One is an
>> update on 3.6 million records and the other is two different deletions of
>> over 16 million rows of data. I need to be able to do these in batches as I
>> am told that Oracle will "blow up" if I do them in one shot.
>
> "Blow up" in what way? Multi-million row deletes and updates are entirely
> possible if Oracle is properly configured. More important than the number of
> rows is the actual byte size of updates you're making - how much are you
> processing? Unless you're pushing into the several gigabyte range, or
> there's significant concurrent activity on the same data by other users
> whilst you're processing it, or it's a very resource-constrained database,
> it's usually possible to do it in one shot. The main thing to consider is
> undo space (or rollback space if on an older version of Oracle, or 9i
> configured the "old" way).
>
I am not sure, as I am not a DBA, but my guess is that it is done in the
"old" way. We have it configured according to the 3rd party application
specs, and knowing the 3rd party like I do... it wouldn't surprise me at all.
>> I would like to do them in increments of 50,000 then a commit...then the
>> next 50,000 and then another commit.
>
> Incremental commits carry their own risk - the infamous "snapshot too old",
> where undo information required to reconstruct the read-consistent view from
> the start of the query is overwritten; assuming you're using a select
> statement to work out which rows to update or delete.
>
>> What is the best way to do this?
>
> As a single statement if possible, only resorting to more complicated
> measures if you actually know you need them.
>
> If you're sure you actually do then it depends on how you're determining
> which rows to update or delete - whether you have a select query identifying
> the rows, or whether you can limit it with "rownum". If you're driving from
> a select statement, you'll want to use bulk features - such as BULK COLLECT
> in PL/SQL.
>
> --
> Andy Hassall :: andy@andyh.co.uk :: http://www.andyh.co.uk
> http://www.andyhsoftware.co.uk/space :: disk and FTP usage analysis tool
>
Thanks for the advice.
Robert
Re: large updates and deletes
on 20.01.2006 01:03:48 by lembark
-- Robert Hicks
> I have been handed a couple of requests to change some data. One is an
> update on 3.6 million records and the other is two different deletions of
> over 16 million rows of data. I need to be able to do these in batches as
> I am told that Oracle will "blow up" if I do them in one shot.
The issue is rollback/redo space. If you have too large
a transaction pending you can run out of space and
it will abort the transaction. You may have sufficient space
to blow off this many records, but it'll be a True Pain
to find out the hard way...
> I would like to do them in increments of 50,000 then a commit...then the
> next 50,000 and then another commit.
If you are updating the records using their primary
key then just use
for( @updatz )
{
    ...
    $dbh->commit unless ++$i % 10;  # every 10th update is a commit.
}
For the deletes, fry them in blocks based on a count.
I've forgotten if Oracle supports delete ... where count
< X to delete only that many rows.
If there is a surrogate (i.e., integer key) use that instead
of the rowid:
use DBI;

my $dbh
= DBI->connect
(
    'dbi:Oracle:blah:blah',
    $user,
    $pass,
    {
        AutoCommit => 1,    # each execute() commits on its own
        RaiseError => 1,
    }
);

# "table", "surrogate_id", and "whatever_else" are placeholders
my $sth
= $dbh->prepare
(
    q{
        delete from
            table
        where
            surrogate_id < ?
        and
            whatever_else
    }
);

# $table_rows and $max_deletes are assumed set above; start at
# $max_deletes so the first execute() has a full batch to delete
for( my $i = $max_deletes ; $i < $table_rows + $max_deletes ; $i += $max_deletes )
{
    $sth->execute( $i );
}
__END__
--
Steven Lembark 85-09 90th Street
Workhorse Computing Woodhaven, NY 11421
lembark@wrkhors.com 1 888 359 3508
Re: large updates and deletes
on 20.01.2006 14:33:26 by sigzero
mark d wrote:
> On 1/18/06, Robert Hicks wrote:
>> mark d wrote:
>>> On 1/18/06, amonotod@charter.net wrote:
>>>> ---- mark d wrote:
>>>>> This optimization might have made some sense decades ago over a serial line, but
>>>>> not today. It is saving 320 single-packet round trips, which is an
>>>>> infinitesimal amount of traffic.
>>>> Well, traffic wise, you're right. But, it also saves parsing all those rows out of the
>>>> DB and through the DBI; this way the only process that has to work with the data
>>>> is the oracle engine. Use DBI where appropriate... Right tool for the job, and all that...
>>> I have no idea what "parsing all those rows out of the DB and through the DBI"
>>> means, though it sounds like you think row data is copied from the DB to
>>> the DBI in the course of a sql DELETE statement (that would be a fundamental
>>> misunderstanding of SQL and DBI.)
>>>
>>> In any case, you're mistaken that the PL/SQL saves anything but a few packets.
>>> To be explicit, the proposed PL/SQL was:
>>>
>>>> declare
>>>> rows_processed number;
>>>> batch_size number := 50000;
>>>> begin
>>>> rows_processed := batch_size;
>>>> while rows_processed = batch_size loop
>>>> delete from table where rownum < batch_size + 1;
>>>> rows_processed := SQL%ROWCOUNT;
>>>> commit;
>>>> end loop;
>>>> end;
>>> The equivalent DBI code is:
>>>
>>> $sth = $dbh->prepare("delete from TABLE where rownum < 50001");
>>> while ( ($deleted = $sth->execute) > 0) {
>>>     printf "deleted $deleted\n";
>>> }
>>>
>>> The only difference between the two is a few network packets.
>> Is that the only difference? I read it, and I could be wrong, as saying
>> that your SQL will stop when rownum hits 50001, while his will keep going
>> in batches of 50000 until there isn't any more to delete given the delete
>> statement.
>
> No. Look more closely. The execute is inside a while loop.
>
> The execute() method returns the number of "affected" rows. For a DELETE,
> this means the number of deleted rows. The conditional test inside the loop
> checks to see when more than zero rows have been deleted, so it repeats
> the delete until zero rows are left.
>
> Here's a real-life sample:
>
> code:
>
> use Time::HiRes qw(time);   # needed for the fractional seconds below
>
> $batch = 50001;
>
> $sth = $dbh->prepare("delete from SNAFU where rownum < $batch");
>
> printf "Rows in table = %d\n",
> $dbh->selectall_arrayref("select count(*) from SNAFU")->[0]->[0];
>
> $t = time();
> while (($deleted = $sth->execute) > 0) {
>
> printf "batch %d: deleted %d rows in %f secs\n",
> ++$i, $deleted, time()-$t;
>
> $t = time();
> }
>
>
> Output:
>
>
> Rows in table = 700000
> batch 1: deleted 50000 rows in 0.850589 secs
> batch 2: deleted 50000 rows in 0.840419 secs
> batch 3: deleted 50000 rows in 0.931219 secs
> batch 4: deleted 50000 rows in 0.835611 secs
> batch 5: deleted 50000 rows in 0.841162 secs
> batch 6: deleted 50000 rows in 0.834864 secs
> batch 7: deleted 50000 rows in 3.877935 secs
> batch 8: deleted 50000 rows in 0.871310 secs
> batch 9: deleted 50000 rows in 0.826238 secs
> batch 10: deleted 50000 rows in 0.825013 secs
> batch 11: deleted 50000 rows in 0.829521 secs
> batch 12: deleted 50000 rows in 0.838740 secs
> batch 13: deleted 50000 rows in 0.830472 secs
> batch 14: deleted 50000 rows in 0.881580 secs
> Done
>
>
>
>> Yours is a simple delete. I need to break up my deletes into manageable
>> sizes, do the delete, do the next batch, etc.
>
> Note that unless you have a meaningful basis for, and definition of,
> "manageable", this is all a complete waste of time anyway. (I originally
> only chimed in to note the pointless "optimization", but I guess I'll
> comment on the original question too :)
>
> "large" is highly relative. 16 million is "small" to me. It might be
> large for an MS Access rolodex, but it's certainly not large by any
> objective measure.
>
>> On the delete side I have about 16 million records that will be deleted.
>> I have two of those to do. I have one update that changes a few values
>> on about 3.1 million records (not the whole table).
>
> Unless you have good reason to believe it would be a problem, just issue
> a single SQL statement and don't waste your time fooling around with
> these hacks.
>
That is probably what "I" would do but my DBA is a little jittery and
wants it broken up. :)
I do appreciate your comments though.
Robert
Re: large updates and deletes
on 21.01.2006 03:41:56 by sigzero
Steven Lembark wrote:
>
> -- Robert Hicks
>
>> I have been handed a couple of requests to change some data. One is an
>> update on 3.6 million records and the other is two different deletions of
>> over 16 million rows of data. I need to be able to do these in batches as
>> I am told that Oracle will "blow up" if I do them in one shot.
>
> The issue is rollback/redo space. If you have too large
> a transaction pending you can run out of space and
> it will abort the transaction. You may have sufficient space
> to blow off this many records, but it'll be a True Pain
> to find out the hard way...
>
The limit was 10,000, and he had to increase the rollback space or some
such to get it to work right.
Robert
Re: large updates and deletes
on 25.01.2006 20:45:41 by lembark
-- Robert Hicks
> Steven Lembark wrote:
>>
>> -- Robert Hicks
>>
>>> I have been handed a couple of requests to change some data. One is an
>>> update on 3.6 million records and the other is two different deletions
>>> of over 16 million rows of data. I need to be able to do these in
>>> batches as I am told that Oracle will "blow up" if I do them in one
>>> shot.
>>
>> The issue is rollback/redo space. If you have too large
>> a transaction pending you can run out of space and
>> it will abort the transaction. You may have sufficient space
>> to blow off this many records, but it'll be a True Pain
>> to find out the hard way...
>>
>
> The limit...10,000 and he had to increase the rollback space or some such
> to get it to work right.
One approach is to select the keys to be updated into a
temporary table, then update the scratch table with new
values, and use a join to update the records with a count < x
or rowid < x on the scratch space. After the update succeeds
you update a "finished" flag.
This also has the advantage of bookkeeping what was done.
Depending on how important the data is you might want to
select the existing records into scratch space, update
the scratch records with 'new value' fields, and use a
nested select to update the new table records. Net result
is a running history of what was updated when, with the
old and new values selectable.
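A rough, untested sketch of the first variant; every table and column name
here (customers, cust_scratch, cust_id, status, finished) is invented:

$dbh->{AutoCommit} = 0;

# bookkeeping table holding the keys to touch
$dbh->do( q{
    create table cust_scratch as
    select cust_id, 'N' as finished
    from   customers
    where  status = 'OBSOLETE'
} );
$dbh->commit;

while ( 1 ) {
    # stamp the next batch so both updates agree on its membership
    my $n = $dbh->do( q{
        update cust_scratch set finished = 'W'
        where  finished = 'N' and rownum <= 50000
    } );
    last if $n == 0;    # do() returns "0E0" (numerically 0) when done

    $dbh->do( q{
        update customers set status = 'ARCHIVED'
        where  cust_id in ( select cust_id from cust_scratch
                            where finished = 'W' )
    } );
    $dbh->do( q{update cust_scratch set finished = 'Y' where finished = 'W'} );
    $dbh->commit;
}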
--
Steven Lembark 85-09 90th Street
Workhorse Computing Woodhaven, NY 11421
lembark@wrkhors.com 1 888 359 3508