Transaction Isolation Level
on 03.11.2007 11:24:11 by dhek
Hi,
I have 1 SQL statement selecting data from various tables and updating other
tables.
The question then is how do I prevent other applications from modifying the
tables that I'm working on (that is while my transaction is being executed)?
I know that the isolation level should be either REPEATABLE READ or
SERIALIZABLE. But I need confirmation on whether one of these actually
solves my issue - prevents other applications/threads from
modifying/inserting data into the same tables that I'm working on.
Thanks in advance,
Daniel
Re: Transaction Isolation Level
on 03.11.2007 11:47:17 by Erland Sommarskog
dhek (dhek@dhek.dk) writes:
> I have 1 SQL statement selecting data from various tables and updating
> other tables.
>
> The question then is how do I prevent other applications from modifying
> the tables that I'm working on (that is while my transaction is being
> executed)?
>
>
> I know that the isolation level should be either REPEATABLE READ or
> SERIALIZABLE. But I need confirmation on if one of these actually solve my
> issue - prevents other applications/threads from modifying/inserting data
> into the same tables that I'm working on.
It's difficult to give a single answer, since I don't know your exact
requirements, so I have to answer in generic terms.
If you want a consistent snapshot of how the data looks in this precise
moment, the isolation level you should use is snapshot isolation. Snapshot
isolation is available only in SQL 2005 and later. Furthermore the database
must be configured to permit snapshot isolation. With snapshot isolation, a
snapshot is created when the transaction starts, or at the latest when you
start to read data. If data is updated while your query runs, you will not see these
updates. This gives you a consistent view - but it may also give you
outdated data, depending on how you look at it.
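In T-SQL, enabling and using it looks roughly like this - a sketch only, with the database name MyDb made up:

```sql
-- One-time configuration (requires ALTER DATABASE permission):
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON

-- Then, in the session that reads:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRANSACTION
   -- All reads see the data as of the start of the transaction
   -- (or at the latest, the first read), regardless of concurrent
   -- updates, and readers do not block writers.
   SELECT ...
COMMIT TRANSACTION
```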
On SQL 2000, snapshot isolation is not available, and the only foolproof
way to get consistent data is to set the database in single-user mode.
In the default isolation level, READ COMMITTED, if you read the same
row twice, you may get different results in different accesses. For
instance, if you run:
SELECT O.OrderID, E.EmployeeID, E.LastName
FROM Orders O
JOIN Employees E ON O.EmployeeID = E.EmployeeID
You may see different last names for the same employee ID if the query
plan uses a loop join, and the last name is updated while the query is
running.
In the next level, REPEATABLE READ, locks are held, and you are guaranteed
that reading the same row twice will yield the same result. However, if
the last name of employee 8 was Grønkjær when the query started, and
updated to Gravesen before you have read any orders with employee 8,
you would see Gravesen in the result set.
SERIALIZABLE adds protection against "phantom inserts", so if you read the
same range twice, you will get the same result. That is, if you run
SELECT MAX(OrderID) FROM Orders twice in the same serializable transaction,
you will get the same result. But if an order is added after you started the
transaction, but before your query runs, the order will show up.
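As a sketch of that behaviour, reusing the Orders table from the earlier example:

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
   -- Takes key-range locks on the scanned range, so no one can
   -- commit an insert into that range until the transaction ends.
   SELECT MAX(OrderID) FROM Orders
   -- ... other work ...
   -- Guaranteed to return the same value as the first SELECT.
   SELECT MAX(OrderID) FROM Orders
COMMIT TRANSACTION
```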
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
Re: Transaction Isolation Level
on 03.11.2007 12:32:45 by dhek
Well, my issue is that I'm reading data from tables A, B, C and D and
updating table E.
What I need to prevent is 2 things:
1) New rows must not be added to either table A or B while my transaction
is executing
2) Existing rows must not be modified
My query is executed on SQL Server 2005. I do not have the option to
change the configuration of it.
Am I supposed to use table locks in my query?
SELECT A.c1, B.c1, C.c1, D.d1
FROM A WITH (TABLOCK, UPDLOCK, HOLDLOCK), ...
WHERE ...
UPDATE E
SET ...
WHERE ...
Re: Transaction Isolation Level
on 03.11.2007 23:13:30 by Erland Sommarskog
dhek (dhek@dhek.dk) writes:
> Well, my issue is that I'm reading data from tables A, B, C and D and
> updates table E.
>
> What I need to prevent is 2 things:
>
> 1) New rows must not be added to either table A and B while my transaction
> is executing
> 2) Existing rows must not be modified
I'm not sure that I get this. Assuming you assemble data into a temp
table or a table variable, and your batch goes:
INSERT #tmp(...)
SELECT ....
FROM A, B
...
INSERT #tmp(...)
SELECT ...
FROM C, D
UPDATE E
SET ...
FROM E
JOIN #tmp...
Why would it be an issue if someone adds or modifies rows in A or B
once you have run that SELECT statement? I can possibly understand
that you don't want to permit rows to be added or modified in C or D while
you are reading A and B. But once you have read A and B, it cannot matter
whether modifications happen while your transaction is running, or whether
they are held up until your transaction completes.
> My query is executed on a SQL server 2005. I do not have the option to
> change the configuration of it.
Well, if you want a read that is consistent at a certain moment in
time, snapshot isolation is your only foolproof option. It also has the
advantage of not blocking updates.
Re: Transaction Isolation Level
on 04.11.2007 03:02:36 by mooregr_deleteth1s
"dhek" wrote in message
news:472c5c63$0$90274$14726298@news.sunsite.dk...
> Well, my issue is that I'm reading data from tables A, B, C and D and
> updates table E.
>
> What I need to prevent is 2 things:
>
> 1) New rows must not be added to either table A and B while my transaction
> is executing
> 2) Existing rows must not be modified
Why?
If that is truly the case, then yes, you need a table lock.
But this seems like a fairly unusual requirement. Are you sure your design
is really what you want?
--
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
Re: Transaction Isolation Level
on 04.11.2007 10:04:06 by dhek
ok, let me try to be more clear.
The thing is I have a database containing various types of information - for
instance orders.
Instead of letting each record in an order contain a timestamp indicating
the datetime the record was replicated to another system, I have a table
containing replication times for various types of objects.
This means that while I run a transaction selecting all new orders it
is possible for others to add new orders to the database which I will
never become aware of.
- in my transaction, after having fetched the new orders, I set a timestamp
indicating the datetime of my last replication. If someone added new
records between my select statement and the update statement, then these
new records will never be replicated, because their timestamp will be less
than the one I set in my update statement when finished.
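In rough T-SQL, the race looks like this (the table and column names are made up for illustration; Orders.created is when a row was added, and ReplicationTimes holds the last sync time per object type):

```sql
BEGIN TRANSACTION
   -- 1) Fetch all orders added since the last replication.
   SELECT ...
   FROM   Orders
   WHERE  created > @lastsync

   -- <-- An order committed at this point is never picked up: the
   --     next run only looks at rows newer than the value written
   --     below, and this row is older than that.

   -- 2) Record the time of this replication.
   UPDATE ReplicationTimes
   SET    lastsync = GETUTCDATE()
   WHERE  objecttype = 'Order'
COMMIT TRANSACTION
```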
Hope this clarifies my issue.
Re: Transaction Isolation Level
on 04.11.2007 10:17:20 by dhek
"Erland Sommarskog" wrote in message
news:Xns99DDED79C3A76Yazorman@127.0.0.1...
> dhek (dhek@dhek.dk) writes:
>> Well, my issue is that I'm reading data from tables A, B, C and D and
>> updates table E.
>>
>> What I need to prevent is 2 things:
>>
>> 1) New rows must not be added to either table A and B while my
>> transaction
>> is executing
>> 2) Existing rows must not be modified
>
> I'm not sure that I get this. Assuming you assemble data into a temp
> table or a table variable, and you batch goes:
>
> INSERT #tmp(...)
> SELECT ....
> FROM A, B
> ...
>
> INSERT #tmp(...)
> SELECT ...
> FROM C, D
>
> UPDATE E
> SET ...
> FROM E
> JOIN #tmp...
As mentioned I perform a SELECT statement followed by an UPDATE-statement in
my transaction.
When I get new Orders and related information from various tables I need to
update a synchronization-table indicating the time of last sync.
My problem is that with this setup I need to prevent others from adding new
orders to the table while my transaction is executing to prevent "lost
records".
> Why would it be an issue if some adds or modifies rows into A or B
> once you have run that SELECT statement? I can possibly understand
> that you don't want permit rows to be added or modified in C or D while
> you are reading A and B. But once you have read A or B, it cannot matter
> if modifications happens while your transaction is running, or if they
> are held up until your transaction completes.
With the setup you proposed in your example I agree - it would be foolish
not to allow others to read/write data to tables A and B when done with
those tables :-)
If my sync-indicator had only been located in the records that are selected
I would not have a problem at all - I would only need to set the transaction
level to READ COMMITED or SYNCHRONIZED. I do not have the option of setting
it to snapshot isolation (it is not supported by my application).
>> My query is executed on a SQL server 2005. I do not have the option to
>> change the configuration of it.
>
> Well, if you want to read that is consistent at a certain moment in
> time, snapshot isolation is your only foolproof option. It also has the
> advantage of not blocking updates.
Do you mean it allows records to be modified, or is it the table you refer to?
Re: Transaction Isolation Level
on 04.11.2007 10:20:53 by dhek
>> Well, my issue is that I'm reading data from tables A, B, C and D and
>> updates table E.
>>
>> What I need to prevent is 2 things:
>>
>> 1) New rows must not be added to either table A and B while my
>> transaction is executing
>> 2) Existing rows must not be modified
>
> Why?
>
> If that is truly the case, then yes, you need a table lock.
>
> But this seems like a fairly unusual requirement. Sure your design is
> really what you want?
With the current setup in the database I fear this is what I want (though
I'm not very keen on the idea either :-o). I'm just trying to investigate my
options for disallowing table changes while my transaction is executing, to
prevent "lost records" as mentioned in one of my other replies.
Re: Transaction Isolation Level
on 04.11.2007 10:30:35 by dhek
> With the setup u proposed in your example I agree - it would be foolish
> not to allow others to read/write data to table A,B when done with those
> table:-)
> If my sync-indicator had only been located in the records that are
> selected I would not have a problem at all - only need to set the
> transaction level to READ COMMITED or SYNCHRONIZED. I do not have the
> option of setting it to snapshot isolation (not supported by my
> application.)
Sorry, I of course meant REPEATABLE READ and not READ COMMITED.
Re: Transaction Isolation Level
on 04.11.2007 13:27:01 by dhek
> I know that the isolation level should be either REPEATABLE READ or
> SERIALIZABLE. But I need confirmation on if one of these actually solve my
> issue - prevents other applications/threads from modifying/inserting data
> into the same tables that I'm working on.
When reading about SERIALIZABLE in
http://en.wikipedia.org/wiki/Isolation_(computer_science) I get the
understanding that the objects involved in a SELECT-statement (that is, all
columns involved in various tables) will be locked. In essence this would
render it impossible to insert new records into the tables used in my
SELECT-statement, which I believe is exactly what I need.
Is this understanding correct?
Re: Transaction Isolation Level
on 04.11.2007 14:11:44 by mooregr_deleteth1s
"dhek" wrote in message
news:472d8b03$0$90272$14726298@news.sunsite.dk...
> ok, let me try to be more clear.
>
> The thing is I have a database containing various types of information -
> for instance orders.
> Instead of letting each record in an order contain a timestamp indicating
> the datatime the record was replicated to another system, I have a table
> containing replication times for various types of objects.
Why not add such a column with a timestamp then?
>
> This means that while I initiate a transaction selecting all new orders it
> is possible for others to add other new orders to the database which I
> will never be aware of.
> - in my transaction after having gotten new orders I set a timestamp
> indicating the datetime of my last replication. If someone has added new
> records meanwhile I executed my select statement and performed the update
> statement, then these new records will never be replicated because their
> timestamp will be less than than the one I set in my update-statement when
> finished.
If you're simply looking to do replication, use SQL Server's built in
replication functionality.
It may save you a lot of effort.
Re: Transaction Isolation Level
on 04.11.2007 15:11:19 by dhek
>> The thing is I have a database containing various types of information -
>> for instance orders.
>> Instead of letting each record in an order contain a timestamp indicating
>> the datatime the record was replicated to another system, I have a table
>> containing replication times for various types of objects.
>
>
> Why not add such a column with a timestamp then?
Yes, I'd love to - but it's not just up to me, since I don't have ownership
of the database.
>> This means that while I initiate a transaction selecting all new orders
>> it is possible for others to add other new orders to the database which I
>> will never be aware of.
>> - in my transaction after having gotten new orders I set a timestamp
>> indicating the datetime of my last replication. If someone has added new
>> records meanwhile I executed my select statement and performed the update
>> statement, then these new records will never be replicated because their
>> timestamp will be less than than the one I set in my update-statement
>> when finished.
>
> If you're simply looking to do replication, use SQL Server's built in
> replication functionality.
>
> It may save you a lot of effort.
Well, I'm not sure what the possibilities really are with regards to
replication from SQL Server. I doubt that it is useful in my case, since I
pull data from an SAP system.
Re: Transaction Isolation Level
on 04.11.2007 16:50:10 by mooregr_deleteth1s
"dhek" wrote in message
news:472dd302$0$90265$14726298@news.sunsite.dk...
>>> The thing is I have a database containing various types of information -
>>> for instance orders.
>>> Instead of letting each record in an order contain a timestamp
>>> indicating the datatime the record was replicated to another system, I
>>> have a table containing replication times for various types of objects.
>>
>>
>> Why not add such a column with a timestamp then?
>
> Yes, I'd love to - but its not just up to me, since I don't have ownership
> of the database.
I'm confused then. If you don't have ownership of the DB, how can you
expect to set transaction levels and locking?
>
> Well, I'm not sure what the possibilities are really with regards to
> replication from SQL server. I doubt that it is usefull in my case, since
> I pull data from an SAP system.
Then I'm doubly confused.
Re: Transaction Isolation Level
on 04.11.2007 18:38:30 by Erland Sommarskog
dhek (dhek@dhek.dk) writes:
> The thing is I have a database containing various types of information -
> for instance orders.
> Instead of letting each record in an order contain a timestamp indicating
> the datatime the record was replicated to another system, I have a table
> containing replication times for various types of objects.
>
> This means that while I initiate a transaction selecting all new orders
> it is possible for others to add other new orders to the database which
> I will never be aware of. - in my transaction after having gotten new
> orders I set a timestamp indicating the datetime of my last replication.
Just to make things clear: is this the timestamp data type, or a datetime
value? (It sounds like the latter, but I want to be sure.)
From this description, it sounds like SERIALIZABLE would do the job, but
I think there is a better solution. Do this:
SELECT @mynewtimestamp = getdate()
SELECT ... FROM tbl
WHERE regdate BETWEEN @myoldtimestamp AND @mynewtimestamp
By first determining the upper limit of the span you will read,
there is no problem with rows that are added later, because they will
be outside your range.
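Spelled out as a full batch, the suggestion might look like this - a sketch only, with the ReplicationTimes table and the column names made up; a half-open interval is used instead of BETWEEN so a row whose regdate equals the boundary is not read by two consecutive runs:

```sql
DECLARE @myoldtimestamp datetime, @mynewtimestamp datetime

-- Fix the upper bound of the range BEFORE reading anything.
SELECT @mynewtimestamp = getdate()

-- The lower bound is whatever the previous run recorded.
SELECT @myoldtimestamp = lastsync
FROM   ReplicationTimes
WHERE  objecttype = 'Order'

BEGIN TRANSACTION
   -- Rows inserted after @mynewtimestamp fall outside the range
   -- and are simply picked up by the next run.
   SELECT ...
   FROM   Orders
   WHERE  regdate > @myoldtimestamp AND regdate <= @mynewtimestamp

   UPDATE ReplicationTimes
   SET    lastsync = @mynewtimestamp
   WHERE  objecttype = 'Order'
COMMIT TRANSACTION
```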
I would strongly recommend against using serializable, because if
conflicting updates really are a serious possibility, using serializable
is likely to wreak havoc with the concurrency of the system. Your
replication may work, but if people cannot place orders while your
replication is running, that can be a serious problem.
An interesting observation is that, as I understand your scenario,
snapshot isolation would *not* work, despite what I said before.
With snapshot you would miss rows that had been inserted when you started
reading, but which had not yet been committed.
This shows just how important it is to give a complete picture of the
problem you are trying to solve.
Re: Transaction Isolation Level
on 04.11.2007 20:55:52 by dhek
> Just to make things clear: is this the timestamp data type, or a datetime
> value? (It sounds like the latter, but I want to be sure.)
this is a datetime value obtained by the function GETUTCDATE()
> From this description, it sounds that SERIALIZABLE would do the job, but
> I think there is a better solution. Do this:
>
> SELECT @mynewtimestamp = getdate()
>
> SELECT ... FROM tbl
> WHERE regdate BETWEEN @myoldtimestamp AND @mynewtimestamp
>
> By first determining the upper limit of the span you will read,
> there is no problem with rows that are added later, because they will
> be outside your range.
I most definitely agree with you - it's so simple and so right. It does
however still leave a small time window for the problem to exist, but I
guess we're now dealing with a more theoretical issue than one that is
likely to actually occur.
In any case - this model should be used :-)
> I would strongly recommend against using serializable, because if
> conflicting updates really is a seroius possibility, using serializable
> is likely to wreak havoc with the concurrency of the system. Your
> replication may work, but if people can place orders while your
> replication is running, that can be a serious problem.
I agree - this is also my fear and is exactly why I didn't really feel too
happy about this solution. But I do believe it would also solve my issue,
since it would disallow new data to be added that conflicts with the
dataset at hand (the where-clause in the SELECT-statement part).
> An interesting observation is that as I understand you scenario is
> that snapshot isolation would *not* work, despite what I said before.
> With snapshot you would miss rows that had been inserted when you started
> reading, but which had not been committed.
>
> This shows just how important it is to give a completely picture of the
> problem you are trying to solve.
hehe, point taken:-)
Re: Transaction Isolation Level
on 04.11.2007 21:02:02 by dhek
>>> Why not add such a column with a timestamp then?
>>
>> Yes, I'd love to - but its not just up to me, since I don't have
>> ownership of the database.
>
> I'm confused then. If you don't have ownership of the DB, how can you
> expect to set transaction levels and locking?
I cannot modify the configuration of the database - this also includes
tables and their design. I can however change my statements all I want and
for instance do
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION trans1
SELECT ...
FROM ...
WITH (TABLOCK, HOLDLOCK, UPDLOCK)
UPDATE ...
SET ...
WHERE ...
COMMIT TRANSACTION trans1
which affects transactions and locking in the database.
>>
>> Well, I'm not sure what the possibilities are really with regards to
>> replication from SQL server. I doubt that it is usefull in my case, since
>> I pull data from an SAP system.
>
> Then I'm doubly confused.
Now me too:-)