Bug with AUTO_INCREMENT in BDB table transactions

on 10.07.2002 18:39:43 by Steve Hay

=========================
BRIEF PROBLEM DESCRIPTION
=========================

I have a problem simultaneously running two instances of a Perl program
that rapidly repeats a transaction which INSERTs a row into a BDB table
with an AUTO_INCREMENT column.

No value is specified for the AUTO_INCREMENT column: it is left up to
the database to choose the "next" value. After a large (but variable)
number of iterations it apparently chooses the value "1", and then gives
the error:

Duplicate entry '1' for key 1

when trying to COMMIT the transaction because the value "1" has already
been used.
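
In SQL terms the failing transaction boils down to the following (using
the test table "foo" described under "FULL PROBLEM DESCRIPTION" below):

SET AUTOCOMMIT=0;
INSERT INTO foo VALUES (NULL);
COMMIT;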

I have checked the MySQL documentation and mailing list archives
(http://lists.mysql.com), and have posted several questions about this
problem to the mailing list (mysql@lists.mysql.com) myself, but haven't
found any answers yet.

===================
VERSION INFORMATION
===================

MySQL Version: Ver 3.23.51-max-nt for NT on i32
MySQL Distribution: Binary distribution from http://www.mysql.com

Machine Manufacturer: Hewlett Packard
Machine Model: Vectra VL 600 Series

Processor: Intel Pentium III, 667MHz
Physical Memory: 384MB
Virtual Memory: 139MB

OS Name: Microsoft Windows NT Workstation
OS Version: 4.0 (Build 1381: Service Pack 6)

Perl Version: v5.6.1 built for MSWin32-x86
DBI Version: 1.28
DBD::mysql Version: 2.1017

========================
FULL PROBLEM DESCRIPTION
========================

How-To-Repeat:

The problem can be reproduced by following the steps below on the system
described in "VERSION INFORMATION" above.

Start the mysqld-max-nt server. This has been installed as an NT Service
with the "--install" command-line option. The "my.ini" configuration
file located in the "WINNT" folder contains the following lines only:

[mysqld]
basedir = D:/mysql
skip-innodb
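
For reference, the service was installed and started with commands along
the following lines (exact paths may differ on another system):

D:\mysql\bin\mysqld-max-nt --install
NET START mysql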

Open two Command Prompts. Make sure that the paths to the MySQL "bin"
folder and the Perl "bin" folder are on the PATH in each, e.g.:

SET PATH=D:\WINNT\system32;D:\WINNT;D:\mysql\bin;D:\perl5\bin

Run the following commands to create the "test" database that will be
used in what follows:

mysqladmin -f drop test >NUL 2>&1
mysqladmin create test
mysql -u root -e "CREATE TABLE foo (id INT AUTO_INCREMENT PRIMARY KEY)
TYPE=BDB" test

Run the following command to confirm the database structure that has
been created:

mysqldump --no-data test

This should output:

-- MySQL dump 8.22
--
-- Host: localhost Database: test
---------------------------------------------------------
-- Server version 3.23.51-max-nt

--
-- Table structure for table 'foo'
--

CREATE TABLE foo (
id int(11) NOT NULL auto_increment,
PRIMARY KEY (id)
) TYPE=BerkeleyDB;

Put the following Perl program into a file called "insert.pl":

use strict;
use warnings;

use DBI;

my $dbh = DBI->connect(
    'dbi:mysql:test', 'root', '',
    {AutoCommit => 0, PrintError => 0, RaiseError => 1}
);

for (my $deadlocks = 0, my $total = 0, my $i = 1; $i <= 50000; $i++) {
    $total++;

    eval {
        $dbh->do("INSERT INTO foo VALUES (NULL)");
        $dbh->commit();
    };

    if ($@) {
        if ($DBI::err == 1213) {
            $deadlocks++;
            print "$total: $i: Deadlock! ($deadlocks)\n";

            eval {
                $dbh->rollback();
            };

            if ($@) {
                print "Rollback Error! ($@)\n";
                last;
            }

            redo;
        }
        else {
            print "$total: $i: Error! ($@)\n";
            last;
        }
    }
    else {
        $deadlocks = 0;
        print "$total: $i: Success!\n";
    }
}

$dbh->disconnect();

Now run two instances of this Perl program simultaneously by running:

perl insert.pl

in each Command Prompt.
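
If preferred, both instances can also be launched from a single Command
Prompt using "start", which opens each command in its own window:

start perl insert.pl
start perl insert.pl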

Each program tries to INSERT a row into the table "foo", letting the
database choose a value for the "id" AUTO_INCREMENT column.

Very often a deadlock error (number 1213) occurs with the instruction
to retry the transaction -- the program above therefore detects that
error and retries the transaction as instructed. I don't understand why
this program should give deadlock errors - it doesn't obviously suffer
from any of the usual reasons for deadlock (cycle deadlock, conversion
deadlock or thread deadlock) - and it never gives deadlock errors when
run on a Sybase database system on the same machine.
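
Incidentally, the retry logic could also pause briefly between attempts,
which might reduce the chance of the two clients repeatedly colliding.
Here is a minimal sketch (the helper name and delay values are mine and
are not part of the test program above, which simply retries straight
away):

sub do_with_retry {
    my ($dbh, $sql, $max_tries) = @_;
    for my $try (1 .. $max_tries) {
        eval {
            $dbh->do($sql);
            $dbh->commit();
        };
        return 1 unless $@;                        # committed OK
        die $@ unless $DBI::err == 1213;           # only retry deadlocks
        $dbh->rollback();
        select(undef, undef, undef, 0.01 * $try);  # short, growing pause
    }
    return 0;                                      # gave up
}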

Anyway, typical output from each program starts off something like:

1: 1: Deadlock! (1)
2: 1: Deadlock! (2)
3: 1: Success!
4: 2: Success!
5: 3: Success!
6: 4: Success!
7: 5: Success!
8: 6: Success!
9: 7: Success!
10: 8: Success!
11: 9: Success!
12: 10: Success!
13: 11: Success!
14: 12: Success!
15: 13: Success!
16: 14: Deadlock! (1)
17: 14: Success!
18: 15: Success!
19: 16: Success!
20: 17: Success!
21: 18: Success!
22: 19: Success!
23: 20: Success!
24: 21: Success!
25: 22: Success!
26: 23: Success!
27: 24: Success!
28: 25: Success!
29: 26: Success!
30: 27: Deadlock! (1)
31: 27: Deadlock! (2)
32: 27: Deadlock! (3)
33: 27: Deadlock! (4)
34: 27: Deadlock! (5)
35: 27: Success!
36: 28: Success!
37: 29: Success!
38: 30: Deadlock! (1)
39: 30: Success!
40: 31: Success!

Sometimes there are long periods in which one program succeeds every
time and the other deadlocks every time, and then they'll swap around
with the first program deadlocking every time and the other succeeding
every time.

They each carry on succeeding or deadlocking for a large, but variable,
number of iterations, but eventually it ends up with both programs
deadlocking every time. After running like that for a while one of them
then gets a different error, namely:

Duplicate entry '1' for key 1

Typical output from the end of the program that dies looks like:

7900: 2170: Deadlock! (459)
7901: 2170: Deadlock! (460)
7902: 2170: Deadlock! (461)
7903: 2170: Deadlock! (462)
7904: 2170: Deadlock! (463)
7905: 2170: Deadlock! (464)
7906: 2170: Deadlock! (465)
7907: 2170: Deadlock! (466)
7908: 2170: Deadlock! (467)
7909: 2170: Deadlock! (468)
7910: 2170: Deadlock! (469)
7911: 2170: Deadlock! (470)
7912: 2170: Deadlock! (471)
7913: 2170: Deadlock! (472)
7914: 2170: Deadlock! (473)
7915: 2170: Deadlock! (474)
7916: 2170: Deadlock! (475)
7917: 2170: Deadlock! (476)
7918: 2170: Deadlock! (477)
7919: 2170: Error! (DBD::mysql::db do failed: Duplicate entry '1' for
key 1 at insert.pl line 15.
)

The numbers shown here vary every time the test is run -- it doesn't
always fail after 7919 tries or 2170 successes or 477 consecutive
deadlocks.

Sometimes the error happens after only a few thousand tries, sometimes
it doesn't happen until 40 or 50 thousand tries, so you may need to
increase the number of iterations before you see the error when running
on a system with a different specification.

The error always seems to come, though, and always a very long way
before the maximum INT value has been reached, so it is not simply a
case of the INT overflowing and starting over at 1 (not that it would do
so anyway as far as I'm aware).
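
One way to confirm this while the tests run is to check the current
maximum value in the table and (assuming SHOW TABLE STATUS reports it
for BDB tables) the next auto-increment value, e.g.:

SELECT MAX(id) FROM foo;
SHOW TABLE STATUS LIKE 'foo';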

The error can also be reproduced using the mysql command-line tool, as
follows. (Recreate the "test" database before starting this so that we
are running on a clean database every time.)

Put the following Perl program into a file called "sql.pl":

use strict;
use warnings;

print "SET AUTOCOMMIT=0;\n";

for (1 .. 50000) {
    print "INSERT INTO foo VALUES (NULL);\n";
    print "COMMIT;\n";
}

Now run:

perl sql.pl | mysql -f -u root test 2> session1.txt

in one Command Prompt, and:

perl sql.pl | mysql -f -u root test 2> session2.txt

in the other Command Prompt.

This time the deadlock errors are ignored, rather than retried, and the
duplicate entry error is also ignored, rather than causing the test to
exit. The errors are written to STDERR by the mysql tool, and STDERR is
then redirected to two log files, session1.txt and session2.txt.

Inspecting the two log files created by these tests, one typically finds
thousands of deadlock errors and one or two duplicate entry errors,
e.g.:

ERROR 1213 at line 6980: Deadlock found when trying to get lock; Try
restarting transaction
ERROR 1213 at line 6982: Deadlock found when trying to get lock; Try
restarting transaction
ERROR 1213 at line 6984: Deadlock found when trying to get lock; Try
restarting transaction
ERROR 1213 at line 6986: Deadlock found when trying to get lock; Try
restarting transaction
ERROR 1213 at line 6988: Deadlock found when trying to get lock; Try
restarting transaction
ERROR 1213 at line 6990: Deadlock found when trying to get lock; Try
restarting transaction
ERROR 1213 at line 6992: Deadlock found when trying to get lock; Try
restarting transaction
ERROR 1213 at line 6994: Deadlock found when trying to get lock; Try
restarting transaction
ERROR 1213 at line 6996: Deadlock found when trying to get lock; Try
restarting transaction
ERROR 1062 at line 6998: Duplicate entry '1' for key 1
ERROR 1213 at line 7000: Deadlock found when trying to get lock; Try
restarting transaction
ERROR 1213 at line 7110: Deadlock found when trying to get lock; Try
restarting transaction
ERROR 1213 at line 7412: Deadlock found when trying to get lock; Try
restarting transaction
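
To tally the errors quickly, the standard NT "find" command can count
the matching lines in each log, e.g.:

find /C "ERROR 1213" session1.txt session2.txt
find /C "ERROR 1062" session1.txt session2.txt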

I have found that if I use a MyISAM table instead of a BDB table, or
even if I use a BDB table but set "AUTOCOMMIT=1" and don't bother with
explicit COMMIT and ROLLBACK statements, then I don't get this problem.
However, I need transaction support in the software that I'm working on
so this is not a viable workaround.
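
For reference, the non-transactional variant that does not show the
problem amounts to the following change to sql.pl (AUTOCOMMIT is left on
and no explicit COMMITs are issued):

print "SET AUTOCOMMIT=1;\n";

for (1 .. 50000) {
    print "INSERT INTO foo VALUES (NULL);\n";
}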

I would greatly appreciate a fix for this problem, and also an
explanation of why there are so many deadlock errors using MySQL when
other database systems running the same test programs don't get any.

Thanks in advance,

Steve Hay

Senior Software Applications Engineer
Radan Computational Limited



Re: Bug with AUTO_INCREMENT in BDB table transactions

on 14.07.2002 12:40:00 by Peter Zaitsev

Hello Steve,

Wednesday, July 10, 2002, 8:39:43 PM, you wrote:


SH> =========================
SH> BRIEF PROBLEM DESCRIPTION
SH> =========================

SH> I have a problem simultaneously running two instances of a Perl program
SH> that rapidly repeats a transaction which INSERTs a row into a BDB table
SH> with an AUTO_INCREMENT column.

SH> No value is specified for the AUTO_INCREMENT column: it is left up to
SH> the database to choose the "next" value. After a large (but variable)
SH> number of iterations it apparently chooses the value "1", and then gives
SH> the error:

SH> Duplicate entry '1' for key 1

SH> when trying to COMMIT the transaction because the value "1" has already
SH> been used.

Thank you for your bug report. We will take a look at it and see if
there is a possibility of fixing this for BDB tables.

For now, I would propose using InnoDB tables as a workaround.
They also support transactions, as well as having many other features.
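
For the test case above that would mean removing the skip-innodb line
from my.ini and creating the table with TYPE=InnoDB instead (assuming
the default InnoDB settings are sufficient for you), for example:

CREATE TABLE foo (id INT AUTO_INCREMENT PRIMARY KEY) TYPE=InnoDB;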



SH> Very often a deadlock error (number 1213) occurs with the instruction
SH> to retry the transaction -- the program above therefore detects that
SH> error and retries the transaction as instructed. I don't understand why
SH> this program should give deadlock errors - it doesn't obviously suffer
SH> from any of the usual reasons for deadlock (cycle deadlock, conversion
SH> deadlock or thread deadlock) - and it never gives deadlock errors when
SH> run on a Sybase database system on the same machine.

This is a well-known issue with BDB tables. BDB tables use page-level
locking, which is known to produce more deadlock errors than row-level
locking. This is one more reason to use InnoDB tables: they have
row-level locking plus consistent reads, so you are unlikely to get
many deadlock errors, if any.




--
For technical support contracts, visit https://order.mysql.com/?ref=mpza
Mr. Peter Zaitsev
MySQL AB, Full-Time Developer
Moscow, Russia
www.mysql.com    M: +7 095 725 4955

