Slow ALTER TABLE on 70M row InnoDB table

on 18.08.2010 17:34:24 by Xn Nooby

I have been trying to speed up an ALTER TABLE command that adds a
column to a large InnoDB table of about 70M rows.

I have found and tried many different methods, but they are all slow.
I have tried both optimizing the ALTER TABLE command, and dumping and
reloading the table (in both SQL and CSV formats). The table size is
about 10GB, and the combined index size is about 6GB. I am trying to
understand why it is slow.

I have read that dumping and loading in CSV format should be the
absolute fastest, and it does take only 20 minutes to dump the 70M
rows. However, it takes LOAD DATA INFILE 13 hours to import the CSV
file. My understanding of LOAD DATA INFILE was that it was already
optimized to load the data first and build the indices afterward. I
don't understand why it takes so long.
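
For reference, a typical invocation looks something like the
following (the file path and delimiters here are only illustrative):

LOAD DATA INFILE '/tmp/the_table.csv'
INTO TABLE the_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';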

I have read that breaking a SQL dump into "chunks" is also supposed
to be fast, but later chunks insert more slowly than earlier chunks.
This is with keys disabled, and other options disabled.

Ideally I could stick with the ALTER TABLE command, and I have played
around with a lot of the buffer settings. My understanding is that
any enabled key indices need to fit into RAM, and I have played
around with a lot of those settings. Mainly I have increased the
buffer pool size and the log buffer size.

When importing records, I see the "free buffers" slowly run out, and
the import speed drops off when the buffers are used up. The first
few million rows import at up to 30k rows per second, but eventually
it slows to a crawl. I have read a lot about this on the MySQL
Performance Blog.
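
The buffer state is visible through the standard status commands; a
quick sketch of what I am watching:

# free pages in the buffer pool, sampled while the import runs
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';

# "Free buffers" also appears in the BUFFER POOL AND MEMORY section
SHOW ENGINE INNODB STATUS\G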

There is a lot of information on the web about this topic, but I am
not always sure which parts apply to MyISAM and which apply to
InnoDB. I have not experimented with MyISAM, since my tables are
InnoDB.

This process is slow both on a larger box, which belongs to someone
else, and on my own desktop PC.

Should I stick with trying to make ALTER TABLE work, or should I be
trying to get LOAD DATA INFILE to work?

Any suggestions on adding a column to a large table?


Re: Slow ALTER TABLE on 70M row InnoDB table

on 18.08.2010 18:57:54 by mos

At 10:34 AM 8/18/2010, Xn Nooby wrote:

>rows. However, it takes LOAD DATA INFILE 13 hours to import the CSV
>file. My understanding of LOAD DATA INFILE was that it was already
>optimized to load the data first and build the indices afterward. I
>don't understand why it takes so long.

A common misconception about the LOAD DATA INFILE command: it defers
index building only when you are loading data into an empty table
(which you probably are), and even then only for the non-unique
indexes. The unique and primary key indexes are still built while the
data is being loaded, and I suspect this is why it slows down over
time.

Before loading the data, I would recommend removing ALL of the
indexes. Then, after the data has been loaded, issue a single ALTER
TABLE statement that rebuilds all of the indexes in one pass. See if
that makes a difference.
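
A minimal sketch of that sequence (table, file, and index names here
are hypothetical):

# strip the secondary indexes before the load
ALTER TABLE the_table DROP INDEX ix_a, DROP INDEX ix_b;

# load the data into the stripped-down table
LOAD DATA INFILE '/tmp/the_table.csv' INTO TABLE the_table;

# rebuild every index with a single statement, one pass over the table
ALTER TABLE the_table
  ADD INDEX ix_a (col2, col3),
  ADD INDEX ix_b (col4, col6);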

The other alternative is to create another table with the new table
structure but without the indexes. Then do a

insert into newtable select * from oldtable;

and then create the indexes on the new table with a single Alter statement.


Mike



RE: Slow ALTER TABLE on 70M row InnoDB table

on 18.08.2010 19:56:27 by Travis Ard

What are you using as your primary key on this table? Is it an
auto_increment field or something non-sequential? Do you have your
secondary indexes in place while you load the table, or are you
explicitly disabling them and re-enabling them afterward?
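
For InnoDB, a common way to approximate disabling keys during a bulk
load is to relax the session-level checks; a sketch, to be adapted to
your schema:

# relax checks for the duration of the bulk load (session scope)
SET unique_checks = 0;
SET foreign_key_checks = 0;

# ... run the LOAD DATA INFILE or INSERT ... SELECT here ...

# restore the defaults afterward
SET unique_checks = 1;
SET foreign_key_checks = 1;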

-Travis


Re: Slow ALTER TABLE on 70M row InnoDB table

on 18.08.2010 21:52:42 by Xn Nooby

Below is a generic version of the code I am trying. It does copy the
rows very quickly, but I will have to test to see how quickly the
indices are built. Is the code below what you were suggesting? I had
a little trouble dropping and later re-adding the primary key, but I
think I got it figured out.

Below I basically do this:
  make the_table_clone from the_table
  drop the indices on the_table_clone
  copy the rows from the_table to the_table_clone
  add the indices back to the_table_clone

If this runs fast enough, I will then drop the_table and rename
the_table_clone to the_table.
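
A safer variant of that last step: a single RENAME TABLE statement
swaps the tables atomically, so readers never see a missing table;
for example:

# swap atomically, keeping the old table around until verified
RENAME TABLE the_table TO the_table_old,
             the_table_clone TO the_table;

# once the new table checks out:
# DROP TABLE the_table_old;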


USE the_database;


DROP TABLE IF EXISTS the_table_clone;

CREATE TABLE the_table_clone LIKE the_table;


# drop minor indices on clone

ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col2_col3;

ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col4_col6;

ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col5_col2;


# drop primary index on clone (remove AUTO_INCREMENT first, then the key)

ALTER TABLE the_table_clone CHANGE id id INT UNSIGNED;

ALTER TABLE the_table_clone DROP PRIMARY KEY;


# add 2 new columns to clone

ALTER TABLE the_table_clone
  ADD flag_field int(11) DEFAULT '0',
  ADD price_amount float DEFAULT '0';


# copy rows (the LIMIT clause is for testing; the commented statement
# copies everything)

INSERT INTO the_table_clone SELECT *, NULL, NULL FROM the_table LIMIT 0,10000000;

# INSERT INTO the_table_clone SELECT *, NULL, NULL FROM the_table;


# add back indices in one statement (for max speed)

ALTER TABLE the_table_clone
  ADD INDEX IX_the_table_on_col2_col3 (col2,col3),
  ADD INDEX IX_the_table_on_col4_col6 (col4,col6),
  ADD INDEX IX_the_table_on_col5_col2 (col5,col2),
  MODIFY id INT SIGNED AUTO_INCREMENT,
  ADD PRIMARY KEY(col1);


Re: Slow ALTER TABLE on 70M row InnoDB table

on 18.08.2010 22:50:24 by mos

At 02:52 PM 8/18/2010, Xn Nooby wrote:
>Below is a generic version of the code I am trying. It does copy the
>rows very quickly, but I will have to test to see how quickly the
>indices are built. Is the code below what you were suggesting? I had
>a little trouble dropping and later re-adding the primary key, but I
>think I got it figured out.
>
>Below I basically do this:
> make the_table_clone from the_table
> drop the indices on the_table_clone
> copy the rows from the_table to the_table_clone
> add the indices back to the_table_clone
>
>If this runs fast enough, I will then drop the_table, and rename
>the_table_clone to the_table
>
>
>USE the_database;
>
>
>DROP TABLE IF EXISTS the_table_clone;
>
>CREATE TABLE the_table_clone LIKE the_table;

Or you can try something like:

CREATE TABLE the_table_clone ENGINE=InnoDB SELECT * FROM the_table LIMIT 0;

This will create the same table structure but not the indexes, so you
don't have to drop the indexes below.



># drop minor indices on clone
>
>ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col2_col3;
>
>ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col4_col6;
>
>ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col5_col2;
>
>
># drop primary index on clone
>
>ALTER TABLE the_table_clone CHANGE id id INT UNSIGNED;

You still need the statement above to change the autoinc column to a
plain integer if you use my CREATE TABLE ... SELECT statement.



>ALTER TABLE the_table_clone DROP PRIMARY KEY;
>
>
># add 2 new columns to clone
>
>ALTER TABLE the_table_clone ADD flag_field int(11) DEFAULT '0', ADD
>price_amount float DEFAULT '0';
>
>
># copy rows
>
>INSERT INTO the_table_clone SELECT *, NULL, NULL FROM the_table LIMIT 0,10000000;
>
>#INSERT INTO the_table_clone SELECT *, NULL, NULL FROM the_table;

Why do you have two insert statements? If you are inserting a group of
records at a time, then you need a LIMIT clause on each, incrementing
the offset by the number of rows already added.
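
A sketch of that chunked copy (chunk size illustrative; note that a
large OFFSET still scans from the beginning of the table, so chunking
on a primary key range is usually faster):

# chunk 1
INSERT INTO the_table_clone SELECT *, NULL, NULL FROM the_table LIMIT 0,10000000;
# chunk 2: offset advanced by the previous chunk's row count
INSERT INTO the_table_clone SELECT *, NULL, NULL FROM the_table LIMIT 10000000,10000000;

# alternative: range on the primary key instead of an offset
INSERT INTO the_table_clone
  SELECT *, NULL, NULL FROM the_table WHERE id >= 1 AND id < 10000001;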

I would explicitly specify the column list for both the Insert and the
Select to make sure they match up. There is no point going through all this
if it inserts the data into the wrong columns!
Check the data before creating the indexes to make sure the same number of
rows have been copied over and the data is in the correct columns.
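
For example (column names hypothetical, and the two new columns are
left to their defaults):

INSERT INTO the_table_clone (id, col1, col2, col3, col4, col5, col6)
  SELECT id, col1, col2, col3, col4, col5, col6 FROM the_table;

# verify the copy before building the indexes
SELECT COUNT(*) FROM the_table;
SELECT COUNT(*) FROM the_table_clone;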



># Add back indices in one command (for max speed)
>
>ALTER TABLE the_table_clone
> ADD INDEX IX_the_table_on_col2_col3 (col2,col3),
> ADD INDEX IX_the_table_on_col4_col6 (col4,col6),
> ADD INDEX IX_the_table_on_col5_col2 (col5,col2),
> MODIFY id INT SIGNED AUTO_INCREMENT,
> ADD PRIMARY KEY(col1);

Correct.

Mike




Re: Slow ALTER TABLE on 70M row InnoDB table

on 18.08.2010 22:57:58 by Xn Nooby

Hi Mike, my comments are below:

On Wed, Aug 18, 2010 at 4:50 PM, mos wrote:
> At 02:52 PM 8/18/2010, Xn Nooby wrote:
>>
>> Below is a generic version of the code I am trying. It does copy the
>> rows very quickly, but I will have to test to see how quickly the
>> indices are built. Is the code below what you were suggesting? I had
>> a little trouble dropping and later re-adding the primary key, but I
>> think I got it figured out.
>>
>> Below I basically do this:
>> make the_table_clone from the_table
>> drop the indices on the_table_clone
>> copy the rows from the_table to the_table_clone
>> add the indices back to the_table_clone
>>
>> If this runs fast enough, I will then drop the_table, and rename
>> the_table_clone to the_table
>>
>>
>> USE the_database;
>>
>>
>> DROP TABLE IF EXISTS the_table_clone;
>>
>> CREATE TABLE the_table_clone LIKE the_table;
>
> Or you can try something like:
>
> CREATE TABLE the_table_clone ENGINE=InnoDB SELECT * FROM the_table LIMIT 0;
> This will create the same table structure but not the indexes, so you
> don't have to drop the indexes below.


That is good to know. I did not mind dropping the indices in this
case, because the table was still empty.


>
>
>
>> # drop minor indices on clone
>>
>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col2_col3;
>>
>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col4_col6;
>>
>> ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col5_col2;
>>
>>
>> # drop primary index on clone
>>
>> ALTER TABLE the_table_clone CHANGE id id INT UNSIGNED;
>
> You still need the statement above to change the autoinc column to a
> plain integer if you use my CREATE TABLE ... SELECT statement.
>
>
>
>> ALTER TABLE the_table_clone DROP PRIMARY KEY;
>>
>>
>> # add 2 new columns to clone
>>
>> ALTER TABLE the_table_clone ADD flag_field int(11) DEFAULT '0', ADD
>> price_amount float DEFAULT '0';
>>
>>
>> # copy rows
>>
>> INSERT INTO the_table_clone SELECT *, NULL, NULL FROM the_table LIMIT
>> 0,10000000;
>>
>> #INSERT INTO the_table_clone SELECT *, NULL, NULL FROM the_table;
>
> Why do you have two insert statements? If you are inserting a group of
> records at a time, then you need a LIMIT clause on each, incrementing
> the offset by the number of rows already added.


The 2nd INSERT is disabled with the # character. I am using the
statement with the LIMIT for testing, and will switch to the other
command when I want to process all the records.


>
> I would explicitly specify the column list for both the Insert and the
> Select to make sure they match up. There is no point going through all
> this if it inserts the data into the wrong columns!
> Check the data before creating the indexes to make sure the same number
> of rows have been copied over and the data is in the correct columns.


Okay. I thought it was safe to assume that the new columns would
appear on the right side of the column list.


>
>
>
>> # Add back indices in one command (for max speed)
>>
>> ALTER TABLE the_table_clone
>> ADD INDEX IX_the_table_on_col2_col3 (col2,col3),
>> ADD INDEX IX_the_table_on_col4_col6 (col4,col6),
>> ADD INDEX IX_the_table_on_col5_col2 (col5,col2),
>> MODIFY id INT SIGNED AUTO_INCREMENT,
>> ADD PRIMARY KEY(col1);
>
> Correct.


The insert took 7 minutes on 10M rows, and that ALTER command took
another 46 minutes.


> Mike


Re: Slow ALTER TABLE on 70M row InnoDB table

on 18.08.2010 23:06:08 by Xn Nooby

It appears the ALTER TABLE starts off quick and then slows down. I
suspect the indices are larger than the allocated RAM, and the system
is slowing down because it is busy swapping out to disk. Is there an
InnoDB-specific buffer that can help with this? The "sort_buffer_size"
apparently applies only to MyISAM tables.

My InnoDB-specific settings are:

innodb_additional_mem_pool_size=13M
innodb_flush_log_at_trx_commit=2
innodb_log_buffer_size=7M
innodb_buffer_pool_size=616M
innodb_log_file_size=24M
innodb_thread_concurrency=10
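
If the indexes really are outgrowing the buffer pool, the usual lever
is to grow it. A sketch of my.cnf changes, assuming the machine has a
few spare GB of RAM (and noting that changing innodb_log_file_size
requires a clean shutdown and removal of the old ib_logfile* files):

[mysqld]
# give InnoDB most of the available RAM on a dedicated box
innodb_buffer_pool_size = 4G
# larger redo logs smooth out heavy write bursts
innodb_log_file_size = 256M
innodb_log_buffer_size = 16M
# flush the log roughly once per second during the bulk load
innodb_flush_log_at_trx_commit = 2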





Re: Slow ALTER TABLE on 70M row InnoDB table

on 18.08.2010 23:22:05 by Xn Nooby

From what I have read, ALTER TABLE to add an index causes the entire
table to be duplicated, so wouldn't my ALTER TABLE command be
duplicating the work done by the SELECT command?





Re: Slow ALTER TABLE on 70M row InnoDB table

on 21.08.2010 20:56:42 by Eric Bergen

Most ALTER TABLE operations in 5.0 will rebuild the entire table. The
best thing to increase for ALTER TABLE speed in InnoDB is the buffer
pool. For more details on how InnoDB handles ALTER TABLE, see
http://ebergen.net/wordpress/2007/05/07/how-alter-table-locks-tables-and-handles-transactions/

On Wednesday, August 18, 2010, Xn Nooby wrote:
> From what I have read, ALTER TABLE to add an index causes the entire
> table to be duplicated, so wouldn't my ALTER TABLE command be
> duplicating the work done by the SELECT command?

--
Eric Bergen
eric.bergen@gmail.com
http://www.ebergen.net
