This just seems too slow
on 03.01.2011 02:51:48 by Jerry Schwartz
I'm trying to load data into a simple table, and it is taking many hours (and
still not done). I know hardware, etc., can have a big effect, but NOTHING
should have this big an effect.
=====
us-gii >show create table t_dmu_history\G
*************************** 1. row ***************************
Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
`t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
`DM_History_DM_ID` int(11) DEFAULT NULL,
`DM_History_Customer_ID` int(11) DEFAULT NULL,
PRIMARY KEY (`t_dmu_history_id`),
KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
=====
Here's a snip of what the input file looks like:
=====
SET autocommit=1;
#
# Dumping data for table 'T_DMU_History'
#
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299519);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299520);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299521);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299522);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299524);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299526);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299527);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299528);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299529);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299531);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299532);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299533);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299534);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299535);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298880);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298881);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298882);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298883);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298884);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298885);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298886);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298887);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298889);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298890);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13073, 298891);
=====
There are about 870000 records.
I realize that using one INSERT per row is going to hurt, but I don't control
the format of the incoming data.
Besides, I'd have thought this would be pretty quick regardless of how clumsy
the method was.
Is that "autocommit" a problem? This is a bulk load into an empty table, so
I'm not worried about ACID.
Any suggestions?
Regards,
Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: jerry@gii.co.jp
Web site: www.the-infoshop.com
Re: This just seems too slow
on 03.01.2011 05:41:42 by mos
Jerry,
Use "Load Data Infile" when loading a lot of data. Whoever is giving
you the data should be able to dump it to a CSV file. Your imports will be
much faster.
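A minimal sketch of such a load, assuming the data arrives as a two-column CSV
with Windows line endings (column names taken from the table above; the file
name and terminators would have to match the actual file):

LOAD DATA LOCAL INFILE 't_dmu_history.csv'
INTO TABLE t_dmu_history
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n'
(DM_History_DM_ID, DM_History_Customer_ID);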
Mike
At 07:51 PM 1/2/2011, you wrote:
>I'm trying to load data into a simple table, and it is taking many hours (and
>still not done). [rest of quoted message trimmed]
RE: This just seems too slow
on 03.01.2011 05:48:49 by Daevid Vincent
Another option would be to mangle your insert statement with some other
language like PHP, Python, Ruby, etc. so that the inserts are MULTI inserts
instead. Something like:
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299519),
       (13071, 299520),
       (13071, 299521),
       ...;
That will radically speed up the inserts.
Also delete your INDEX / KEYs and add them at the very end instead.
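For illustration, a rough Python sketch of that rewrite (untested; the file
names and batch size are placeholders, and it assumes each VALUES tuple sits
on its own line ending in ";", as in the dump above):

import re

BATCH = 1000  # rows per multi-row INSERT
HEADER = ("INSERT INTO `T_DMU_History` "
          "(`DM_History_DM_ID`, `DM_History_Customer_ID`)\nVALUES\n")

def flush(out, rows):
    # Emit one multi-row INSERT for the accumulated value tuples.
    if rows:
        out.write(HEADER + ",\n".join(rows) + ";\n")
    return []

rows = []
with open('dump.sql') as src, open('dump_multi.sql', 'w') as dst:
    for line in src:
        # Each single-row statement ends with "VALUES (x, y);" on its own line.
        m = re.search(r'VALUES\s*(\(.*\));', line)
        if m:
            rows.append(m.group(1))
        if len(rows) >= BATCH:
            rows = flush(dst, rows)
    flush(dst, rows)  # write out the final partial batch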
-----Original Message-----
From: mos [mailto:mos99@fastmail.fm]
Sent: Sunday, January 02, 2011 8:42 PM
To: mysql@lists.mysql.com
Subject: Re: This just seems too slow
Jerry,
Use "Load Data Infile" when loading a lot of data. Whoever is giving
you the data should be able to dump it to a CSV file. Your imports will be
much faster.
Mike
At 07:51 PM 1/2/2011, you wrote:
>I'm trying to load data into a simple table, and it is taking many hours (and
>still not done). [rest of quoted message trimmed]
Re: This just seems too slow
on 03.01.2011 10:41:31 by Wagner Bianchi
A multiple-row insert is the better choice: the rows are grouped into
transaction blocks of many lines, which speeds up data insertion.
[bianchi@mysql.com]# mysqldump -u root -p --all-databases -e > file.dump
-e: extended-inserts
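For illustration only, extended-insert output packs many rows into each
statement, along these lines (values borrowed from the sample above; real
mysqldump output omits the column list unless --complete-insert is given):

INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299519), (13071, 299520), (13071, 299521);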
Best regards.
--
Wagner Bianchi
2011/1/3 Daevid Vincent
> Another option would be to mangle your insert statement with some other
> language like PHP, Python, Ruby, etc. so that the inserts are MULTI inserts
> instead. [rest of quoted message trimmed]
RE: This just seems too slow
on 03.01.2011 16:41:20 by Jerry Schwartz
>-----Original Message-----
>From: Wagner Bianchi [mailto:wagnerbianchijr@gmail.com]
>Sent: Monday, January 03, 2011 4:42 AM
>To: Daevid Vincent
>Cc: mysql@lists.mysql.com; mos
>Subject: Re: This just seems too slow
>
>Multiple line insert is the better choice...it will be organized in
>transaction blocks of many lines and it will speed up data insertion.
>
>[bianchi@mysql.com]# mysqldump -u root -p --all-databases -e > file.dump
>
[JS] If only I were using mysqldump :-(.
Regards,
Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: jerry@gii.co.jp
Web site: www.the-infoshop.com
>[rest of quoted message trimmed]
RE: This just seems too slow
on 03.01.2011 16:41:20 by Jerry Schwartz
>-----Original Message-----
>From: Daevid Vincent [mailto:daevid@daevid.com]
>Sent: Sunday, January 02, 2011 11:49 PM
>To: mysql@lists.mysql.com
>Cc: 'mos'
>Subject: RE: This just seems too slow
>
>Another option would be to mangle your insert statement with some other
>language like PHP, Python, Ruby, etc. so that the inserts are MULTI inserts
>instead. Something like:
>
>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>VALUES (13071, 299519),
>       (13071, 299520),
>       (13071, 299521),
>       ...;
>
>That will radically speed up the inserts.
>
[JS] I thought of that, but unfortunately this is just one of 25-odd tables
(each in a different format, of course).
>Also delete your INDEX / KEYs and add them at the very end instead.
>
[JS] Wouldn't it take as long to build the indices? I guess it probably
wouldn't.
Regards,
Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: jerry@gii.co.jp
Web site: www.the-infoshop.com
>-----Original Message-----
>From: mos [mailto:mos99@fastmail.fm]
>[rest of quoted message trimmed]
Re: This just seems too slow
on 03.01.2011 16:51:52 by shawn.l.green
On 1/3/2011 10:41, Jerry Schwartz wrote:
>> -----Original Message-----
>> From: Daevid Vincent [mailto:daevid@daevid.com]
>> Sent: Sunday, January 02, 2011 11:49 PM
>> ...
>
>> Also delete your INDEX / KEYs and add them at the very end instead.
>>
> [JS] Wouldn't it take as long to build the indices? I guess it probably
> wouldn't.
>
It will not. MySQL does not "grow" or "edit" its index files
incrementally; it computes a fresh on-disk index image for every change.
Right now, you are doing a complete index rebuild for every row you add.
If you add up the total work you are saving (index 121000 rows, index
121001 rows, index 121002 rows, ...), then you can see a big improvement
by waiting to put the indexes on the table until the very end of the process.
http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html
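A sketch of that pattern for this table (key names taken from the SHOW CREATE
TABLE output earlier in the thread; the primary key can stay in place):

ALTER TABLE t_dmu_history
  DROP KEY `DM_History_DM_ID`,
  DROP KEY `DM_History_Customer_ID`;

-- ... bulk load the ~870,000 rows ...

ALTER TABLE t_dmu_history
  ADD KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
  ADD KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`);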
Yours,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN
RE: This just seems too slow
on 03.01.2011 16:56:02 by Jerry Schwartz
Okay, I have a confession to make: I have never gotten Load Data Infile or
mysqlimport to work.
Here's my CSV file, named "t_dmu_history.txt":
13071,299519
13071,299520
13071,299521
13071,299522
13071,299524
13071,299526
13071,299527
....
Here's my mysqlimport command:
mysqlimport -uaccess -pxxx --delete --columns=`dm_history_dm_id`,`DM_History_Customer_ID`
--local --silent
--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost
maintable_usa t_dmu_history.txt
I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31, for
Win32 (ia32)"
It runs for awhile, but I wind up with only one record:
localhost >select * from t_dmu_history;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                   NULL |
+------------------+------------------+------------------------+
1 row in set (0.00 sec)
Obviously mysqlimport is parsing the input file incorrectly, but I don't know
why.
Here's the table itself:
+------------------------+---------+------+-----+---------+----------------+
| Field                  | Type    | Null | Key | Default | Extra          |
+------------------------+---------+------+-----+---------+----------------+
| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
+------------------------+---------+------+-----+---------+----------------+
Table: t_dmu_history
Create Table: CREATE TABLE `t_dmu_history` (
`t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
`DM_History_DM_ID` int(11) DEFAULT NULL,
`DM_History_Customer_ID` int(11) DEFAULT NULL,
PRIMARY KEY (`t_dmu_history_id`),
KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8
Regards,
Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: jerry@gii.co.jp
Web site: www.the-infoshop.com
>-----Original Message-----
>From: mos [mailto:mos99@fastmail.fm]
>[rest of quoted message trimmed]
mysqlimport doesn't work for me
on 03.01.2011 19:33:11 by Jerry Schwartz
This works:
localhost >TRUNCATE t_dmu_history;
Query OK, 0 rows affected (0.41 sec)
localhost >LOAD DATA LOCAL INFILE 't_dmu_history.txt' INTO TABLE t_dmu_history
FIELDS TERMINATED BY "," (`dm_history_dm_id`,`dm_history_customer_id`);
Query OK, 876211 rows affected (25.16 sec)
Records: 876211 Deleted: 0 Skipped: 0 Warnings: 0
localhost >SELECT * FROM t_dmu_history LIMIT 4;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                 299519 |
|                2 |            13071 |                 299520 |
|                3 |            13071 |                 299521 |
|                4 |            13071 |                 299522 |
+------------------+------------------+------------------------+
4 rows in set (0.03 sec)
============================
This does not work:
localhost >TRUNCATE t_dmu_history;
localhost >quit
C:\Users\Jerry\Documents\Access MySQL
Production>mysqlimport --columns=`dm_history_dm_id`,`dm_history_customer_id` --fields-terminated=','
--local --password=xxx --pipe --user=access --verbose maintable_usa
t_dmu_history.txt
Connecting to localhost
Selecting database maintable_usa
Loading data from LOCAL file: C:/Users/Jerry/Documents/Access MySQL
Production/t_dmu_history.txt into t_dmu_history
maintable_usa.t_dmu_history: Records: 876211 Deleted: 0 Skipped: 0
Warnings: 1752422
Disconnecting from localhost
localhost >SELECT * FROM t_dmu_history LIMIT 4;
+------------------+------------------+------------------------+
| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
+------------------+------------------+------------------------+
|                1 |            13071 |                   NULL |
|                2 |            13071 |                   NULL |
|                3 |            13071 |                   NULL |
|                4 |            13071 |                   NULL |
+------------------+------------------+------------------------+
4 rows in set (0.00 sec)
=========================
Before you ask, the mysql CLI is also using a named pipe.
Windows Vista 32-bit
MySQL version 5.1.31-community
Mysqlimport Ver 3.7 Distrib 5.1.31, for Win32 (ia32)
What am I missing?
Regards,
Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: jerry@gii.co.jp
Web site: www.the-infoshop.com
RE: This just seems too slow
on 03.01.2011 19:46:50 by Gavin Towey
I much prefer LOAD DATA INFILE to mysqlimport. The issue looks like you have
a file with two columns, and a table with three. You will probably need to be
more specific about which columns map to which fields in the file. Please
report the error with any commands you run.
Also, most importantly, how slow is "slow"? Have you measured the import
speed in terms of rows per second?
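(For a sense of scale, using numbers from elsewhere in this thread: the LOAD
DATA run of 876,211 rows in 25.16 seconds works out to roughly 35,000 rows per
second, while single-row INSERTs still unfinished after "many hours" on
~870,000 rows imply a rate on the order of tens of rows per second.)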
The largest factor I have found that influences overall import speed is the
innodb_buffer_pool_size. Make sure you're not running with the default size.
A buffer pool that's large enough to contain the secondary indexes on the
table will also help a lot.
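By way of example, the buffer pool is sized in the server's my.cnf/my.ini (on
5.1 a restart is needed for it to take effect); the figure below is only a
placeholder to be sized against the table and its secondary indexes:

[mysqld]
innodb_buffer_pool_size = 1G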
-----Original Message-----
From: Jerry Schwartz [mailto:jerry@gii.co.jp]
Sent: Monday, January 03, 2011 7:56 AM
To: 'mos'; mysql@lists.mysql.com
Subject: RE: This just seems too slow
Okay, I have a confession to make: I have never gotten Load Data Infile or
mysqlimport to work. [rest of quoted message trimmed]
Re: mysqlimport doesn't work for me
on 03.01.2011 19:48:12 by Carsten Pedersen
It's been a long time since I used mysqlimport, but you might want to try:
- using "--fields-terminated-by" rather than "--fields-terminated"
- losing (or escaping) the backticks in --columns=
- checking my.cnf to see if the client settings are the same for mysql>
and mysqlimport
- checking user privileges (are you using the same account in both
instances?)
- checking the line delimiter and --lines-terminated-by
FWIW, I always prefer tab-delimited files over comma-separated ones.
This gets around a lot of i18n issues.
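Putting the first two suggestions together, the corrected command might look
something like this (an untested sketch; note that mysqlimport derives the
target table name from the file name, so t_dmu_history.txt loads into
t_dmu_history):

mysqlimport --local --user=access --password=xxx --host=localhost
 --fields-terminated-by=',' --lines-terminated-by='\r\n'
 --columns=dm_history_dm_id,dm_history_customer_id
 maintable_usa t_dmu_history.txt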
/ Carsten
On 03-01-2011 19:33, Jerry Schwartz wrote:
> This works:
> [rest of quoted message trimmed]
RE: This just seems too slow
on 03.01.2011 21:12:37 by Jerry Schwartz
>-----Original Message-----
>From: Gavin Towey [mailto:gtowey@ffn.com]
>Sent: Monday, January 03, 2011 1:47 PM
>To: Jerry Schwartz; 'mos'; mysql@lists.mysql.com
>Subject: RE: This just seems too slow
>
>I much prefer LOAD DATA INFILE to mysqlimport. The issue looks like you have
>a file with two columns, and a table with three. You will probably need to
>be more specific about which columns map to which fields in the file. Please
>report the error with any commands you run.
>
[JS] I gave this information in a new thread that I started, "mysqlimport
doesn't work for me."
>Also, most importantly, how slow is "slow"? Have you measured the import
>speed in terms of rows per second?
>
[JS] Let's just say you could use tree rings as a timer.
>The largest factor I have found that influences overall import speed is the
>innodb_buffer_pool_size. Make sure you're not running with the default size.
>A buffer pool that's large enough to contain the secondary indexes on the
>table will also help a lot.
>
[JS] I haven't done any tuning. I expected to do that later, when the
application went live. (Don't worry about that.)
Regards,
Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: jerry@gii.co.jp
Web site: www.the-infoshop.com
>
>
>-----Original Message-----
>From: Jerry Schwartz [mailto:jerry@gii.co.jp]
>Sent: Monday, January 03, 2011 7:56 AM
>To: 'mos'; mysql@lists.mysql.com
>Subject: RE: This just seems to slow
>
>Okay, I have a confession to make: I have never gotten Load Data Infile or
>mysqlimport to work.
>
>Here's my CSV file, named "t_dmu_history.txt":
>
>13071,299519
>13071,299520
>13071,299521
>13071,299522
>13071,299524
>13071,299526
>13071,299527
>...
>
>Here's my mysqlimport command:
>
>mysqlimport -uaccess -pxxx --delete --
>columns=`dm_history_dm_id`,`DM_History_Customer_ID`
> --local --silent
>--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost
>maintable_usa t_dmu_history.txt
>
>I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib 5.1.31, for
>Win32 (ia32)"
>
>It runs for a while, but I wind up with only one record:
>
>localhost >select * from t_dmu_history;
>+------------------+------------------+------------------------+
>| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
>+------------------+------------------+------------------------+
>|                1 |            13071 |                   NULL |
>+------------------+------------------+------------------------+
>1 row in set (0.00 sec)
>
>Obviously mysqlimport is parsing the input file incorrectly, but I don't know
>why.
>
>Here's the table itself:
>
>+------------------------+---------+------+-----+---------+----------------+
>| Field                  | Type    | Null | Key | Default | Extra          |
>+------------------------+---------+------+-----+---------+----------------+
>| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
>| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
>| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
>+------------------------+---------+------+-----+---------+----------------+
>
> Table: t_dmu_history
>Create Table: CREATE TABLE `t_dmu_history` (
> `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
> `DM_History_DM_ID` int(11) DEFAULT NULL,
> `DM_History_Customer_ID` int(11) DEFAULT NULL,
> PRIMARY KEY (`t_dmu_history_id`),
> KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
> KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8
>
>Regards,
>
>Jerry Schwartz
>Global Information Incorporated
>195 Farmington Ave.
>Farmington, CT 06032
>
>860.674.8796 / FAX: 860.674.8341
>E-mail: jerry@gii.co.jp
>Web site: www.the-infoshop.com
>
>
>>-----Original Message-----
>>From: mos [mailto:mos99@fastmail.fm]
>>Sent: Sunday, January 02, 2011 11:42 PM
>>To: mysql@lists.mysql.com
>>Subject: Re: This just seems to slow
>>
>>Jerry,
>> Use "Load Data Infile" when loading a lot of data. Whoever is giving
>>you the data should be able to dump it to a CSV file. Your imports will be
>>much faster.
>>
>>Mike
>>
>>At 07:51 PM 1/2/2011, you wrote:
>>>[snip]
RE: This just seems to slow
am 03.01.2011 21:24:57 von mos
Jerry,
Try this:
mysqlimport -uusername -ppassword --verbose --debug-info --delete
--columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local
--fields-terminated-by="," --fields-optionally-enclosed-by="\""
--lines-terminated-by="\r\n" --host=localhost yourdbname t_dmu_history.txt
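[Under the hood, mysqlimport issues a LOAD DATA statement built from those
options; the equivalent, run from the mysql client against the same file, would
look roughly like this:
=====
LOAD DATA LOCAL INFILE 't_dmu_history.txt'
INTO TABLE t_dmu_history
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
(`dm_history_dm_id`, `DM_History_Customer_ID`);
=====
]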
I use Load Data Infile all the time, usually inside of a program like Delphi.
Mike
At 09:56 AM 1/3/2011, Jerry Schwartz wrote:
>[snip]
RE: mysqlimport doesn't work for me
am 03.01.2011 21:35:47 von Jerry Schwartz
>-----Original Message-----
>From: Carsten Pedersen [mailto:carsten@bitbybit.dk]
>Sent: Monday, January 03, 2011 1:48 PM
>To: Jerry Schwartz
>Cc: 'mos'; mysql@lists.mysql.com
>Subject: Re: mysqlimport doesn't work for me
>
>It's been a long time since I used mysqlimport, but you might want to try:
>
>- using "--fields-terminated-by" rather than "--fields-terminated"
[JS] Good catch! Unfortunately, it didn't fix the problem:
Connecting to localhost
Selecting database maintable_usa
Deleting the old data from table t_dmu_history
Loading data from LOCAL file: C:/Users/Jerry/Documents/Access_MySQL
Tests/t_dmu_history.txt into t_dmu_history
maintable_usa.t_dmu_history: Records: 1 Deleted: 0 Skipped: 0 Warnings: 2
Disconnecting from localhost
I don't know if there's any way to find out what those warnings are.
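[There is, as Mike notes later in the thread: mysqlimport can't display them,
but running the equivalent LOAD DATA from the mysql client and following it
with SHOW WARNINGS can. A sketch:
=====
LOAD DATA LOCAL INFILE 't_dmu_history.txt' INTO TABLE t_dmu_history
FIELDS TERMINATED BY ',' (`dm_history_dm_id`,`dm_history_customer_id`);
SHOW WARNINGS LIMIT 10;
=====
]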
>- losing (or escaping) the backticks in --columns=
>
[JS] The loons who designed this system were fond of putting spaces in the
table names. I've tried this particular table with and without the back-ticks.
Many of the table and field names are in Japanese, too. I shudder to think how
that will work out.
>- checking my.cnf to see if the client settings are the same for mysql>
>and mysqlimport
[JS] Good thought.
>- checking user privileges (are you using the same account in both
>instances?)
[JS] Yes.
>- checking the line delimiter and --lines-terminated-by
>
>FWIW, I always prefer tab-delimited files over comma-separated ones.
>This gets around a lot of i18n issues.
>
[JS] No doubt.
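[Tab-delimited input also matches the LOAD DATA defaults, FIELDS TERMINATED BY
'\t' and LINES TERMINATED BY '\n', so the statement needs no delimiter clauses
at all:
=====
LOAD DATA LOCAL INFILE 't_dmu_history.txt' INTO TABLE t_dmu_history
(`dm_history_dm_id`, `dm_history_customer_id`);
=====
]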
>/ Carsten
>
>On 03-01-2011 19:33, Jerry Schwartz wrote:
>>
>>
>> This works:
>>
>> localhost>TRUNCATE t_dmu_history;
>> Query OK, 0 rows affected (0.41 sec)
>>
>> localhost>LOAD DATA LOCAL INFILE 't_dmu_history.txt' INTO TABLE
>> t_dmu_history
>> FIELDS TERMINATED BY "," (`dm_history_dm_id`,`dm_history_customer_id`);
>>
>> Query OK, 876211 rows affected (25.16 sec)
>> Records: 876211 Deleted: 0 Skipped: 0 Warnings: 0
>>
>> localhost>SELECT * FROM t_dmu_history LIMIT 4;
>> +------------------+------------------+------------------------+
>> | t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
>> +------------------+------------------+------------------------+
>> |                1 |            13071 |                 299519 |
>> |                2 |            13071 |                 299520 |
>> |                3 |            13071 |                 299521 |
>> |                4 |            13071 |                 299522 |
>> +------------------+------------------+------------------------+
>> 4 rows in set (0.03 sec)
>> [snip]
RE: This just seems to slow
am 04.01.2011 00:08:13 von Jerry Schwartz
Folks, this is getting creepy. It seems like each of the variations you've
given me works sometimes and not others. I haven't found the pattern yet.
During the hours it took my initial data loads to finish, I rewrote the import
process so that (I hope) I won't have to go through this again.
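[For anyone hitting the same wall with an INSERT-per-row dump: with SET
autocommit=1, each of the roughly 870,000 INSERTs commits, and with default
settings flushes the InnoDB log, individually. Wrapping the whole file in one
transaction avoids that; a sketch of the idea, not what was ultimately done
here:
=====
SET autocommit=0;
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299519);
INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
VALUES (13071, 299520);
-- ... remaining rows ...
COMMIT;
=====
]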
Thanks for your help.
Regards,
Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: jerry@gii.co.jp
Web site: www.the-infoshop.com
>-----Original Message-----
>From: mos [mailto:mos99@fastmail.fm]
>Sent: Monday, January 03, 2011 3:25 PM
>To: Jerry Schwartz; mysql@lists.mysql.com
>Subject: RE: This just seems to slow
>
>[snip]
RE: This just seems to slow
am 04.01.2011 18:26:23 von mos
At 05:08 PM 1/3/2011, Jerry Schwartz wrote:
>Folks, this is getting creepy. It seems like each of the variations you've
>given me works some times and not others. I haven't found the pattern yet.
The SQL I sent you works on my server just fine with your table and your
data. BTW, there is no way to get a list of warnings produced from
MySQLImport. You will need to switch to SQL, execute "Load Data InFile ...",
and after that has executed, do a "Show Warnings" to display the warnings. I
always use "Load Data Infile" rather than MySQLImport because I have more
control over its execution. Load Data should be 10x faster than using
Inserts, so it is worth considering.
The MySQL server prefers the import file to be in the server's data
directory. If you are trying to import it from another location, you need to
change your my.ini file. Please see the article
http://dev.mysql.com/doc/refman/5.0/en/load-data-local.html
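[Per that page, LOCAL also has to be enabled on both the client and the
server; in 5.1 that, too, is a my.ini matter. A minimal sketch, assuming the
default option-group names:
=====
[mysqld]
local-infile=1

[mysql]
local-infile=1
=====
]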
Mike
>[snip]
RE: This just seems to slow
am 04.01.2011 23:22:11 von Jerry Schwartz
I did wind up using LOAD DATA INFILE.
When I started, I was afraid that I was going to process about 20 tables every
day, but I redid the data exchange to avoid that.
Regards,
Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: jerry@gii.co.jp
Web site: www.the-infoshop.com
>-----Original Message-----
>From: mos [mailto:mos99@fastmail.fm]
>Sent: Tuesday, January 04, 2011 12:26 PM
>To: Jerry Schwartz; 'mos'; mysql@lists.mysql.com
>Subject: RE: This just seems to slow
>
>[snip]
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=gcdmg-mysql-2@m.gmane.org