Table optimization ideas needed

on 25.03.2008 04:30:41 by skylark

Hi all,

I made a post a week ago asking about the fastest way to get the
total record count from a table.
Fyi,
http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table

Look at the time even a 'count(1)' took.
Then you can imagine how much time sql such as "select a,b from
table_name where c='d'" will take.

I have a lot of tables like that. So my question is:
What's your practice to optimize tables like that?

Thanks in advance.

--
Regards,
Shelley



Re: Table optimization ideas needed

on 25.03.2008 04:50:02 by dmagick

Shelley wrote:
> Hi all,
>
> I made a post a week ago asking about the fastest way to get the
> total record count from a table.
> Fyi,
> http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table
>
>
> Look at the time even a 'count(1)' took.
> Then you can imagine how much time sql such as "select a,b from
> table_name where c='d'" will take.
>
> I have a lot of tables like that. So my question is:
> What's your practice to optimize tables like that?

I pretty much follow what I've said in this article:

http://www.designmagick.com/article/16/PostgreSQL/How-to-index-a-database

--
Postgresql & php tutorials
http://www.designmagick.com/


Re: Table optimization ideas needed

on 25.03.2008 06:18:50 by JH

From a quick perusal of the article Chris mentions, I'd generally agree
with that view about table optimization -- I'm not an expert on
Postgres, but the recommendations generally seem to apply to MySQL as well.

My basic view is that, if you are routinely doing a select on millions
of rows, you probably need to take a step back and consider your general
structure.

Short of revising the structure, and beyond the indexing Chris
suggested, a couple of off-the-cuff ideas: if the stability of the
table is not critical, use MyISAM tables rather than InnoDB tables; and
try using stored procedures (MySQL >= 5.0).
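
For what it's worth, a minimal sketch of the engine switch ('visits' is a
hypothetical table name; the ALTER rewrites the whole table and locks it,
so test it on a copy first):

-- convert a transactional table to MyISAM (no transaction support afterwards)
ALTER TABLE visits ENGINE = MyISAM;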

While it isn't always true, my experience is that a table with a
million rows or more usually signals a problem created at design time:
the initial assumption was that the table would never grow that large,
so the general data structure was never fully thought through.

Google is capable of handling searches through billions of rows of data
not because it uses supercomputers but because of its data structure.

Just my two centavos,

Jeff

Chris wrote:
> Shelley wrote:
>> Hi all,
>>
>> I made a post a week ago asking about the fastest way to get the
>> total record count from a table.
>> Fyi,
>> http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table
>>
>>
>> Look at the time even a 'count(1)' took.
>> Then you can imagine how much time sql such as "select a,b from
>> table_name where c='d'" will take.
>>
>> I have a lot of tables like that. So my question is:
>> What's your practice to optimize tables like that?
>
> I pretty much follow what I've said in this article:
>
> http://www.designmagick.com/article/16/PostgreSQL/How-to-index-a-database
>



Re: Table optimization ideas needed

on 26.03.2008 03:16:02 by skylark


Yes, an index can help a lot.
But there are already five indexes on it. The table takes 1.4G of space
while the indexes take 2.3G.
The SELECT statement is still slow. :(

On Tue, Mar 25, 2008 at 11:50 AM, Chris wrote:

> Shelley wrote:
> > Hi all,
> >
> > I made a post a week ago asking about the fastest way to get the
> > total record count from a table.
> > Fyi,
> >
> http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table
> >
> >
> > Look at the time even a 'count(1)' took.
> > Then you can imagine how much time sql such as "select a,b from
> > table_name where c='d'" will take.
> >
> > I have a lot of tables like that. So my question is:
> > What's your practice to optimize tables like that?
>
> I pretty much follow what I've said in this article:
>
> http://www.designmagick.com/article/16/PostgreSQL/How-to-index-a-database
>
> --
> Postgresql & php tutorials
> http://www.designmagick.com/
>



--
Regards,
Shelley


Re: Table optimization ideas needed

on 26.03.2008 03:20:12 by dmagick

Shelley wrote:
> Yes, an index can help a lot.
> But there are already five indexes on it. The table takes 1.4G of space
> while the indexes take 2.3G.
> The SELECT statement is still slow. :(

Post your exact query, table definition(s), indexes and see if anyone
has some suggestions.

If it's a mysql db, join one of the mysql lists and see if anyone has
some suggestions (though they will ask for the same info). Same for any
other db.

--
Postgresql & php tutorials
http://www.designmagick.com/


Re: Table optimization ideas needed

on 26.03.2008 06:18:28 by skylark


+--------------+-----------------------+------+-----+-------------------+----------------+
| Field        | Type                  | Null | Key | Default           | Extra          |
+--------------+-----------------------+------+-----+-------------------+----------------+
| id           | int(11)               |      | PRI | NULL              | auto_increment |
| owner_id     | int(11)               |      | MUL | 0                 |                |
| owner_name   | varchar(50)           |      |     |                   |                |
| visitor_id   | int(11)               |      | MUL | 0                 |                |
| visitor_name | varchar(100)          |      |     |                   |                |
| visit_time   | timestamp             | YES  |     | CURRENT_TIMESTAMP |                |
| first_time   | int(10) unsigned      |      |     | 0                 |                |
| last_time    | int(10) unsigned      |      | MUL | 0                 |                |
| visit_num    | mediumint(8) unsigned |      |     | 0                 |                |
| status       | tinyint(3) unsigned   |      | MUL | 0                 |                |
+--------------+-----------------------+------+-----+-------------------+----------------+

That's the table which has more than 20 million records.

On Wed, Mar 26, 2008 at 10:20 AM, Chris wrote:

> Shelley wrote:
> > Yes, an index can help a lot.
> > But there are already five indexes on it. The table takes 1.4G of space
> > while the indexes take 2.3G.
> > The SELECT statement is still slow. :(
>
> Post your exact query, table definition(s), indexes and see if anyone
> has some suggestions.
>
> If it's a mysql db, join one of the mysql lists and see if anyone has
> some suggestions (though they will ask for the same info). Same for any
> other db.
>
> --
> Postgresql & php tutorials
> http://www.designmagick.com/
>



--
Regards,
Shelley


Re: Table optimization ideas needed

on 26.03.2008 06:27:04 by dmagick

Shelley wrote:
> +--------------+-----------------------+------+-----+-------------------+----------------+
> | Field        | Type                  | Null | Key | Default           | Extra          |
> +--------------+-----------------------+------+-----+-------------------+----------------+
> | id           | int(11)               |      | PRI | NULL              | auto_increment |
> | owner_id     | int(11)               |      | MUL | 0                 |                |
> | owner_name   | varchar(50)           |      |     |                   |                |
> | visitor_id   | int(11)               |      | MUL | 0                 |                |
> | visitor_name | varchar(100)          |      |     |                   |                |
> | visit_time   | timestamp             | YES  |     | CURRENT_TIMESTAMP |                |
> | first_time   | int(10) unsigned      |      |     | 0                 |                |
> | last_time    | int(10) unsigned      |      | MUL | 0                 |                |
> | visit_num    | mediumint(8) unsigned |      |     | 0                 |                |
> | status       | tinyint(3) unsigned   |      | MUL | 0                 |                |
> +--------------+-----------------------+------+-----+-------------------+----------------+
>
> That's the table which has more than 20 million records.

And what query are you running?

What does:

explain your_query_here;

show?

I can see indexes on at least owner_id, visitor_id, last_time and
status ('MUL' just means the column is the first column of a non-unique
index).

Can you show us the index definitions:

show indexes from table_name;

or

show create table table_name;

and just include the indexes at the bottom.

--
Postgresql & php tutorials
http://www.designmagick.com/


Re: Table optimization ideas needed

on 26.03.2008 15:05:35 by Roberto Mansfield

Shelley wrote:
> Hi all,
>
> I made a post a week ago asking about the fastest way to get the
> total record count from a table.
> Fyi,
> http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table
>

Hi Shelley,

I question your mysql database setup. I have a log table with about 2
million records which I used for comparison. Here are the queries you
mentioned in your link above:

> SELECT COUNT(*) FROM test_table;
> SELECT COUNT(id) FROM test_table;
> SELECT COUNT(1) FROM test_table;
>
> The results go here:
>
> mysql> SELECT COUNT(*) FROM test_table;
> +----------+
> | count(*) |
> +----------+
> | 20795139 |
> +----------+
> 1 row in set (1 min 8.22 sec)

A count(*) against the entire table does not scan all rows so this
should be very fast. In my case, a full table count was about .06 seconds.


> mysql> SELECT COUNT(id) FROM test_table;
> +-----------+
> | count(id) |
> +-----------+
> | 20795139 |
> +-----------+
> 1 row in set (1 min 1.45 sec)

This query counts all the rows where id is not null. This DOES require a
scan of all your table rows and so will be much slower. In my table,
this type of query ran about 6.6 seconds. Scaling to 10x (for 20 million
records), this query would take about a minute which compares to your
results.

> mysql> SELECT COUNT(1) FROM test_table;
> +----------+
> | count(1) |
> +----------+
> | 20795139 |
> +----------+
> 1 row in set (56.67 sec)

This query was very fast in my table as well. Since "1" is never null,
there is no full table scan.


> Then you can imagine how much time sql such as "select a,b from
> table_name where c='d'" will take.

If c is indexed and the number of resulting rows is "small" then this
will be fast. But if you need to retrieve several hundred thousand rows,
this will take time to process. In my log table, a count(*) of rows with
a restriction:

select count(*) from log where username = 'test';

returned 104777 in ~0.4 seconds, but retrieving all those records (from
a separate mysql db host) took 15 seconds.
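
For reference, that speed relies on an index on the restricted column - a
sketch, assuming the column is simply named username:

-- without this index the restricted count falls back to a full table scan
CREATE INDEX log_username ON log (username);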

Roberto


Re: Re: Table optimization ideas needed

on 27.03.2008 00:26:17 by dmagick

Roberto Mansfield wrote:
> Shelley wrote:
>> Hi all,
>>
>> I made a post a week ago asking about the fastest way to get the
>> total record count from a table.
>> Fyi,
>> http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table
>>
>
> Hi Shelley,
>
> I question your mysql database setup. I have a log table with about 2
> million records which I used for comparison. Here are the queries you
> mentioned in your link above:
>
>> SELECT COUNT(*) FROM test_table;
>> SELECT COUNT(id) FROM test_table;
>> SELECT COUNT(1) FROM test_table;
>>
>> The results go here:
>>
>> mysql> SELECT COUNT(*) FROM test_table;
>> +----------+
>> | count(*) |
>> +----------+
>> | 20795139 |
>> +----------+
>> 1 row in set (1 min 8.22 sec)
>
> A count(*) against the entire table does not scan all rows so this
> should be very fast. In my case, a full table count was about .06 seconds.

You're assuming she's using a myisam table - which will indeed be fast.
Switch to an innodb table (or falcon if you're feeling adventurous) and
you'll have this issue because they support transactions and are prone
to MVCC issues.
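
For what it's worth, if an approximate row count would do, InnoDB keeps an
estimate that comes back instantly - a sketch ('your_table' is a placeholder):

SHOW TABLE STATUS LIKE 'your_table';  -- the Rows column is only an estimate
-- or, on MySQL 5.0+:
SELECT table_rows FROM information_schema.tables
WHERE table_name = 'your_table';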

--
Postgresql & php tutorials
http://www.designmagick.com/


Re: Re: Table optimization ideas needed

on 27.03.2008 02:33:10 by skylark


Yes, Chris. You are right. I think I mentioned in the archive that the
table uses the InnoDB engine.
Maybe Roberto didn't notice that.

On Thu, Mar 27, 2008 at 7:26 AM, Chris wrote:

> Roberto Mansfield wrote:
>
> > Shelley wrote:
> >
> > > Hi all,
> > >
> > > I made a post a week ago asking about the fastest way to get the
> > > total record count from a table.
> > > Fyi,
> > >
> > > http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table
> > >
> > >
> > Hi Shelley,
> >
> > I question your mysql database setup. I have a log table with about 2
> > million records which I used for comparison. Here are the queries you
> > mentioned in your link above:
> >
> > SELECT COUNT(*) FROM test_table;
> > > SELECT COUNT(id) FROM test_table;
> > > SELECT COUNT(1) FROM test_table;
> > >
> > > The results go here:
> > >
> > > mysql> SELECT COUNT(*) FROM test_table;
> > > +----------+
> > > | count(*) |
> > > +----------+
> > > | 20795139 |
> > > +----------+
> > > 1 row in set (1 min 8.22 sec)
> > >
> >
> > A count(*) against the entire table does not scan all rows so this
> > should be very fast. In my case, a full table count was about .06
> > seconds.
> >
>
> You're assuming she's using a myisam table - which will indeed be fast.
> Switch to an innodb table (or falcon if you're feeling adventurous) and
> you'll have this issue because they support transactions and are prone to
> MVCC issues.
>
>
> --
> Postgresql & php tutorials
> http://www.designmagick.com/
>



--
Regards,
Shelley


Re: Table optimization ideas needed

on 27.03.2008 02:46:16 by skylark


Does that mean MySQL is slow?

Currently one practice I am using is: compute $m = $userId % 256, then
store $userId's information in table_$m.
That way the table with more than 20,000,000 records is split into 256
tables, and that can speed up the query.
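
To illustrate the routing (using the columns from the table definition I
posted earlier; table_57 is just the shard that user 12345 happens to map to):

SELECT 12345 % 256;  -- 57, so user 12345's rows live in table_57
SELECT owner_id, visitor_id, visit_time FROM table_57 WHERE owner_id = 12345;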

I want to listen to your opinion about that.

Actually, I wonder how Facebook is dealing with this matter. Does anybody know?


On Wed, Mar 26, 2008 at 10:05 PM, Roberto Mansfield wrote:

> Shelley wrote:
> > Hi all,
> >
> > I made a post a week ago asking about the fastest way to get the
> > total record count from a table.
> > Fyi,
> >
> http://phparch.cn/index.php/mysql/35-MySQL-programming/126-fastest-way-to-get-total-records-from-a-table
> >
>
> Hi Shelley,
>
> I question your mysql database setup. I have a log table with about 2
> million records which I used for comparison. Here are the queries you
> mentioned in your link above:
>
> > SELECT COUNT(*) FROM test_table;
> > SELECT COUNT(id) FROM test_table;
> > SELECT COUNT(1) FROM test_table;
> >
> > The results go here:
> >
> > mysql> SELECT COUNT(*) FROM test_table;
> > +----------+
> > | count(*) |
> > +----------+
> > | 20795139 |
> > +----------+
> > 1 row in set (1 min 8.22 sec)
>
> A count(*) against the entire table does not scan all rows so this
> should be very fast. In my case, a full table count was about .06 seconds.
>
>
> > mysql> SELECT COUNT(id) FROM test_table;
> > +-----------+
> > | count(id) |
> > +-----------+
> > | 20795139 |
> > +-----------+
> > 1 row in set (1 min 1.45 sec)
>
> This query counts all the rows where id is not null. This DOES require a
> scan of all your table rows and so will be much slower. In my table,
> this type of query ran about 6.6 seconds. Scaling to 10x (for 20 million
> records), this query would take about a minute which compares to your
> results.
>
> > mysql> SELECT COUNT(1) FROM test_table;
> > +----------+
> > | count(1) |
> > +----------+
> > | 20795139 |
> > +----------+
> > 1 row in set (56.67 sec)
>
> This query was very fast in my table as well. Since "1" is never null,
> there is no full table scan.
>
>
> > Then you can imagine how much time sql such as "select a,b from
> > table_name where c='d'" will take.
>
> If c is indexed and the number of resulting rows is "small" then this
> > will be fast. But if you need to retrieve several hundred thousand rows,
> this will take time to process. In my log table, a count(*) of rows with
> a restriction:
>
> select count(*) from log where username = 'test';
>
> returned 104777 in ~0.4 seconds, but retrieving all those records (from
> a separate mysql db host) took 15 seconds.
>
> Roberto
>



--
Regards,
Shelley


Re: Table optimization ideas needed

on 27.03.2008 03:03:16 by dmagick

> mysql> explain select count(*) from message;
> +----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
> | id | select_type | table   | type  | possible_keys | key    | key_len | ref  | rows     | Extra       |
> +----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
> |  1 | SIMPLE      | message | index | NULL          | status | 1       | NULL | 23051499 | Using index |
> +----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
>
> Three queries return the same results.

That's never going to be fast because you're using innodb tables.

From a previous post:

> Then you can imagine how much time sql such as "select a,b from
> table_name where c='d'" will take.
>
> I have a lot of tables like that. So my question is:
> What's your practice to optimize tables like that?

I thought that's what you needed help with?

--
Postgresql & php tutorials
http://www.designmagick.com/


Re: Table optimization ideas needed

on 27.03.2008 03:03:25 by skylark


Thank you very much, Chris. :)
Fyi,


On Wed, Mar 26, 2008 at 1:27 PM, Chris wrote:

> Shelley wrote:
>
> >
> > +--------------+-----------------------+------+-----+-------------------+----------------+
> > | Field        | Type                  | Null | Key | Default           | Extra          |
> > +--------------+-----------------------+------+-----+-------------------+----------------+
> > | id           | int(11)               |      | PRI | NULL              | auto_increment |
> > | owner_id     | int(11)               |      | MUL | 0                 |                |
> > | owner_name   | varchar(50)           |      |     |                   |                |
> > | visitor_id   | int(11)               |      | MUL | 0                 |                |
> > | visitor_name | varchar(100)          |      |     |                   |                |
> > | visit_time   | timestamp             | YES  |     | CURRENT_TIMESTAMP |                |
> > | first_time   | int(10) unsigned      |      |     | 0                 |                |
> > | last_time    | int(10) unsigned      |      | MUL | 0                 |                |
> > | visit_num    | mediumint(8) unsigned |      |     | 0                 |                |
> > | status       | tinyint(3) unsigned   |      | MUL | 0                 |                |
> > +--------------+-----------------------+------+-----+-------------------+----------------+
> >
> > That's the table which has more than 20 million records.
> >
>
> And what query are you running?
>
> What does:
>
> explain your_query_here;
>
> show?

mysql> explain select count(*) from message;
+----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
| id | select_type | table   | type  | possible_keys | key    | key_len | ref  | rows     | Extra       |
+----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
|  1 | SIMPLE      | message | index | NULL          | status | 1       | NULL | 23051499 | Using index |
+----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+

Three queries return the same results.


>
>
> I can see indexes on at least owner_id, visitor_id, last_time and status
> ('MUL' just means the column is the first column of a non-unique index).
>
> Can you show us the index definitions:
>
> show indexes from table_name;
>
> or
>
> show create table table_name;
>
> and just include the indexes at the bottom.


mysql> show indexes from message;
+---------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table   | Non_unique | Key_name   | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+---------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| message |          0 | PRIMARY    |            1 | id          | A         |    23051499 |     NULL | NULL   |      | BTREE      |         |
| message |          1 | owner_id   |            1 | owner_id    | A         |      922059 |     NULL | NULL   |      | BTREE      |         |
| message |          1 | visitor_id |            1 | visitor_id  | A         |      501119 |     NULL | NULL   |      | BTREE      |         |
| message |          1 | status     |            1 | status      | A         |          18 |     NULL | NULL   |      | BTREE      |         |
| message |          1 | last_time  |            1 | last_time   | A         |    11525749 |     NULL | NULL   |      | BTREE      |         |
+---------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
5 rows in set (1.09 sec)


>
>
> --
> Postgresql & php tutorials
> http://www.designmagick.com/
>



--
Regards,
Shelley


Re: Re: Table optimization ideas needed

on 27.03.2008 03:06:56 by dmagick

> Actually, I wonder how Facebook is dealing with this matter. Does anybody know?

There's lots of info here:

http://highscalability.com/

about various websites (some using mysql, some using postgres, some
using oracle etc).

--
Postgresql & php tutorials
http://www.designmagick.com/


Re: Table optimization ideas needed

on 27.03.2008 03:17:03 by skylark


On Thu, Mar 27, 2008 at 10:03 AM, Chris wrote:

>
> > mysql> explain select count(*) from message;
> > +----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
> > | id | select_type | table   | type  | possible_keys | key    | key_len | ref  | rows     | Extra       |
> > +----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
> > |  1 | SIMPLE      | message | index | NULL          | status | 1       | NULL | 23051499 | Using index |
> > +----+-------------+---------+-------+---------------+--------+---------+------+----------+-------------+
> >
> > Three queries return the same results.
> >
>
> That's never going to be fast because you're using innodb tables.

Should I change it to MyISAM ones?

>
>
> From a previous post:
>
> > Then you can imagine how much time sql such as "select a,b from
> > table_name where c='d'" will take.
> >
> > I have a lot of tables like that. So my question is:
> > What's your practice to optimize tables like that?
>
> I thought that's what you needed help with?
>
No. That's only part of it. I have a cron job which gets the total
visits often.

>
>
> --
> Postgresql & php tutorials
> http://www.designmagick.com/
>



--
Regards,
Shelley


Re: Re: Table optimization ideas needed

on 27.03.2008 03:18:25 by skylark


On Thu, Mar 27, 2008 at 10:06 AM, Chris wrote:

>
> > Actually, I wonder how Facebook is dealing with this matter. Does
> > anybody know?
> >
>
> There's lots of info here:
>
> http://highscalability.com/
>
> about various websites (some using mysql, some using postgres, some using
> oracle etc).
>
Thanks.

>
>
> --
> Postgresql & php tutorials
> http://www.designmagick.com/
>



--
Regards,
Shelley


Re: Table optimization ideas needed

on 27.03.2008 03:40:59 by dmagick

> That's never going to be fast because you're using innodb tables.
>
> Should I change it to MyISAM ones?

It depends.

Do you need or use transactions? If so, you can't change - MyISAM
doesn't support them.
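
In case it helps you decide: transactions group statements so they commit
or roll back together. A sketch with made-up tables:

START TRANSACTION;
UPDATE account SET balance = balance - 100 WHERE id = 1;
UPDATE account SET balance = balance + 100 WHERE id = 2;
COMMIT;  -- or ROLLBACK; to undo both statements

If nothing in your code does that, you aren't using them.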


> No. That's only part of it. I have a cron job which gets the total
> visits often.

If you are using mysql 5.0+, use a trigger to update a counter:

It'll depend on your data set and what you need to get out of the data
(ie what queries you are running) but if you just need a total or even
total per day, this would be the best.

Might need to do it as a stored procedure so you can see if the date
already exists in that table so you can either do an update or insert,
or maybe you can use replace into without needing to do that check.
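
Something along these lines - a minimal sketch only; the summary table
and trigger names are made up:

-- summary table: one row per day
CREATE TABLE visit_totals (
    day   DATE NOT NULL PRIMARY KEY,
    total INT UNSIGNED NOT NULL DEFAULT 0
);

-- bump the day's counter on every insert into the big table
DELIMITER //
CREATE TRIGGER message_counter AFTER INSERT ON message
FOR EACH ROW
BEGIN
    INSERT INTO visit_totals (day, total)
    VALUES (CURRENT_DATE, 1)
    ON DUPLICATE KEY UPDATE total = total + 1;
END//
DELIMITER ;

The cron job then reads one row instead of counting 20 million.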

See http://dev.mysql.com/doc/refman/5.0/en/using-triggers.html and
http://dev.mysql.com/doc/refman/5.0/en/triggers.html - and look at the
comments:


Triggers can use the CALL statement:

DROP TRIGGER cinema.TESTTRIGGER;
CREATE TRIGGER cinema.TESTTRIGGER AFTER INSERT ON film FOR EACH ROW
CALL cinema.CHECKFILM('Old boy');

It works. The trigger fires and does its job. The only requirement is to
use the word 'AFTER' when you create your trigger. 'BEFORE' doesn't work
(the trigger is called and runs, but the insert fails in every case).



--
Postgresql & php tutorials
http://www.designmagick.com/


Re: Table optimization ideas needed

on 27.03.2008 05:56:18 by skylark


On Thu, Mar 27, 2008 at 10:40 AM, Chris wrote:

>
> That's never going to be fast because you're using innodb tables.
> >
> > Should I change it to MyISAM ones?
> >
>
> It depends.
>
> Do you need or use transactions? If so, you can't change - MyISAM doesn't
> support them.
>

I haven't tried transactions so far. :(

>
>
>
> No. That's only part of it. I have a cron job which gets the total visits
> > often.
> >
>
> If you are using mysql 5.0+, use a trigger to update a counter:
>
It is 5.0+.


>
> It'll depend on your data set and what you need to get out of the data (ie
> what queries you are running) but if you just need a total or even total per
> day, this would be the best.
>

Good idea. But I wonder whether calling the trigger on each insert will
lose any performance.


>
> Might need to do it as a stored procedure so you can see if the date
> already exists in that table so you can either do an update or insert, or
> maybe you can use replace into without needing to do that check.
>
> See http://dev.mysql.com/doc/refman/5.0/en/using-triggers.html and
> http://dev.mysql.com/doc/refman/5.0/en/triggers.html - and look at the
> comments:
>
>
> Triggers can use the CALL statement:
>
> DROP TRIGGER cinema.TESTTRIGGER;
> CREATE TRIGGER cinema.TESTTRIGGER AFTER INSERT ON film FOR EACH ROW
> CALL cinema.CHECKFILM('Old boy');
>
> It works. The trigger fires and does its job. The only requirement is to
> use the word 'AFTER' when you create your trigger. 'BEFORE' doesn't work
> (the trigger is called and runs, but the insert fails in every case).
>
>
>
>
>
> --
> Postgresql & php tutorials
> http://www.designmagick.com/
>



--
Regards,
Shelley


Re: Table optimization ideas needed

on 27.03.2008 06:55:25 by dmagick

> Good idea. But I wonder whether calling the trigger on each insert will
> lose any performance.

It's going to affect things slightly, but whether it'll be noticeable
only you can answer by testing.

Another option I sometimes see is to set up a replicated slave and run
your reports off that instead of the live system. Then:

1) it won't bog the live db down
2) it doesn't really matter how many queries you run
3) it doesn't really matter how long they take to run
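
The MySQL side of that is mostly configuration - a rough sketch (server
ids, host name and credentials are all made up):

# master my.cnf
[mysqld]
server-id = 1
log-bin   = mysql-bin

# slave my.cnf
[mysqld]
server-id = 2

-- then on the slave:
CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;

and point the cron/report queries at the slave instead of the master.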

--
Postgresql & php tutorials
http://www.designmagick.com/


Re: Table optimization ideas needed

on 28.03.2008 02:41:36 by skylark


On Thu, Mar 27, 2008 at 1:55 PM, Chris wrote:

> Good idea. But I wonder whether calling the trigger on each insert will
> > lose any performance.
> >
>
> It's going to affect things slightly, but whether it'll be noticeable only
> you can answer by testing.
>
> Another option I sometimes see is to set up a replicated slave and run your
> reports off that instead of the live system. Then:
>
> 1) it won't bog the live db down
> 2) it doesn't really matter how many queries you run
> 3) it doesn't really matter how long they take to run
>

Hmmmmm... This makes sense. :)

I indeed have several slaves running.

Great. Thank you.


Thank you for all your help.

>
>
> --
> Postgresql & php tutorials
> http://www.designmagick.com/
>



--
Regards,
Shelley
