Do not convert MySQL system tables in the mysql database from MyISAM to InnoDB tables! This is an unsupported operation. If you do this, MySQL does not restart until you restore the old system tables from a backup or regenerate them with the mysql_install_db script.
It is not a good idea to configure InnoDB to use data files or log files on NFS volumes. Otherwise, the files might be locked by other processes and become unavailable for use by MySQL.
A table cannot contain more than 1000 columns.
The InnoDB internal maximum key length is 3500 bytes, but MySQL itself restricts this to 1024 bytes. Index key prefixes can be up to 767 bytes (255 bytes before MySQL 4.1.2). See Section 12.1.4, “CREATE INDEX Syntax”.
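For example, a prefix index on a long character column stays within the 767-byte limit by indexing only the leading bytes of the column. The table and column names here are illustrative:

CREATE TABLE customer (name VARCHAR(200)) TYPE = InnoDB;
CREATE INDEX part_of_name ON customer (name(10));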
The maximum row length, except for variable-length columns (VARBINARY, VARCHAR, BLOB, and TEXT), is slightly less than half of a database page. That is, the maximum row length is about 8000 bytes. LONGBLOB and LONGTEXT columns must be less than 4GB, and the total row length, including BLOB and TEXT columns, must be less than 4GB.
If a row is less than half a page long, all of it is stored locally within the page. If it exceeds half a page, variable-length columns are chosen for external off-page storage until the row fits within half a page, as described in Section 13.2.12.2, “File Space Management”.
On some older operating systems, files must be less than 2GB. This is not a limitation of InnoDB itself, but if you require a large tablespace, you need to configure it using several smaller data files rather than one or a few large data files.
The combined size of the InnoDB log files must be less than 4GB.
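As a rough check, the combined size is the product of the innodb_log_file_size and innodb_log_files_in_group server variables; the following statement (assuming the MySQL 4.x/5.0 variable names) shows both so you can verify that the product stays below 4GB:

SHOW VARIABLES LIKE 'innodb_log%';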
The minimum tablespace size is 10MB. The maximum tablespace size is four billion database pages (64TB). This is also the maximum size for a table.
InnoDB tables do not support FULLTEXT indexes.
InnoDB tables do not support spatial data types.
ANALYZE TABLE determines index cardinality (as displayed in the Cardinality column of SHOW INDEX output) by doing eight random dives to each of the index trees and updating index cardinality estimates accordingly. Because these are only estimates, repeated runs of ANALYZE TABLE may produce different numbers. This makes ANALYZE TABLE fast on InnoDB tables but not 100% accurate because it does not take all rows into account.
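For example, to refresh the estimates for a hypothetical table t and inspect the resulting Cardinality values:

ANALYZE TABLE t;
SHOW INDEX FROM t;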
MySQL uses index cardinality estimates only in join optimization. If some join is not optimized in the right way, you can try using ANALYZE TABLE. In the few cases that ANALYZE TABLE does not produce values good enough for your particular tables, you can use FORCE INDEX with your queries to force the use of a particular index, or set the max_seeks_for_key system variable to ensure that MySQL prefers index lookups over table scans. See Section 5.1.3, “Server System Variables”, and Section A.5.6, “Optimizer-Related Issues”.
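A sketch of both workarounds, with a hypothetical table t1 and index idx_col1:

SELECT * FROM t1 FORCE INDEX (idx_col1) WHERE col1 = 10;
SET max_seeks_for_key = 100;  -- session setting; makes table scans look more expensive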
SHOW TABLE STATUS does not give accurate statistics on InnoDB tables, except for the physical size reserved by the table. The row count is only a rough estimate used in SQL optimization.
InnoDB does not keep an internal count of rows in a table. (In practice, this would be somewhat complicated due to multi-versioning.) To process a SELECT COUNT(*) FROM t statement, InnoDB must scan an index of the table, which takes some time if the index is not entirely in the buffer pool. If your table does not change often, using the MySQL query cache is a good solution. To get a fast count, you have to use a counter table you create yourself and let your application update it according to the inserts and deletes it does. SHOW TABLE STATUS can also be used if an approximate row count is sufficient. See Section 13.2.14.1, “InnoDB Performance Tuning Tips”.
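A minimal sketch of the counter-table approach, assuming a hypothetical table t and a counter table t_count that the application keeps in step with its own inserts and deletes:

CREATE TABLE t_count (row_count INT UNSIGNED NOT NULL) TYPE = InnoDB;
INSERT INTO t_count VALUES (0);
-- after each insert into t:
UPDATE t_count SET row_count = row_count + 1;
-- after each delete from t:
UPDATE t_count SET row_count = row_count - 1;
-- fast count:
SELECT row_count FROM t_count;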
On Windows, InnoDB always stores database and table names internally in lowercase. To move databases in a binary format from Unix to Windows or from Windows to Unix, you should create all databases and tables using lowercase names.
For an AUTO_INCREMENT column, you must always define an index for the table, and that index must contain just the AUTO_INCREMENT column. In MyISAM tables, the AUTO_INCREMENT column may be part of a multi-column index.
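For example, the following definition (table and column names are illustrative) satisfies the InnoDB requirement because the AUTO_INCREMENT column id has an index of its own, here the primary key:

CREATE TABLE t (id INT NOT NULL AUTO_INCREMENT, name CHAR(30), PRIMARY KEY (id)) TYPE = InnoDB;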
Before MySQL 4.1.12, InnoDB does not support the AUTO_INCREMENT table option for setting the initial sequence value in an ALTER TABLE statement. Before MySQL 4.1.14, the same is true for CREATE TABLE. To set the value with InnoDB, insert a dummy row with a value one less and delete that dummy row, or insert the first row with an explicit value specified.
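A sketch of the dummy-row workaround for the hypothetical table t above, assuming the sequence should start at 1000:

INSERT INTO t (id) VALUES (999);   -- dummy row, one less than the desired start
DELETE FROM t WHERE id = 999;      -- remove the dummy row
INSERT INTO t (id, name) VALUES (NULL, 'first real row');  -- receives id 1000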
While initializing a previously specified AUTO_INCREMENT column on a table, InnoDB sets an exclusive lock on the end of the index associated with the AUTO_INCREMENT column. In accessing the auto-increment counter, InnoDB uses a specific AUTO-INC table lock mode where the lock lasts only to the end of the current SQL statement, not to the end of the entire transaction. Other clients cannot insert into the table while the AUTO-INC table lock is held; see Section 13.2.5.3, “AUTO_INCREMENT Handling in InnoDB”.
When you restart the MySQL server, InnoDB may reuse an old value that was generated for an AUTO_INCREMENT column but never stored (that is, a value that was generated during an old transaction that was rolled back).
When an AUTO_INCREMENT column runs out of values, InnoDB wraps a BIGINT to -9223372036854775808 and a BIGINT UNSIGNED to 1. However, BIGINT values have 64 bits, so if you were to insert one million rows per second, it would still take nearly three hundred thousand years before BIGINT reached its upper bound.
With all other integer type columns, a duplicate-key error results. This is similar to how MyISAM works, because it is mostly general MySQL behavior and not specific to any particular storage engine.
DELETE FROM tbl_name does not regenerate the table but instead deletes all rows, one by one. Under some conditions, TRUNCATE tbl_name for an InnoDB table is mapped to DELETE FROM tbl_name and does not reset the AUTO_INCREMENT counter. See Section 12.2.9, “TRUNCATE TABLE Syntax”.
Before MySQL 4.0.14 or 4.1.0, if you tried to create a unique index on a prefix of a column, you got an error:
CREATE TABLE T (A CHAR(20), B INT, UNIQUE (A(5))) TYPE = InnoDB;
If you created a nonunique index on a prefix of a column, InnoDB created an index over the whole column. These restrictions were removed in MySQL 4.0.14 and 4.1.0.
Before MySQL 4.0.20 or 4.1.2, the MySQL LOCK TABLES operation does not know about InnoDB row-level locks set by completed SQL statements. This means that you can get a table lock on a table even if there still exist transactions by other users that hold row-level locks on the same table. Thus, your operations on the table may have to wait if they collide with these locks of other users. A deadlock is also possible. However, this does not endanger transaction integrity, because the row-level locks set by InnoDB always take care of the integrity. Also, a table lock prevents other transactions from acquiring more row-level locks (in a conflicting lock mode) on the table.
Beginning with MySQL 4.0.20 and 4.1.2, the MySQL LOCK TABLES operation acquires two locks on each table if innodb_table_locks=1 (the default). In addition to a table lock on the MySQL layer, it also acquires an InnoDB table lock. Older versions of MySQL do not acquire InnoDB table locks. Beginning with MySQL 4.0.22 and 4.1.7, the old behavior can be selected by setting innodb_table_locks=0. If no InnoDB table lock is acquired, LOCK TABLES completes even if some records of the tables are being locked by other transactions.
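For example, the old behavior can be selected before issuing the lock (in MySQL 4.0.22/4.1.7 and up); the table name is illustrative:

SET innodb_table_locks = 0;
LOCK TABLES t WRITE;
-- ... statements ...
UNLOCK TABLES;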
All InnoDB locks held by a transaction are released when the transaction is committed or aborted. Thus, it does not make much sense to invoke LOCK TABLES on InnoDB tables in autocommit = 1 mode, because the acquired InnoDB table locks would be released immediately.
Sometimes it would be useful to lock further tables in the course of a transaction. Unfortunately, LOCK TABLES in MySQL performs an implicit COMMIT and UNLOCK TABLES. An InnoDB variant of LOCK TABLES has been planned that can be executed in the middle of a transaction.
Before MySQL 3.23.52, replication always ran with autocommit enabled. Therefore, consistent reads on the slave would also see partially processed transactions, so such reads were not really consistent on the slave. This restriction was removed in MySQL 3.23.52.
The LOAD TABLE FROM MASTER statement for setting up replication slave servers does not work for InnoDB tables. A workaround is to alter the table to MyISAM on the master, then do the load, and after that alter the master table back to InnoDB. Do not do this if the tables use InnoDB-specific features such as foreign keys.
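The workaround looks roughly like this for a hypothetical table t (assuming t uses no InnoDB-specific features):

-- on the master:
ALTER TABLE t TYPE = MyISAM;
-- on the slave:
LOAD TABLE t FROM MASTER;
-- on the master, after the load:
ALTER TABLE t TYPE = InnoDB;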
The default database page size in InnoDB is 16KB. By recompiling the code, you can set it to values ranging from 8KB to 64KB. You must update the values of UNIV_PAGE_SIZE and UNIV_PAGE_SIZE_SHIFT in the univ.i source file. Changing the page size is not a supported operation and there is no guarantee that InnoDB will function normally with a page size other than 16KB. Problems compiling or running InnoDB may occur.
A version of InnoDB built for one page size cannot use data files or log files from a version built for a different page size.
You cannot create a table with a column name that matches the name of an internal InnoDB column (including DB_ROW_ID, DB_TRX_ID, DB_ROLL_PTR, and DB_MIX_ID). In versions of MySQL before 4.1.19 this caused a crash; since 4.1.19 the server reports error 1005 and refers to error -1 in the error message. This limitation applies only to use of the names in uppercase.
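For example, the following statement is refused (or crashed the server before 4.1.19) because the column name matches an internal InnoDB column in uppercase; a lowercase db_row_id would be accepted:

CREATE TABLE t (DB_ROW_ID INT) TYPE = InnoDB;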
InnoDB has a limit of 1023 concurrent transactions that have created undo records by modifying data. Workarounds include keeping transactions as small and fast as possible and delaying changes until near the end of the transaction. Applications should commit transactions before doing time-consuming client-side operations.