This section contains unified change history highlights for all
MySQL Cluster releases based on version 6.3 of the
NDBCLUSTER
storage engine through
MySQL Cluster NDB 6.3.34. Included are all
changelog entries in the categories MySQL
Cluster, Disk Data, and
Cluster API.
For an overview of features that were added in MySQL Cluster NDB 6.3, see Section 17.1.4.4, “MySQL Cluster Development in MySQL Cluster NDB 6.3”.
Changes in MySQL Cluster NDB 6.3.30 (5.1.39-ndb-6.3.30)
Functionality added or changed:
Added multi-threaded ordered index building capability during
system restarts or node restarts, controlled by the
BuildIndexThreads
data node configuration
parameter (also introduced in this release).
Changes in MySQL Cluster NDB 6.3.29 (5.1.39-ndb-6.3.29)
Functionality added or changed:
This enhanced functionality is supported for upgrades to MySQL
Cluster NDB 7.0 when the NDB
engine
version is 7.0.10 or later.
(Bug#48528, Bug#49163)
The output from ndb_config
--configinfo
--xml
now indicates, for each
configuration parameter, the following restart type information:
Whether a system restart or a node restart is required when resetting that parameter;
Whether cluster nodes need to be restarted using the
--initial
option when resetting the
parameter.
Bugs fixed:
Node takeover during a system restart occurs when the REDO log for one or more data nodes is out of date, so that a node restart is invoked for that node or those nodes. If this happens while a mysqld process is attached to the cluster as an SQL node, the mysqld takes a global schema lock (a row lock), while trying to set up cluster-internal replication.
However, this setup process could fail, causing the global schema lock to be held for an excessive length of time, which made the node restart hang as well. As a result, the mysqld failed to set up cluster-internal replication, which led to tables being read-only, and caused one node to hang during the restart.
This issue could actually occur in MySQL Cluster NDB 7.0 only, but the fix was also applied to MySQL Cluster NDB 6.3, in order to keep the two codebases in alignment.
Sending SIGHUP
to a mysqld
running with the --ndbcluster
and
--log-bin
options caused the
process to crash instead of refreshing its log files.
(Bug#49515)
If the master data node receiving a request from a newly-started API or data node for a node ID died before the request had been handled, the management server waited (and held a mutex) until all handling of this node failure was complete before responding to any other connections, instead of responding to other connections as soon as it was informed of the node failure (that is, it waited until it had received a NF_COMPLETEREP signal rather than a NODE_FAILREP signal). One visible effect of this misbehavior was that it caused management client commands such as SHOW and ALL STATUS to respond with unnecessary slowness in such circumstances. (Bug#49207)
When evaluating the options --include-databases, --include-tables, --exclude-databases, and --exclude-tables, the ndb_restore program overwrote the result of the database-level options with the result of the table-level options rather than merging these results together, sometimes leading to unexpected and unpredictable results.
As part of the fix for this problem, the semantics of these options have been clarified; because of this, the rules governing their evaluation have changed slightly. These changes can be summed up as follows:
All --include-*
and
--exclude-*
options are now evaluated from
right to left in the order in which they are passed to
ndb_restore.
All --include-*
and
--exclude-*
options are now cumulative.
In the event of a conflict, the first (rightmost) option takes precedence.
For more detailed information and examples, see Section 17.4.17, “ndb_restore — Restore a MySQL Cluster Backup”. (Bug#48907)
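The clarified evaluation rules can be sketched as follows. This is an illustrative model only, not ndb_restore source code; the option handling and matching shown here are assumptions based on the rules stated above.

```python
# Sketch of the clarified --include-*/--exclude-* semantics: options are
# evaluated right to left, and the first (rightmost) matching option wins.
# Illustrative only; not the actual ndb_restore implementation.

def is_restored(db, table, options):
    """options: list of (kind, pattern) in command-line order, where kind is
    'include-databases', 'exclude-databases', 'include-tables', or
    'exclude-tables'. Table-level patterns are given as 'db.table'.
    Returns True if db.table should be restored."""
    for kind, pattern in reversed(options):  # rightmost option first
        if kind.endswith('databases'):
            matched = (pattern == db)
        else:
            matched = (pattern == f"{db}.{table}")
        if matched:
            return kind.startswith('include')
    # If nothing matched, the presence of any include-* option implies
    # everything else is excluded.
    return not any(k.startswith('include') for k, _ in options)

opts = [('include-databases', 'db1'), ('exclude-tables', 'db1.t1')]
assert is_restored('db1', 't1', opts) is False  # rightmost match: exclude
assert is_restored('db1', 't2', opts) is True   # matched by include-databases
```

With these rules, `--include-databases=db1 --exclude-tables=db1.t1` restores everything in db1 except t1, since the table-level exclude is the rightmost matching option for that table.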
When performing tasks that generated large amounts of I/O (such as when using ndb_restore), an internal memory buffer could overflow, causing data nodes to fail with signal 6.
Subsequent analysis showed that this buffer was not actually required, so this fix removes it. (Bug#48861)
Exhaustion of send buffer memory or long signal memory caused data nodes to crash. Now an appropriate error message is provided instead when this situation occurs. (Bug#48852)
Under certain conditions, accounting of the number of free scan records in the local query handler could be incorrect, so that during node recovery or local checkpoint operations, the LQH could find itself lacking a scan record that it expected to find, causing the node to crash. (Bug#48697)
See also Bug#48564.
The creation of an ordered index on a table undergoing DDL operations could cause a data node crash under certain timing-dependent conditions. (Bug#48604)
During an LCP master takeover, when the newly elected master did
not receive a COPY_GCI
LCP protocol message
but other nodes participating in the local checkpoint had
received one, the new master could use an uninitialized
variable, which caused it to crash.
(Bug#48584)
When running many parallel scans, a local checkpoint (which performs a scan internally) could find itself not getting a scan record, which led to a data node crash. Now an extra scan record is reserved for this purpose, and a problem with obtaining the scan record returns an appropriate error (error code 489, Too many active scans). (Bug#48564)
During a node restart, logging was enabled on a per-fragment
basis as the copying of each fragment was completed but local
checkpoints were not enabled until all fragments were copied,
making it possible to run out of redo log file space
(NDB
error code 410) before the
restart was complete. Now logging is enabled only after all
fragments have been copied, just prior to enabling local
checkpoints.
(Bug#48474)
When employing NDB
native backup to
back up and restore an empty NDB
table that used a non-sequential
AUTO_INCREMENT
value, the
AUTO_INCREMENT
value was not restored
correctly.
(Bug#48005)
ndb_config --xml --configinfo now indicates that parameters belonging in the [SCI], [SCI DEFAULT], [SHM], and [SHM DEFAULT] sections of the config.ini file are deprecated or experimental, as appropriate.
(Bug#47365)
NDB
stores blob column data in a
separate, hidden table that is not accessible from MySQL. If
this table was missing for some reason (such as accidental
deletion of the file corresponding to the hidden table) when
making a MySQL Cluster native backup, ndb_restore crashed when
attempting to restore the backup. Now in such cases, ndb_restore
fails with the error message Table table_name has blob column (column_name) with missing parts table in backup instead.
(Bug#47289)
DROP DATABASE
failed when there
were stale temporary NDB
tables in
the database. This situation could occur if
mysqld crashed during execution of a
DROP TABLE
statement after the
table definition had been removed from
NDBCLUSTER
but before the
corresponding .ndb
file had been removed
from the crashed SQL node's data directory. Now, when mysqld executes DROP DATABASE, it checks for these files and removes them if there are no corresponding table definitions for them found in NDBCLUSTER.
(Bug#44529)
Creating an NDB
table with an
excessive number of large BIT
columns caused the cluster to fail. Now, an attempt to create
such a table is rejected with error 791 (Too many
total bits in bitfields).
(Bug#42046)
See also Bug#42047.
When a long-running transaction lasting long enough to cause
Error 410 (REDO log files overloaded) was
later committed or rolled back, it could happen that
NDBCLUSTER
was not able to release
the space used for the REDO log, so that the error condition
persisted indefinitely.
The most likely cause of such transactions is a bug in the application using MySQL Cluster. This fix should handle most cases where this might occur. (Bug#36500)
Deprecation and usage information obtained from
ndb_config --configinfo
regarding the PortNumber
and
ServerPort
configuration parameters was
improved.
(Bug#24584)
Disk Data: When running a write-intensive workload with a very large disk page buffer cache, CPU usage approached 100% during a local checkpoint of a cluster containing Disk Data tables. (Bug#49532)
Disk Data: Repeatedly creating and then dropping Disk Data tables could eventually lead to data node failures. (Bug#45794, Bug#48910)
Disk Data:
When the FileSystemPathUndoFiles
configuration parameter was set to a non-existent path, the
data nodes shut down with the generic error code 2341
(Internal program error). Now in such
cases, the error reported is error 2815 (File not
found).
Cluster API:
When a DML operation failed due to a uniqueness violation on an
NDB
table having more than one
unique index, it was difficult to determine which constraint
caused the failure; it was necessary to obtain an
NdbError
object, then decode its
details property, which could lead to
memory management issues in application code.
To help solve this problem, a new API method
Ndb::getNdbErrorDetail()
is added, providing
a well-formatted string containing more precise information
about the index that caused the unique constraint violation. The
following additional changes are also made in the NDB API:
Use of NdbError.details
is now deprecated
in favor of the new method.
The NdbDictionary::listObjects()
method
has been modified to provide more information.
For more information, see Ndb::getNdbErrorDetail(), The NdbError Structure, and Dictionary::listObjects().
(Bug#48851)
Cluster API:
When using blobs, calling getBlobHandle()
requires the full key to have been set using
equal()
, because
getBlobHandle()
must access the key for
adding blob table operations. However, if
getBlobHandle()
was called without first
setting all parts of the primary key, the application using it
crashed. Now, an appropriate error code is returned instead.
(Bug#28116, Bug#48973)
Changes in MySQL Cluster NDB 6.3.28a (5.1.39-ndb-6.3.28a)
Bugs fixed:
When the combined length of all names of tables using the
NDB
storage engine was greater than
or equal to 1024 bytes, issuing the START
BACKUP
command in the ndb_mgm
client caused the cluster to crash.
(Bug#48531)
Changes in MySQL Cluster NDB 6.3.27 (5.1.37-ndb-6.3.27)
Functionality added or changed:
Disk Data:
Two new columns have been added to the output of
ndb_desc to make it possible to determine how
much of the disk space allocated to a given table or fragment
remains free. (This information is not available from the
INFORMATION_SCHEMA.FILES
table,
since the FILES
table applies only
to Disk Data files.) For more information, see
Section 17.4.9, “ndb_desc — Describe NDB Tables”.
(Bug#47131)
Bugs fixed:
mysqld allocated an excessively large buffer
for handling BLOB
values due to
overestimating their size. (For each row, enough space was
allocated to accommodate every
BLOB
or
TEXT
column value in the result
set.) This could adversely affect performance when using tables
containing BLOB
or
TEXT
columns; in a few extreme
cases, this issue could also cause the host system to run out of
memory unexpectedly.
(Bug#47574)
NDBCLUSTER
uses a dynamically-allocated
buffer to store BLOB
or
TEXT
column data that is read
from rows in MySQL Cluster tables.
When an instance of the NDBCLUSTER
table
handler was recycled (this can happen due to table definition
cache pressure or to operations such as
FLUSH TABLES
or
ALTER TABLE
), if the last row
read contained blobs of zero length, the buffer was not freed,
even though the reference to it was lost. This resulted in a
memory leak.
For example, consider the table defined and populated as shown here:
CREATE TABLE t (a INT PRIMARY KEY, b LONGTEXT) ENGINE=NDB;
INSERT INTO t VALUES (1, REPEAT('F', 20000));
INSERT INTO t VALUES (2, '');
Now repeatedly execute a SELECT
on this table, such that the zero-length
LONGTEXT
row is
last, followed by a FLUSH
TABLES
statement (which forces the handler object to
be re-used), as shown here:
SELECT a, length(b) FROM t ORDER BY a;
FLUSH TABLES;
Prior to the fix, this resulted in a memory leak proportional to
the size of the stored
LONGTEXT
value
each time these two statements were executed.
(Bug#47573)
Large transactions involving joins between tables containing
BLOB
columns used excessive
memory.
(Bug#47572)
A variable was left uninitialized while a data node copied data from its peers as part of its startup routine; if the starting node died during this phase, this could lead to a crash of the cluster when the node was later restarted. (Bug#47505)
When a data node restarts, it first runs the redo log until
reaching the latest restorable global checkpoint; after this it
scans the remainder of the redo log file, searching for entries
that should be invalidated so they are not used in any
subsequent restarts. (It is possible, for example, if restoring
GCI number 25, that there might be entries belonging to GCI 26
in the redo log.) However, under certain rare conditions, during
the invalidation process, the redo log files themselves were not
always closed while scanning ahead in the redo log. In rare
cases, this could lead to MaxNoOfOpenFiles
being exceeded, causing the data node to crash.
(Bug#47171)
For very large values of MaxNoOfTables
+
MaxNoOfAttributes
, the calculation for
StringMemory
could overflow when creating
large numbers of tables, leading to NDB error 773
(Out of string memory, please modify StringMemory
config parameter), even when
StringMemory
was set to
100
(100 percent).
(Bug#47170)
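The failure mode described here is a classic unsigned 32-bit wraparound. The following sketch is illustrative only; the per-object cost and formula are invented for the example and are not the actual StringMemory computation.

```python
# Illustrative sketch: how a size calculation done in unsigned 32-bit
# arithmetic can wrap around for very large parameter values, producing a
# far-too-small memory budget. The formula is hypothetical, not NDB source.

U32 = 2 ** 32

def string_memory_bytes_u32(max_tables, max_attributes, percent):
    estimated = (max_tables + max_attributes) * 1000  # hypothetical per-object cost
    return (estimated * percent // 100) % U32         # wraps like a 32-bit uint

# Moderate values behave as expected:
assert string_memory_bytes_u32(1000, 1000, 100) == 2_000_000
# Extreme values wrap around, even at StringMemory = 100 (100 percent):
assert string_memory_bytes_u32(5_000_000, 5_000_000, 100) == 1_410_065_408
```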
The default value for the StringMemory
configuration parameter, unlike other MySQL Cluster
configuration parameters, was not set in
ndb/src/mgmsrv/ConfigInfo.cpp
.
(Bug#47166)
Signals from a failed API node could be received after an
API_FAILREQ
signal (see
Operations and Signals)
had been received from that node, which could result in invalid
states for processing subsequent signals. Now, all pending
signals from a failing API node are processed before any
API_FAILREQ
signal is received.
(Bug#47039)
See also Bug#44607.
Using triggers on NDB
tables caused
ndb_autoincrement_prefetch_sz
to be treated as having the NDB kernel's internal default
value (32) and the value for this variable as set on the
cluster's SQL nodes to be ignored.
(Bug#46712)
Running an ALTER TABLE
statement
while an NDB backup was in progress caused
mysqld to crash.
(Bug#44695)
When performing auto-discovery of tables on individual SQL
nodes, NDBCLUSTER
attempted to overwrite
existing MyISAM
.frm
files and corrupted them.
Workaround.
In the mysql client, create a new table
(t2
) with same definition as the corrupted
table (t1
). Use your system shell or file
manager to rename the old .MYD
file to
the new file name (for example, mv t1.MYD
t2.MYD). In the mysql client,
repair the new table, drop the old one, and rename the new
table using the old file name (for example,
RENAME TABLE t2
TO t1
).
Running ndb_restore with the
--print
or --print_log
option
could cause it to crash.
(Bug#40428, Bug#33040)
An insert on an NDB
table was not
always flushed properly before performing a scan. One way in
which this issue could manifest was that
LAST_INSERT_ID()
sometimes failed
to return correct values when using a trigger on an
NDB
table.
(Bug#38034)
When a data node received a TAKE_OVERTCCONF
signal from the master before that node had received a
NODE_FAILREP
, a race condition could in
theory result.
(Bug#37688)
Some joins on large NDB
tables
having TEXT
or
BLOB
columns could cause
mysqld processes to leak memory. The joins
did not need to reference the
TEXT
or
BLOB
columns directly for this
issue to occur.
(Bug#36701)
On Mac OS X 10.5, commands entered in the management client
failed and sometimes caused the client to hang, although
management client commands invoked using the
--execute
(or
-e
) option from the system shell worked
normally.
For example, the following command failed with an error and hung until killed manually, as shown here:
ndb_mgm> SHOW
Warning, event thread startup failed, degraded printouts as result, errno=36^C
However, the same management client command, invoked from the system shell as shown here, worked correctly:
shell> ndb_mgm -e "SHOW"
See also Bug#34438.
Disk Data: Calculation of free space for Disk Data table fragments was sometimes done incorrectly. This could lead to unnecessary allocation of new extents even when sufficient space was available in existing ones for inserted data. In some cases, this might also lead to crashes when restarting data nodes.
This miscalculation was not reflected in the contents of the
INFORMATION_SCHEMA.FILES
table,
as it applied to extents allocated to a fragment, and not to a
file.
Cluster API:
In some circumstances, if an API node encountered a data node
failure between the creation of a transaction and the start of a
scan using that transaction, then any subsequent calls to
startTransaction()
and
closeTransaction()
could cause the same
transaction to be started and closed repeatedly.
(Bug#47329)
Cluster API:
Performing multiple operations using the same primary key within
the same
NdbTransaction::execute()
call could lead to a data node crash.
This fix does not change the fact that performing
multiple operations using the same primary key within the same
execute()
is not supported; because there
is no way to determine the order of such operations, the
result of such combined operations remains undefined.
See also Bug#44015.
Changes in MySQL Cluster NDB 6.3.26 (5.1.35-ndb-6.3.26)
Functionality added or changed:
On Solaris platforms, the MySQL Cluster management server and
NDB API applications now use CLOCK_REALTIME
as the default clock.
(Bug#46183)
A new option --exclude-missing-columns
has been
added for the ndb_restore program. In the
event that any tables in the database or databases being
restored to have fewer columns than the same-named tables in the
backup, the extra columns in the backup's version of the
tables are ignored. For more information, see
Section 17.4.17, “ndb_restore — Restore a MySQL Cluster Backup”.
(Bug#43139)
This issue, originally resolved in MySQL 5.1.16, re-occurred due to a later (unrelated) change. The fix has been re-applied.
Bugs fixed:
Restarting the cluster following a local checkpoint and an
online ALTER TABLE
on a non-empty
table caused data nodes to crash.
(Bug#46651)
Full table scans failed to execute when the cluster contained more than 21 table fragments.
The number of table fragments in the cluster can be calculated
as the number of data nodes, times 8 (that is, times the value
of the internal constant
MAX_FRAG_PER_NODE
), divided by the number
of replicas. Thus, when NoOfReplicas = 1, at least 3 data nodes were required to trigger this issue, and when NoOfReplicas = 2, at least 6 data nodes were required to do so.
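The fragment-count arithmetic described above can be sketched directly; this is a simplified model using the internal constant MAX_FRAG_PER_NODE = 8 mentioned in the entry.

```python
# Sketch of the fragment-count formula from this entry (simplified model):
# fragments = data_nodes * MAX_FRAG_PER_NODE / replicas.

MAX_FRAG_PER_NODE = 8

def table_fragments(data_nodes, replicas):
    return data_nodes * MAX_FRAG_PER_NODE // replicas

# The failure required more than 21 table fragments in the cluster:
assert table_fragments(3, 1) == 24  # 3 nodes, 1 replica: affected
assert table_fragments(2, 2) == 8   # 2 nodes, 2 replicas: well below threshold
```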
Killing MySQL Cluster nodes immediately following a local checkpoint could lead to a crash of the cluster when later attempting to perform a system restart.
The exact sequence of events causing this issue was as follows:
Local checkpoint occurs.
Immediately following the LCP, kill the master data node.
Kill the remaining data nodes within a few seconds of killing the master.
Attempt to restart the cluster.
Ending a line in the config.ini
file with
an extra semicolon character (;
) caused
reading the file to fail with a parsing error.
(Bug#46242)
When combining an index scan and a delete with a primary key delete, the index scan and delete failed to initialize a flag properly. This could in rare circumstances cause a data node to crash. (Bug#46069)
OPTIMIZE TABLE
on an
NDB
table could in some cases cause
SQL and data nodes to crash. This issue was observed with both
ndbd and ndbmtd.
(Bug#45971)
The AutoReconnect
configuration parameter for
API nodes (including SQL nodes) has been added. This is intended
to prevent API nodes from re-using allocated node IDs during
cluster restarts. For more information, see
Section 17.3.2.7, “Defining SQL and Other API Nodes in a MySQL Cluster”.
This fix also introduces two new methods of the
Ndb_cluster_connection
class in the NDB API.
For more information, see Ndb_cluster_connection::set_auto_reconnect() and Ndb_cluster_connection::get_auto_reconnect().
(Bug#45921)
The signals used by ndb_restore to send progress information about backups to the cluster log accessed the cluster transporter without using any locks. Because of this, it was theoretically possible that these signals could be interfered with by heartbeat signals if both were sent at the same time, causing the ndb_restore messages to be corrupted. (Bug#45646)
Problems could arise when using
VARCHAR
columns
whose size was greater than 341 characters and which used the
utf8_unicode_ci
collation. In some cases,
this combination of conditions could cause certain queries and
OPTIMIZE TABLE
statements to
crash mysqld.
(Bug#45053)
An internal NDB API buffer was not properly initialized. (Bug#44977)
When a data node had written its GCI marker to the first page of a megabyte, and that node was later killed during restart after having processed that page (marker) but before completing an LCP, the data node could fail with filesystem errors. (Bug#44952)
The warning message Possible bug in Dbdih::execBLOCK_COMMIT_ORD ... could sometimes appear in the cluster log. This warning is obsolete, and has been removed. (Bug#44563)
In some cases, OPTIMIZE TABLE
on
an NDB
table did not free any
DataMemory
.
(Bug#43683)
If the cluster crashed during the execution of a
CREATE LOGFILE GROUP
statement,
the cluster could not be restarted afterwards.
(Bug#36702)
See also Bug#34102.
Disk Data: Partitioning:
An NDBCLUSTER
table created with a
very large value for the MAX_ROWS
option
could — if this table was dropped and a new table with
fewer partitions, but having the same table ID, was created
— cause ndbd to crash when performing a
system restart. This was because the server attempted to examine
each partition whether or not it actually existed.
(Bug#45154)
Disk Data:
If the value set in the config.ini
file for
FileSystemPathDD
,
FileSystemPathDataFiles
, or
FileSystemPathUndoFiles
was identical to the value set for FileSystemPath, that parameter was ignored when starting the data node with the --initial option. As a result, the Disk Data
files in the corresponding directory were not removed when
performing an initial start of the affected data node or data
nodes.
(Bug#46243)
Disk Data: During a checkpoint, restore points are created for both the on-disk and in-memory parts of a Disk Data table. Under certain rare conditions, the in-memory restore point could include or exclude a row that should have been in the snapshot. This would later lead to a crash during or following recovery. (Bug#41915)
See also Bug#47832.
Changes in MySQL Cluster NDB 6.3.25 (5.1.34-ndb-6.3.25)
Functionality added or changed:
Two new server status variables
Ndb_scan_count
and
Ndb_pruned_scan_count
have
been introduced.
Ndb_scan_count
gives the
number of scans executed since the cluster was last started.
Ndb_pruned_scan_count
gives
the number of scans for which
NDBCLUSTER
was able to use
partition pruning. Together, these variables can be used in the MySQL server to help determine whether table scans are pruned by NDBCLUSTER.
(Bug#44153)
The ndb_config utility program can now
provide an offline dump of all MySQL Cluster configuration
parameters including information such as default and permitted
values, brief description, and applicable section of the
config.ini
file. A dump in text format is
produced when running ndb_config with the new
--configinfo
option, and in XML format when the
options --configinfo --xml
are used together.
For more information and examples, see
Section 17.4.6, “ndb_config — Extract MySQL Cluster Configuration Information”.
Bugs fixed:
Important Change: Partitioning:
User-defined partitioning of an
NDBCLUSTER
table without any
primary key sometimes failed, and could cause
mysqld to crash.
Now, if you wish to create an
NDBCLUSTER
table with user-defined
partitioning, the table must have an explicit primary key, and
all columns listed in the partitioning expression must be part
of the primary key. The hidden primary key used by the
NDBCLUSTER
storage engine is not
sufficient for this purpose. However, if the list of columns is
empty (that is, the table is defined using PARTITION BY
[LINEAR] KEY()
), then no explicit primary key is
required.
This change does not affect the partitioning of tables using any
storage engine other than
NDBCLUSTER
.
(Bug#40709)
Important Change:
Previously, the configuration parameter
NoOfReplicas
had no default value. Now the
default for NoOfReplicas
is 2, which is the
recommended value in most settings.
(Bug#44746)
Packaging:
The pkg
installer for MySQL Cluster on
Solaris did not perform a complete installation due to an
invalid directory reference in the post-install script.
(Bug#41998)
When ndb_config could not find the file
referenced by the --config-file
option, it
tried to read my.cnf
instead, then failed
with a misleading error message.
(Bug#44846)
When a data node was down so long that its most recent local checkpoint depended on a global checkpoint that was no longer restorable, it was possible for it to be unable to use optimized node recovery when being restarted later. (Bug#44844)
See also Bug#26913.
ndb_config
--xml
did not output any entries for the HostName
parameter. In addition, the default listed for
MaxNoOfFiles
was outside the allowed range of
values.
(Bug#44749)
The output of ndb_config
--xml
did not provide information about all sections of the
configuration file.
(Bug#44685)
Inspection of the code revealed that several assignment
operators (=
) were used in place of
comparison operators (==
) in
DbdihMain.cpp
.
(Bug#44567)
See also Bug#44570.
It was possible for NDB API applications to insert corrupt data into the database, which could subsequently lead to data node crashes. Now, stricter checking is enforced on input data for inserts and updates. (Bug#44132)
ndb_restore failed when trying to restore data on a big-endian machine from a backup file created on a little-endian machine. (Bug#44069)
The file ndberror.c
contained a C++-style
comment, which caused builds to fail with some C compilers.
(Bug#44036)
When trying to use a data node with an older version of the management server, the data node crashed on startup. (Bug#43699)
In some cases, data node restarts during a system restart could fail due to insufficient redo log space. (Bug#43156)
NDBCLUSTER
did not build correctly
on Solaris 9 platforms.
(Bug#39080)
ndb_restore --print_data
did
not handle DECIMAL
columns
correctly.
(Bug#37171)
The output of ndbd --help
did not provide clear information about the program's
--initial
and --initial-start
options.
(Bug#28905)
It was theoretically possible for the value of a nonexistent
column to be read as NULL
, rather than
causing an error.
(Bug#27843)
Disk Data: This fix supersedes and improves on an earlier fix made for this bug in MySQL 5.1.18. (Bug#24521)
Changes in MySQL Cluster NDB 6.3.24 (5.1.32-ndb-6.3.24)
Bugs fixed:
Cluster Replication: If a data node failed during an event creation operation, there was a slight risk that a surviving data node could send an invalid table reference back to NDB, causing the operation to fail with a false Error 723 (No such table). This could take place when a data node failed as a mysqld process was setting up MySQL Cluster Replication. (Bug#43754)
Cluster API: Partition pruning did not work correctly for queries involving multiple range scans.
As part of the fix for this issue, several improvements have
been made in the NDB API, including the addition of a new
NdbScanOperation::getPruned()
method, a new
variant of NdbIndexScanOperation::setBound()
,
and a new Ndb::PartitionSpec
data structure.
For more information about these changes, see
NdbScanOperation::getPruned()
,
NdbIndexScanOperation::setBound
, and
The PartitionSpec
Structure.
(Bug#37934)
TransactionDeadlockDetectionTimeout
values
less than 100 were treated as 100. This could cause scans to
time out unexpectedly.
(Bug#44099)
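The pre-fix behavior described here amounts to a silent lower clamp on the configured value; a minimal sketch of that model (assumed simplification, not the NDB source):

```python
# Sketch of the pre-fix clamping behavior described in this entry:
# configured values below 100 were silently treated as 100.

def effective_tdd_timeout(configured, minimum=100):
    return max(configured, minimum)

assert effective_tdd_timeout(50) == 100    # value under 100 treated as 100
assert effective_tdd_timeout(5000) == 5000 # larger values unaffected
```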
A race condition could occur when a data node failed to restart just before being included in the next global checkpoint. This could cause other data nodes to fail. (Bug#43888)
TimeBetweenLocalCheckpoints
was measured from
the end of one local checkpoint to the beginning of the next,
rather than from the beginning of one LCP to the beginning of
the next. This meant that the time spent performing the LCP was
not taken into account when determining the
TimeBetweenLocalCheckpoints
interval, so that
LCPs were not started often enough, possibly causing data nodes
to run out of redo log space prematurely.
(Bug#43567)
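The scheduling change can be sketched as follows; this is an assumed simplified model of the interval measurement, not the actual Dbdih code.

```python
# Sketch of the TimeBetweenLocalCheckpoints fix: the interval should run
# from the start of one LCP to the start of the next, so that time spent
# performing the LCP itself is taken into account.

def next_lcp_start(prev_start, prev_end, interval, measure_from_start=True):
    base = prev_start if measure_from_start else prev_end
    return base + interval

# An LCP starting at t=0 that takes 40 time units, with a 60-unit interval:
assert next_lcp_start(0, 40, 60, measure_from_start=False) == 100  # old: too late
assert next_lcp_start(0, 40, 60, measure_from_start=True) == 60    # fixed behavior
```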
Using indexes containing variable-sized columns could lead to internal errors when the indexes were being built. (Bug#43226)
When a data node process had been killed after allocating a node ID, but before making contact with any other data node processes, it was not possible to restart it due to a node ID allocation failure.
This issue could affect either ndbd or ndbmtd processes. (Bug#43224)
This regression was introduced by Bug#42973.
Some queries using combinations of logical and comparison
operators on an indexed column in the WHERE
clause could fail with the error Got error 4541
'IndexBound has no bound information' from
NDBCLUSTER.
(Bug#42857)
ndb_restore crashed when trying to restore a backup made to a MySQL Cluster running on a platform having different endianness from that on which the original backup was taken. (Bug#39540)
When aborting an operation involving both an insert and a delete, the insert and delete were aborted separately. This was because the transaction coordinator did not know that the operations affected the same row, and, in the case of a committed-read (tuple or index) scan, the abort of the insert was performed first, then the row was examined after the insert was aborted but before the delete was aborted. In some cases, this would leave the row in an inconsistent state. This could occur when a local checkpoint was performed during a backup. This issue did not affect primary key operations or scans that used locks (these are serialized).
After this fix, for ordered indexes, all operations that follow the operation to be aborted are now also aborted.
Disk Data: When a log file group had an undo log file whose size was too small, restarting data nodes failed with Read underflow errors.
As a result of this fix, the minimum allowed
INITIAL_SIZE
for an undo log file is now
1M
(1 megabyte).
(Bug#29574)
Cluster API:
If the largest offset of a
RecordSpecification
used for an
NdbRecord
object was for the NULL
bits (and thus not a
column), this offset was not taken into account when calculating
the size used for the RecordSpecification
.
This meant that the space for the NULL
bits
could be overwritten by key or other information.
(Bug#43891)
Cluster API:
BIT
columns created using the
native NDB API format that were not created as nullable could
still sometimes be overwritten, or cause other columns to be
overwritten.
This issue did not affect tables having
BIT
columns created using the
mysqld format (always used by MySQL Cluster SQL nodes).
(Bug#43802)
Cluster API:
The default NdbRecord
structures created by
NdbDictionary
could have overlapping null
bits and data fields.
(Bug#43590)
Cluster API:
When performing insert or write operations,
NdbRecord
allows key columns to be specified
in both the key record and in the attribute record. Only one key
column value for each key column should be sent to the NDB
kernel, but this was not guaranteed. This is now ensured as
follows: For insert and write operations, key column values are
taken from the key record; for scan takeover update operations,
key column values are taken from the attribute record.
(Bug#42238)
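The selection rule described above can be sketched as a small dispatch function. This is an illustrative Python sketch of the rule only, not NDB API code; the operation names and record arguments are invented for the example.

```python
def key_value_source(op_type, key_record, attr_record):
    """Pick which record supplies key column values, per the fix:
    insert and write operations take key values from the key record,
    while scan takeover updates take them from the attribute record,
    so only one copy of each key column reaches the NDB kernel."""
    if op_type in ("insert", "write"):
        return key_record
    if op_type == "scan_takeover_update":
        return attr_record
    raise ValueError(f"unsupported operation type: {op_type}")

# Even if a key column appears in both records, only one source is used.
print(key_value_source("insert", {"id": 1}, {"id": 2, "val": "x"}))
```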
Cluster API:
Ordered index scans using NdbRecord
formerly
expressed a BoundEQ
range as separate lower
and upper bounds, resulting in 2 copies of the column values
being sent to the NDB kernel.
Now, when a range is specified by NdbScanOperation::setBound(), the passed pointers, key lengths, and inclusive bits are compared, and only one copy of the equal key columns is sent to the kernel. This makes such operations more efficient, as only half as much KeyInfo is now sent for a BoundEQ range as before. (Bug#38793)
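The deduplication can be illustrated in miniature. A hedged Python sketch (not the actual NDB kernel or C++ API code; the function and argument names are invented for the example):

```python
def keyinfo_to_send(lower, upper, lower_inclusive, upper_inclusive):
    """Return the key-column copies sent to the kernel for a range.
    When the lower and upper bounds are identical and both inclusive
    (a BoundEQ range), only one copy of the key columns is needed."""
    if lower == upper and lower_inclusive and upper_inclusive:
        return [lower]
    return [lower, upper]

print(len(keyinfo_to_send(("a",), ("a",), True, True)))   # one copy for BoundEQ
print(len(keyinfo_to_send(("a",), ("z",), True, False)))  # two copies otherwise
```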
Changes in MySQL Cluster NDB 6.3.23 (5.1.32-ndb-6.3.23)
Functionality added or changed:
A new data node configuration parameter
MaxLCPStartDelay
has been introduced to
facilitate parallel node recovery by causing a local checkpoint
to be delayed while recovering nodes are synchronizing data
dictionaries and other meta-information. For more information
about this parameter, see
Section 17.3.2.6, “Defining MySQL Cluster Data Nodes”.
(Bug#43053)
Bugs fixed:
Performance:
Updates of the SYSTAB_0
system table to
obtain a unique identifier did not use transaction hints for
tables having no primary key. In such cases the NDB kernel used
a cache size of 1. This meant that each insert into a table not
having a primary key required an update of the corresponding
SYSTAB_0
entry, creating a potential
performance bottleneck.
With this fix, inserts on NDB tables without primary keys can under some conditions be performed up to 100% faster than previously. (Bug#39268)
Packaging:
Packages for MySQL Cluster were missing the
libndbclient.so
and
libndbclient.a
files.
(Bug#42278)
Partitioning:
Executing ALTER TABLE ... REORGANIZE
PARTITION
on an
NDBCLUSTER
table having only one
partition caused mysqld to crash.
(Bug#41945)
See also Bug#40389.
Backup IDs greater than 2^31 were not handled correctly, causing negative values to be used in backup directory names and printouts. (Bug#43042)
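The symptom is consistent with an unsigned 32-bit backup ID being formatted as a signed value. A minimal Python sketch of that reinterpretation (illustrative only; the directory-name format shown is an assumption, not the actual NDB naming scheme):

```python
def as_signed32(n):
    """Reinterpret the low 32 bits of n as a signed 32-bit integer."""
    n &= 0xFFFFFFFF
    return n - (1 << 32) if n >= (1 << 31) else n

backup_id = 2**31 + 5                        # a backup ID greater than 2^31
print(f"BACKUP-{as_signed32(backup_id)}")    # prints BACKUP--2147483643
```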
When using ndbmtd, NDB kernel threads could
hang while trying to start the data nodes with
LockPagesInMainMemory
set to 1.
(Bug#43021)
When using multiple management servers and starting several API nodes (possibly including one or more SQL nodes) whose connectstrings listed the management servers in different order, it was possible for 2 API nodes to be assigned the same node ID. When this happened it was possible for an API node not to get fully connected, consequently producing a number of errors whose cause was not easily recognizable. (Bug#42973)
ndb_error_reporter worked correctly only with GNU tar. (With other versions of tar, it produced empty archives.) (Bug#42753)
Triggers on NDBCLUSTER
tables
caused such tables to become locked.
(Bug#42751)
Given a MySQL Cluster containing no data (that is, whose data
nodes had all been started using --initial
, and
into which no data had yet been imported) and having an empty
backup directory, executing START BACKUP
with
a user-specified backup ID caused the data nodes to crash.
(Bug#41031)
In some cases, NDB
did not check
correctly whether tables had changed before trying to use the
query cache. This could result in a crash of the debug MySQL
server.
(Bug#40464)
Disk Data:
It was not possible to add an in-memory column online to a table
that used a table-level or column-level STORAGE
DISK
option. The same issue prevented ALTER
ONLINE TABLE ... REORGANIZE PARTITION
from working on
Disk Data tables.
(Bug#42549)
Disk Data: Creating a Disk Data tablespace with a very large extent size caused the data nodes to fail. The issue was observed when using extent sizes of 100 MB and larger. (Bug#39096)
Disk Data:
Trying to execute a CREATE LOGFILE
GROUP
statement using a value greater than
150M
for UNDO_BUFFER_SIZE
caused data nodes to crash.
As a result of this fix, the upper limit for
UNDO_BUFFER_SIZE
is now
600M
; attempting to set a higher value now
fails gracefully with an error.
(Bug#34102)
See also Bug#36702.
Disk Data: When attempting to create a tablespace that already existed, the error message returned was Table or index with given name already exists. (Bug#32662)
Disk Data:
Using a path or filename longer than 128 characters for Disk
Data undo log files and tablespace data files caused a number of
issues, including failures of CREATE
LOGFILE GROUP
, ALTER LOGFILE
GROUP
, CREATE
TABLESPACE
, and ALTER
TABLESPACE
statements, as well as crashes of
management nodes and data nodes.
With this fix, the maximum length for path and file names used for Disk Data undo log files and tablespace data files is now the same as the maximum for the operating system. (Bug#31769, Bug#31770, Bug#31772)
Disk Data: Attempting to perform a system restart of the cluster where there existed a logfile group without any undo log files caused the data nodes to crash.
While issuing a CREATE LOGFILE
GROUP
statement without an ADD
UNDOFILE
option fails with an error in the MySQL
server, this situation could arise if an SQL node failed
during the execution of a valid CREATE
LOGFILE GROUP
statement; it is also possible to
create a logfile group without any undo log files using the
NDB API.
Cluster API:
Some error messages from ndb_mgmd contained
newline (\n
) characters. This could break the
MGM API protocol, which uses the newline as a line separator.
(Bug#43104)
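Why an embedded newline breaks the protocol: MGM API replies are parsed line by line, so a \n inside an error message splits one logical field into two. A simplified Python illustration (the parser and message format here are assumptions for the example, not the real MGM API code):

```python
def parse_reply(raw):
    """Split a newline-delimited protocol reply into its logical lines."""
    return [line for line in raw.split("\n") if line]

clean = "result: Error\nmsg: node 2 not connected\n"
broken = "result: Error\nmsg: node 2\nnot connected\n"  # stray \n in msg

print(len(parse_reply(clean)))   # 2 fields, as intended
print(len(parse_reply(broken)))  # 3 "fields": the message was split
```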
Cluster API: When using an ordered index scan without putting all key columns in the read mask, this invalid use of the NDB API went undetected, which resulted in the use of uninitialized memory. (Bug#42591)
Changes in MySQL Cluster NDB 6.3.22 (5.1.31-ndb-6.3.22)
Functionality added or changed:
New options are introduced for ndb_restore for determining which tables or databases should be restored:
--include-tables
and
--include-databases
can be used to restore
specific tables or databases.
--exclude-tables
and
--exclude-databases
can be used to exclude
the specified tables or databases from being restored.
For more information about these options, see Section 17.4.17, “ndb_restore — Restore a MySQL Cluster Backup”. (Bug#40429)
Bugs fixed:
When performing more than 32 index or tuple scans on a single fragment, the scans could be left hanging. This caused unnecessary timeouts, and in addition could possibly lead to a hang of an LCP. (Bug#42559)
A data node failure that occurred between calls to
NdbIndexScanOperation::readTuples(SF_OrderBy)
and NdbTransaction::Execute()
was not
correctly handled; a subsequent call to
nextResult()
caused a null pointer to be
deferenced, leading to a segfault in mysqld.
(Bug#42545)
Issuing SHOW GLOBAL STATUS LIKE 'NDB%'
before
mysqld had connected to the cluster caused a
segmentation fault.
(Bug#42458)
Data node failures that occurred before all data nodes had connected to the cluster were not handled correctly, leading to additional data node failures. (Bug#42422)
When a cluster backup failed with Error 1304 (Node
node_id1
: Backup request from
node_id2
failed to start), no clear
reason for the failure was provided.
As part of this fix, MySQL Cluster now retries backups in the event of sequence errors. (Bug#42354)
See also Bug#22698.
Issuing SHOW ENGINE
NDBCLUSTER STATUS
on an SQL node before the management
server had connected to the cluster caused
mysqld to crash.
(Bug#42264)
Changes in MySQL Cluster NDB 6.3.21 (5.1.31-ndb-6.3.21)
Functionality added or changed:
Important Change:
Formerly, when the management server failed to create a
transporter for a data node connection,
net_write_timeout
seconds
elapsed before the data node was actually allowed to disconnect.
Now in such cases the disconnection occurs immediately.
(Bug#41965)
See also Bug#41713.
It is now possible while in Single User Mode to restart all data
nodes using ALL RESTART
in the management
client. Restarting of individual nodes while in Single User Mode
remains disallowed.
(Bug#31056)
Formerly, when using MySQL Cluster Replication, records for
“empty” epochs — that is, epochs in which no
changes to NDBCLUSTER
data or
tables took place — were inserted into the
ndb_apply_status
and
ndb_binlog_index
tables on the slave even
when --log-slave-updates
was
disabled. Beginning with MySQL Cluster NDB 6.2.16 and MySQL
Cluster NDB 6.3.13 this was changed so that these
“empty” epochs were no longer logged. However, it
is now possible to re-enable the older behavior (and cause
“empty” epochs to be logged) by using the
--ndb-log-empty-epochs
option. For more
information, see Section 16.1.3.3, “Replication Slave Options and Variables”.
See also Bug#37472.
Bugs fixed:
A maximum of 11 TUP
scans were allowed in
parallel.
(Bug#42084)
Trying to execute an
ALTER ONLINE TABLE
... ADD COLUMN
statement while inserting rows into the
table caused mysqld to crash.
(Bug#41905)
If the master node failed during a global checkpoint, it was possible in some circumstances for the new master to use an incorrect value for the global checkpoint index. This could occur only when the cluster used more than one node group. (Bug#41469)
API nodes disconnected too aggressively from the cluster when data nodes were being restarted. This could sometimes lead to an API node being unable to access the cluster at all during a rolling restart. (Bug#41462)
It was not possible to perform online upgrades from a MySQL Cluster NDB 6.2 release to MySQL Cluster NDB 6.3.8 or a later MySQL Cluster NDB 6.3 release. (Bug#41435)
Cluster log files were opened twice by internal log-handling code, resulting in a resource leak. (Bug#41362)
An abort path in the DBLQH
kernel block
failed to release a commit acknowledgement marker. This meant
that, during node failure handling, the local query handler
could be added multiple times to the marker record which could
lead to additional node failures due to an array overflow.
(Bug#41296)
During node failure handling (of a data node other than the
master), there was a chance that the master was waiting for a
GCP_NODEFINISHED
signal from the failed node
after having received it from all other data nodes. If this
occurred while the failed node had a transaction that was still
being committed in the current epoch, the master node could
crash in the DBTC
kernel block when
discovering that a transaction actually belonged to an epoch
which was already completed.
(Bug#41295)
Issuing EXIT
in the management client
sometimes caused the client to hang.
(Bug#40922)
In the event that a MySQL Cluster backup failed due to file permissions issues, conflicting reports were issued in the management client. (Bug#34526)
If all data nodes were shut down, MySQL clients were unable to
access NDBCLUSTER
tables and data
even after the data nodes were restarted, unless the MySQL
clients themselves were restarted.
(Bug#33626)
Disk Data: Starting a cluster under load such that Disk Data tables used most of the undo buffer could cause data node failures.
The fix for this bug also corrected an issue in the
LGMAN
kernel block where the amount of free
space left in the undo buffer was miscalculated, causing buffer
overruns. This could cause records in the buffer to be
overwritten, leading to problems when restarting data nodes.
(Bug#28077)
Cluster API:
mgmapi.h
contained constructs which only
worked in C++, but not in C.
(Bug#27004)
Changes in MySQL Cluster NDB 6.3.20 (5.1.30-ndb-6.3.20)
Bugs fixed:
If a transaction was aborted during the handling of a data node failure, this could lead to the later handling of an API node failure not being completed. (Bug#41214)
Issuing SHOW TABLES
repeatedly could cause
NDBCLUSTER
tables to be dropped.
(Bug#40854)
Statements of the form UPDATE ... ORDER BY ...
LIMIT
run against
NDBCLUSTER
tables failed to update
all matching rows, or failed with the error Can't
find record in
'table_name
'.
(Bug#40081)
Start phase reporting was inconsistent between the management client and the cluster log. (Bug#39667)
Status messages shown in the management client when restarting a
management node were inappropriate and misleading. Now, when
restarting a management node, the messages displayed are as
follows, where node_id
is the
management node's node ID:
ndb_mgm> node_id RESTART
Shutting down MGM node node_id for restart
Node node_id is being restarted

ndb_mgm>
Disk Data: This improves on a previous fix for this issue that was made in MySQL Cluster 6.3.8. (Bug#37116)
See also Bug#29186.
Cluster API:
When creating a scan using an NdbScanFilter
object, it was possible to specify conditions against a
BIT
column, but the correct rows were not
returned when the scan was executed.
As part of this fix, 4 new comparison operators have been
implemented for use with scans on BIT
columns:
COL_AND_MASK_EQ_MASK
COL_AND_MASK_NE_MASK
COL_AND_MASK_EQ_ZERO
COL_AND_MASK_NE_ZERO
For more information about these operators, see
The NdbScanFilter::BinaryCondition
Type.
Equivalent methods are now also defined for
NdbInterpretedCode
; for more information, see
NdbInterpretedCode
Bitwise Comparison Operations.
(Bug#40535)
Changes in MySQL Cluster NDB 6.3.19 (5.1.29-ndb-6.3.19)
Functionality added or changed:
Cluster API: Important Change: MGM API applications exited without raising any errors if the connection to the management server was lost. The fix for this issue includes two changes:
The MGM API now provides its own
SIGPIPE
handler to catch the
“broken pipe” error that occurs when writing
to a closed or reset socket. This means that MGM API now
behaves the same as NDB API in this regard.
A new function
ndb_mgm_set_ignore_sigpipe()
has been
added to the MGM API. This function makes it possible to
bypass the SIGPIPE
handler provided by
the MGM API.
When performing an initial start of a data node, fragment log
files were always created sparsely — that is, not all
bytes were written. Now it is possible to override this behavior
using the new InitFragmentLogFiles
configuration parameter.
(Bug#40847)
Bugs fixed:
Cluster API:
Failed operations on BLOB
and
TEXT
columns were not always
reported correctly to the originating SQL node. Such errors were
sometimes reported as being due to timeouts, when the actual
problem was a transporter overload due to insufficient buffer
space.
(Bug#39867, Bug#39879)
Undo logs and data files were created in 32K increments. Now these files are created in 512K increments, resulting in shorter creation times. (Bug#40815)
Redo log creation was very slow on some platforms, causing MySQL Cluster to start more slowly than necessary with some combinations of hardware and operating system. This was due to all write operations being synchronized to disk while creating a redo log file. Now this synchronization occurs only after the redo log has been created. (Bug#40734)
Transaction failures took longer to handle than was necessary.
When a data node acting as transaction coordinator (TC) failed, the surviving data nodes did not inform the API node initiating the transaction of this until the failure had been processed by all protocols. However, the API node needed only to know about failure handling by the transaction protocol — that is, it needed to be informed only about the TC takeover process. Now, API nodes (including MySQL servers acting as cluster SQL nodes) are informed as soon as the TC takeover is complete, so that they can carry on operating more quickly. (Bug#40697)
It was theoretically possible for stale data to be read from
NDBCLUSTER
tables when the
transaction isolation level was set to
ReadCommitted
.
(Bug#40543)
The LockExecuteThreadToCPU
and
LockMaintThreadsToCPU
parameters did not work
on Solaris.
(Bug#40521)
SET SESSION ndb_optimized_node_selection = 1
failed with an invalid warning message.
(Bug#40457)
A restarting data node could fail with an error in the
DBDIH
kernel block when a local or global
checkpoint was started or triggered just as the node made a
request for data from another data node.
(Bug#40370)
Restoring a MySQL Cluster from a dump made using mysqldump failed due to a spurious error: Can't execute the given command because you have active locked tables or an active transaction. (Bug#40346)
O_DIRECT
was incorrectly disabled when making
MySQL Cluster backups.
(Bug#40205)
Heavy DDL usage caused the mysqld processes
to hang due to a timeout error (NDB
error code 266).
(Bug#39885)
Executing EXPLAIN
SELECT
on an NDBCLUSTER
table could cause mysqld to crash.
(Bug#39872)
Events logged after setting ALL CLUSTERLOG
STATISTICS=15
in the management client did not always
include the node ID of the reporting node.
(Bug#39839)
The MySQL Query Cache did not function correctly with
NDBCLUSTER
tables containing
TEXT
columns.
(Bug#39295)
A segfault in Logger::Log
caused
ndbd to hang indefinitely. This fix improves
on an earlier one for this issue, first made in MySQL Cluster
NDB 6.2.16 and MySQL Cluster NDB 6.3.17.
(Bug#39180)
See also Bug#38609.
Memory leaks could occur in handling of strings used for storing cluster metadata and providing output to users. (Bug#38662)
A duplicate key or other error raised when inserting into an
NDBCLUSTER
table caused the current
transaction to abort, after which any SQL statement other than a
ROLLBACK
failed. With this fix, the
NDBCLUSTER
storage engine now
performs an implicit rollback when a transaction is aborted in
this way; it is no longer necessary to issue an explicit
ROLLBACK
statement, and the next statement that is issued automatically
begins a new transaction.
It remains necessary in such cases to retry the complete transaction, regardless of which statement caused it to be aborted.
See also Bug#47654.
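Because the complete transaction must be retried after such an abort, application code typically wraps the transaction in a retry loop. A hedged Python sketch of that pattern (the exception class and execute callback are placeholders, not MySQL connector APIs):

```python
class TransactionAborted(Exception):
    """Placeholder for a storage-engine-initiated transaction abort."""

def run_transaction(execute, max_retries=3):
    """Run the complete transaction, retrying from the start on abort.
    Retrying only the failing statement is not sufficient: the engine
    has already rolled back every statement in the transaction."""
    for attempt in range(1, max_retries + 1):
        try:
            return execute()
        except TransactionAborted:
            if attempt == max_retries:
                raise

attempts = []
def flaky():
    # Simulate a transaction that is aborted once, then succeeds.
    attempts.append(1)
    if len(attempts) < 2:
        raise TransactionAborted("duplicate key")
    return "committed"

print(run_transaction(flaky))  # prints "committed" on the second attempt
```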
Error messages for NDBCLUSTER
error
codes 1224 and 1227 were missing.
(Bug#28496)
Disk Data:
Issuing concurrent CREATE TABLESPACE
,
ALTER TABLESPACE
, CREATE LOGFILE
GROUP
, or ALTER LOGFILE GROUP
statements on separate SQL nodes caused a resource leak that led
to data node crashes when these statements were used again
later.
(Bug#40921)
Disk Data: Disk-based variable-length columns were not always handled like their memory-based equivalents, which could potentially lead to a crash of cluster data nodes. (Bug#39645)
Disk Data:
O_SYNC
was incorrectly disabled on platforms
that do not support O_DIRECT
. This issue was
noted on Solaris but could have affected other platforms not
having O_DIRECT
capability.
(Bug#34638)
Cluster API: The MGM API reset error codes on management server handles before checking them. This meant that calling an MGM API function with a null handle caused applications to crash. (Bug#40455)
Cluster API:
It was not always possible to access parent objects directly
from NdbBlob
,
NdbOperation
, and
NdbScanOperation
objects. To alleviate this
problem, a new getNdbOperation()
method has
been added to NdbBlob
and new
getNdbTransaction() methods have been added to
NdbOperation
and
NdbScanOperation
. In addition, a const
variant of NdbOperation::getErrorLine()
is
now also available.
(Bug#40242)
Cluster API:
NdbScanOperation::getBlobHandle()
failed when
used with incorrect column names or numbers.
(Bug#40241)
Cluster API:
The MGM API function ndb_mgm_listen_event()
ignored bind addresses.
As part of this fix, it is now possible to specify bind addresses in connectstrings. See Section 17.3.2.3, “The MySQL Cluster Connectstring”, for more information. (Bug#38473)
Cluster API: The NDB API example programs included in MySQL Cluster source distributions failed to compile. (Bug#37491)
See also Bug#40238.
Changes in MySQL Cluster NDB 6.3.18 (5.1.28-ndb-6.3.18)
Functionality added or changed:
It is no longer a requirement for database autodiscovery that an
SQL node already be connected to the cluster at the time that a
database is created on another SQL node. It is no longer
necessary to issue CREATE
DATABASE
(or
CREATE
SCHEMA
) statements on an SQL node joining the cluster
after a database is created in order for the new SQL node to see
the database and any NDBCLUSTER tables that it
tables that it
contains.
(Bug#39612)
Bugs fixed:
When a transaction included a multi-row insert to an
NDBCLUSTER
table that caused a
constraint violation, the transaction failed to roll back.
(Bug#395638)
Starting the MySQL Server with the
--ndbcluster
option plus an
invalid command-line option (for example, using
mysqld --ndbcluster
--foobar
) caused it to hang while shutting down the
binlog thread.
(Bug#39635)
Dropping and then re-creating a database on one SQL node caused other SQL nodes to hang. (Bug#39613)
Setting a low value of MaxNoOfLocalScans
(< 100) and performing a large number of (certain) scans
could cause the Transaction Coordinator to run out of scan
fragment records, and then crash. Now when this resource is
exhausted, the cluster returns Error 291 (Out of
scanfrag records in TC (increase MaxNoOfLocalScans))
instead.
(Bug#39549)
Creating a unique index on an
NDBCLUSTER
table caused a memory
leak in the NDB
subscription
manager (SUMA
) which could lead to mysqld
hanging, due to the fact that the resource shortage was not
reported back to the NDB
kernel
correctly.
(Bug#39518)
See also Bug#39450.
Embedded libmysqld with
NDB
did not drop table events.
(Bug#39450)
Unique identifiers in tables having no primary key were not
cached. This fix has been observed to increase the efficiency of
INSERT
operations on such tables
by as much as 50%.
(Bug#39267)
When restarting a data node, an excessively long shutdown message could cause the node process to crash. (Bug#38580)
After a forced shutdown and initial restart of the cluster, it
was possible for SQL nodes to retain .frm
files corresponding to NDBCLUSTER
tables that had been dropped, and thus to be unaware that these
tables no longer existed. In such cases, attempting to re-create
the tables using CREATE TABLE IF NOT EXISTS
could fail with a spurious Table ... doesn't
exist error.
(Bug#37921)
A statement of the form DELETE FROM table WHERE primary_key = value or UPDATE table WHERE primary_key = value, where there was no row whose primary key column had the stated value, appeared to succeed, with the server reporting that 1 row had been changed.
This issue was only known to affect MySQL Cluster NDB 6.3.11 and later NDB 6.3 versions. (Bug#37153)
Cluster API:
Passing a value greater than 65535 to
NdbInterpretedCode::add_val()
and
NdbInterpretedCode::sub_val()
caused these
methods to have no effect.
(Bug#39536)
Changes in MySQL Cluster NDB 6.3.17 (5.1.27-ndb-6.3.17)
Bugs fixed:
Packaging:
Support for the InnoDB
storage engine was
missing from the GPL source releases. An updated GPL source
tarball
mysql-5.1.27-ndb-6.3.17-innodb.tar.gz
which
includes code for building InnoDB
can be
found on
the
MySQL FTP site.
MgmtSrvr::allocNodeId()
left a mutex locked
following an Ambiguity for node if %d
error.
(Bug#39158)
An invalid path specification caused mysql-test-run.pl to fail. (Bug#39026)
During transaction coordinator (TC) takeover (directly after node
failure), the LQH finding an operation in the
LOG_COMMIT
state sent an
LQH_TRANS_CONF
signal twice, causing the TC
to fail.
(Bug#38930)
An invalid memory access caused the management server to crash on Solaris Sparc platforms. (Bug#38628)
A segfault in Logger::Log
caused
ndbd to hang indefinitely.
(Bug#38609)
ndb_mgmd failed to start on older Linux distributions (2.4 kernels) that did not support e-polling. (Bug#38592)
ndb_mgmd sometimes performed unnecessary network I/O with the client. This in combination with other factors led to long-running threads that were attempting to write to clients that no longer existed. (Bug#38563)
ndb_restore failed with a floating point exception due to a division by zero error when trying to restore certain data files. (Bug#38520)
A failed connection to the management server could cause a resource leak in ndb_mgmd. (Bug#38424)
Failure to parse configuration parameters could cause a memory leak in the NDB log parser. (Bug#38380)
Renaming an NDBCLUSTER
table on one
SQL node, caused a trigger on this table to be deleted on
another SQL node.
(Bug#36658)
Attempting to add a UNIQUE INDEX
twice to an
NDBCLUSTER
table, then deleting
rows from the table could cause the MySQL Server to crash.
(Bug#35599)
ndb_restore failed when a single table was specified. (Bug#33801)
GCP_COMMIT
did not wait for transaction
takeover during node failure. This could cause
GCP_SAVE_REQ
to be executed too early. This
could also cause (very rarely) replication to skip rows.
(Bug#30780)
Cluster API:
Support for Multi-Range Read index scans using the old API
(using, for example,
NdbIndexScanOperation::setBound()
or
NdbIndexScanOperation::end_of_bound()
) were
dropped in MySQL Cluster NDB 6.2. This functionality is restored
in MySQL Cluster NDB 6.3 beginning with 6.3.17, but remains
unavailable in MySQL Cluster NDB 6.2. Both MySQL Cluster NDB 6.2
and 6.3 support Multi-Range Read scans via the
NdbRecord
API.
(Bug#38791)
Cluster API:
The NdbScanOperation::readTuples()
method
could be called multiple times without error.
(Bug#38717)
Cluster API:
Certain Multi-Range Read scans involving IS
NULL
and IS NOT NULL
comparisons
failed with an error in the NDB
local query handler.
(Bug#38204)
Cluster API:
Problems with the public headers prevented
NDB
applications from being built
with warnings turned on.
(Bug#38177)
Cluster API:
Creating an NdbScanFilter
object using an
NdbScanOperation
object that had not yet had
its readTuples()
method called resulted in a
crash when later attempting to use the
NdbScanFilter
.
(Bug#37986)
Cluster API:
Executing an NdbRecord
interpreted delete
created with an ANYVALUE
option caused the
transaction to abort.
(Bug#37672)
Changes in MySQL Cluster NDB 6.3.16 (5.1.24-ndb-6.3.16)
Functionality added or changed:
Bugs fixed:
Cluster API: Changing the system time on data nodes could cause MGM API applications to hang and the data nodes to crash. (Bug#35607)
Failure of a data node could sometimes cause mysqld to crash. (Bug#37628)
DELETE ... WHERE unique_index_column = value deleted the wrong row from the table. (Bug#37516)
If subscription was terminated while a node was down, the epoch was not properly acknowledged by that node. (Bug#37442)
libmysqld
failed to wait for the cluster
binlog thread to terminate before exiting.
(Bug#37429)
In rare circumstances, a connection followed by a disconnection could give rise to a “stale” connection where the connection still existed but was not seen by the transporter. (Bug#37338)
Queries against NDBCLUSTER
tables
were cached only if autocommit
was in use.
(Bug#36692)
Cluster API:
When some operations succeeded and some failed following a call
to NdbTransaction::execute(Commit,
AO_IgnoreOnError)
, a race condition could cause
spurious occurrences of NDB API Error 4011 (Internal
error).
(Bug#37158)
Cluster API: Creating a table on an SQL node, then starting an NDB API application that listened for events from this table, then dropping the table from an SQL node, prevented data node restarts. (Bug#32949, Bug#37279)
Cluster API:
A buffer overrun in NdbBlob::setValue()
caused erroneous results on Mac OS X.
(Bug#31284)
Changes in MySQL Cluster NDB 6.3.15 (5.1.24-ndb-6.3.15)
Bugs fixed:
In certain rare situations, ndb_size.pl
could fail with the error Can't use string
("value
") as a HASH ref while "strict
refs" in use.
(Bug#43022)
Under some circumstances, a failed CREATE
TABLE
could mean that subsequent
CREATE TABLE
statements caused
node failures.
(Bug#37092)
A failed attempt to create an NDB
table could in some cases lead to resource leaks or cluster
failures.
(Bug#37072)
Attempting to create a native backup of
NDB
tables having a large number of
NULL
columns and data could lead to node
failures.
(Bug#37039)
Checking of API node connections was not efficiently handled. (Bug#36843)
Attempting to delete a nonexistent row from a table containing a
TEXT
or
BLOB
column within a transaction
caused the transaction to fail.
(Bug#36756)
See also Bug#36851.
If the combined total of tables and indexes in the cluster was
greater than 4096, issuing START BACKUP
caused data nodes to fail.
(Bug#36044)
Where column values to be compared in a query were of the
VARCHAR
or
VARBINARY
types,
NDBCLUSTER
passed a value padded to
the full size of the column, which caused unnecessary data to be
sent to the data nodes. This also had the effect of wasting CPU
and network bandwidth, and causing condition pushdown to be
disabled where it could (and should) otherwise have been
applied.
(Bug#35393)
When dropping a table failed for any reason (such as when in single user mode), the corresponding .ndb file was still removed.
Cluster API: Ordered index scans were not pruned correctly where a partitioning key was specified with an EQ-bound. (Bug#36950)
Cluster API:
When an insert operation involving
BLOB
data was attempted on a row
which already existed, no duplicate key error was reported, and the transaction was incorrectly aborted. In some
cases, the existing row could also become corrupted.
(Bug#36851)
See also Bug#26756.
Cluster API:
NdbApi.hpp
depended on
ndb_global.h
, which was not actually
installed, causing the compilation of programs that used
NdbApi.hpp
to fail.
(Bug#35853)
Changes in MySQL Cluster NDB 6.3.14 (5.1.24-ndb-6.3.14)
Bugs fixed:
SET GLOBAL ndb_extra_logging
caused
mysqld to crash.
(Bug#36547)
A race condition caused by a failure in epoll handling could cause data nodes to fail. (Bug#36537)
Under certain rare circumstances, the failure of the new master node while attempting a node takeover would cause takeover errors to repeat without being resolved. (Bug#36199, Bug#36246, Bug#36247, Bug#36276)
When more than one SQL node connected to the cluster at the same
time, creation of the mysql.ndb_schema
table
failed on one of them with an explicit Table
exists error, which was not necessary.
(Bug#35943)
mysqld failed to start after running mysql_upgrade. (Bug#35708)
Notification of a cascading master node failures could sometimes
not be transmitted correctly (that is, transmission of the
NF_COMPLETEREP
signal could fail), leading to
transactions hanging and timing out
(NDB
error 4012), scans hanging,
and failure of the management server process.
(Bug#32645)
If an API node disconnected and then reconnected during Start
Phase 8, then the connection could be “blocked”
— that is, the QMGR
kernel block failed
to detect that the API node was in fact connected to the
cluster, causing issues with the
NDB
Subscription Manager
(SUMA
).
NDB
error 1427 (Api node
died, when SUB_START_REQ reached node) was
incorrectly classified as a schema error rather than a temporary
error.
Cluster API:
Accessing the debug version of libndbclient
via dlopen()
resulted in a segmentation
fault.
(Bug#35927)
Cluster API:
Attempting to pass a nonexistent column name to the
equal()
and setValue()
methods of NdbOperation
caused NDB API
applications to crash. Now the column name is checked, and an
error is returned in the event that the column is not found.
(Bug#33747)
Cluster API:
Relocation errors were encountered when trying to compile NDB
API applications on a number of platforms, including 64-bit
Linux. As a result, libmysys
,
libmystrings
, and libdbug
have been changed from normal libraries to “noinst”
libtool helper libraries. They are no longer
installed as separate libraries; instead, all necessary symbols
from these are added directly to
libndbclient
. This means that NDB API
programs now need to be linked only using
-lndbclient
.
(Bug#29791)
Changes in MySQL Cluster NDB 6.3.13 (5.1.24-ndb-6.3.13)
Bugs fixed:
Important Change: mysqld_safe now traps Signal 13 (SIGPIPE) so that this signal no longer kills the MySQL server process. (Bug#33984)
Node or system restarts could fail due to an uninitialized variable in the DBTUP kernel block. This issue was found in MySQL Cluster NDB 6.3.11. (Bug#35797)
If an error occurred while executing a statement involving a BLOB or TEXT column of an NDB table, a memory leak could result. (Bug#35593)
It was not possible to determine the value used for the --ndb-cluster-connection-pool option in the mysql client. Now this value is reported as a system status variable. (Bug#35573)
The ndb_waiter utility wrongly calculated timeouts. (Bug#35435)
A SELECT on a table with a nonindexed, large VARCHAR column which resulted in condition pushdown on this column could cause mysqld to crash. (Bug#35413)
ndb_restore incorrectly handled some datatypes when applying log files from backups. (Bug#35343)
In some circumstances, a stopped data node was handled incorrectly, leading to redo log space being exhausted following an initial restart of the node, or an initial or partial restart of the cluster (the wrong GCI might be used in such cases). This could happen, for example, when a node was stopped following the creation of a new table, but before a new LCP could be executed. (Bug#35241)
SELECT ... LIKE ... queries yielded incorrect results when used on NDB tables. As part of this fix, condition pushdown of such queries has been disabled; re-enabling it is expected to be done as part of a later, permanent fix for this issue. (Bug#35185)
ndb_mgmd reported errors to STDOUT rather than to STDERR. (Bug#35169)
Nested Multi-Range Read scans failed when the second Multi-Range Read released the first read's unprocessed operations, sometimes leading to an SQL node crash. (Bug#35137)
In some situations, a problem with synchronizing checkpoints between nodes could cause a system restart or a node restart to fail with Error 630 during restore of TX. (Bug#34756)
A node failure during an initial node restart followed by another node start could cause the master data node to fail, because it incorrectly gave the node permission to start even if the invalidated node's LCP was still running. (Bug#34702)
When a secondary index on a DECIMAL column was used to retrieve data from an NDB table, no results were returned even if the target table had a matching value in the column that was defined with the secondary index. (Bug#34515)
An UPDATE on an NDB table that set a new value for a unique key column could cause subsequent queries to fail. (Bug#34208)
If a data node in one node group was placed in the “not started” state (using node_id RESTART -n), it was not possible to stop a data node in a different node group. (Bug#34201)
Numerous NDBCLUSTER test failures occurred in builds compiled using icc on IA64 platforms. (Bug#31239)
If a START BACKUP command was issued while ndb_restore was running, the backup being restored could be overwritten. (Bug#26498)
REPLACE statements did not work correctly with NDBCLUSTER tables when all columns were not explicitly listed. (Bug#22045)
CREATE TABLE and ALTER TABLE statements using ENGINE=NDB or ENGINE=NDBCLUSTER caused mysqld to fail on Solaris 10 for x86 platforms. (Bug#19911)
Cluster API: Closing a scan before it was executed caused the application to segfault. (Bug#36375)
Cluster API: Using NDB API applications from older MySQL Cluster versions with libndbclient from newer ones caused the cluster to fail. (Bug#36124)
Cluster API: Some ordered index scans could return tuples out of order. (Bug#35908)
Cluster API: Scans having no bounds set were handled incorrectly. (Bug#35876)
Cluster API: NdbScanFilter::getNdbOperation(), which was inadvertently removed in MySQL Cluster NDB 6.3.11, has been restored. (Bug#35854)
Changes in MySQL Cluster NDB 6.3.10 (5.1.23-ndb-6.3.10)
Bugs fixed:
Due to the reduction of the number of local checkpoints from 3 to 2 in MySQL Cluster NDB 6.3.8, a data node using ndbd from MySQL Cluster NDB 6.3.8 or later started using a file system from an earlier version could incorrectly invalidate local checkpoints too early during the startup process, causing the node to fail. (Bug#34596)
Changes in MySQL Cluster NDB 6.3.9 (5.1.23-ndb-6.3.9)
Bugs fixed:
Cluster failures could sometimes occur when performing more than three parallel takeovers during node restarts or system restarts. This affected MySQL Cluster NDB 6.3.x releases only. (Bug#34445)
Upgrades of a cluster with a DataMemory setting in excess of 16 GB caused data nodes to fail. (Bug#34378)
Performing many SQL statements on NDB tables while in autocommit mode caused a memory leak in mysqld. (Bug#34275)
In certain rare circumstances, a race condition could occur between an aborted insert and a delete, leading to a data node crash. (Bug#34260)
Multi-table updates using ordered indexes during handling of node failures could cause other data nodes to fail. (Bug#34216)
When configured with NDB support, MySQL failed to compile using gcc 4.3 on 64-bit FreeBSD systems. (Bug#34169)
The failure of a DDL statement could sometimes lead to node failures when attempting to execute subsequent DDL statements. (Bug#34160)
Extremely long SELECT statements (where the text of the statement was in excess of 50000 characters) against NDB tables returned empty results. (Bug#34107)
When configured with NDB support, MySQL failed to compile on 64-bit FreeBSD systems. (Bug#34046)
Statements executing multiple inserts performed poorly on NDB tables having AUTO_INCREMENT columns. (Bug#33534)
The ndb_waiter utility polled ndb_mgmd excessively when obtaining the status of cluster data nodes. (Bug#32025)
See also Bug#32023.
Transaction atomicity was sometimes not preserved between reads and inserts under high loads. (Bug#31477)
Having tables with a great many columns could cause Cluster backups to fail. (Bug#30172)
Cluster Replication: Disk Data: Statements violating unique keys on Disk Data tables (such as attempting to insert NULL into a NOT NULL column) could cause data nodes to fail. When the statement was executed from the binlog, this could also result in failure of the slave cluster. (Bug#34118)
Disk Data: Updating in-memory columns of one or more rows of a Disk Data table, followed by deletion and re-insertion of these rows, caused data node failures. (Bug#33619)
Changes in MySQL Cluster NDB 6.3.8 (5.1.23-ndb-6.3.8)
Functionality added or changed:
Cluster API: Important Change: Because NDB_LE_MemoryUsage.page_size_kb shows memory page sizes in bytes rather than kilobytes, it has been renamed to page_size_bytes. The name page_size_kb is now deprecated and thus subject to removal in a future release, although it currently remains supported for reasons of backward compatibility. See The Ndb_logevent_type Type, for more information about NDB_LE_MemoryUsage. (Bug#30271)
ndb_restore now supports basic attribute promotion; that is, data from a column of a given type can be restored to a column using a “larger” type. For example, Cluster backup data taken from a SMALLINT column can be restored to a MEDIUMINT, INT, or BIGINT column.
For more information, see Section 17.4.17, “ndb_restore — Restore a MySQL Cluster Backup”.
Now only 2 local checkpoints are stored, rather than 3 as in previous MySQL Cluster versions. This lowers disk space requirements and reduces the size and number of redo log files needed.
The mysqld option --ndb-batch-size has been added. This allows for controlling the size of batches used for running transactions.
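As a sketch, the option just described might be set in my.cnf as follows; the value shown is an arbitrary example, not a recommendation:

```
[mysqld]
# Batch size in bytes for transaction batching (example value only)
ndb-batch-size=16384
```

As with other mysqld options, it can also be given on the command line as --ndb-batch-size.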
Node recovery can now be done in parallel, rather than sequentially, which can result in much faster recovery times.
Persistence of NDB tables can now be controlled using the session variables ndb_table_temporary and ndb_table_no_logging. ndb_table_no_logging causes NDB tables not to be checkpointed to disk; ndb_table_temporary does the same, and in addition, no schema files are created.
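For illustration, a non-logged table might be created as follows; the table and column names here are invented for the example:

```sql
-- Assumes an SQL node supporting these session variables (NDB 6.3.8 or later).
SET SESSION ndb_table_no_logging = 1;
CREATE TABLE scratch (id INT PRIMARY KEY, v VARCHAR(32)) ENGINE=NDBCLUSTER;  -- not checkpointed to disk
SET SESSION ndb_table_no_logging = 0;  -- tables created afterward are logged normally
```

The variable affects only tables created while it is set; it does not change the persistence of existing tables.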
OPTIMIZE TABLE can now be interrupted. This can be done, for example, by killing the SQL thread performing the OPTIMIZE operation.
Bugs fixed:
Disk Data: Important Change: It is no longer possible on 32-bit systems to issue statements appearing to create Disk Data log files or data files greater than 4 GB in size. (Trying to create log files or data files larger than 4 GB on 32-bit systems led to unrecoverable data node failures; such statements now fail with NDB error 1515.) (Bug#29186)
Replication: The code implementing heartbeats did not check for possible errors in some circumstances; this left the dump thread hanging in its wait-for-heartbeats loop even though the slave was no longer connected. (Bug#33332)
High numbers of insert operations, delete operations, or both could cause NDB error 899 (Rowid already allocated) to occur unnecessarily. (Bug#34033)
A periodic failure to flush the send buffer by the NDB TCP transporter could cause an unnecessary delay of 10 ms between operations. (Bug#34005)
DROP TABLE did not free all data memory. This bug was observed in MySQL Cluster NDB 6.3.7 only. (Bug#33802)
A race condition could occur (very rarely) when the release of a GCI was followed by a data node failure. (Bug#33793)
Some tuple scans caused the wrong memory page to be accessed, leading to invalid results. This issue could affect both in-memory and Disk Data tables. (Bug#33739)
A failure to initialize an internal variable led to sporadic crashes during cluster testing. (Bug#33715)
The server failed to properly reject the creation of an NDB table having an unindexed AUTO_INCREMENT column. (Bug#30417)
Issuing an INSERT ... ON DUPLICATE KEY UPDATE concurrently with or following a TRUNCATE TABLE statement on an NDB table failed with NDB error 4350 Transaction already aborted. (Bug#29851)
The Cluster backup process could not detect when there was no more disk space and instead continued to run until killed manually. Now the backup fails with an appropriate error when disk space is exhausted. (Bug#28647)
It was possible in config.ini to define cluster nodes having node IDs greater than the maximum allowed value. (Bug#28298)
Under some circumstances, a recovering data node did not use its own data, instead copying data from another node even when this was not required. This in effect bypassed the optimized node recovery protocol and caused recovery times to be unnecessarily long. (Bug#26913)
Cluster API: Transactions containing inserts or reads would hang during NdbTransaction::execute() calls made from NDB API applications built against a MySQL Cluster version that did not support micro-GCPs accessing a later version that supported micro-GCPs. This issue was observed while upgrading from MySQL Cluster NDB 6.1.23 to MySQL Cluster NDB 6.2.10 when the API application built against the earlier version attempted to access a data node already running the later version, even after disabling micro-GCPs by setting TimeBetweenEpochs equal to 0. (Bug#33895)
Cluster API: When reading a BIT(64) value using NdbOperation::getValue(), 12 bytes were written to the buffer rather than the expected 8 bytes. (Bug#33750)
Changes in MySQL Cluster NDB 6.3.7 (5.1.23-ndb-6.3.7)
Functionality added or changed:
Compressed local checkpoints and backups are now supported, resulting in a space savings of 50% or more over uncompressed LCPs and backups. Compression of these can be enabled in the config.ini file using the two new data node configuration parameters CompressedLCP and CompressedBackup, respectively.
OPTIMIZE TABLE is now supported for NDBCLUSTER tables, subject to the following limitations:
Only in-memory tables are supported. OPTIMIZE still has no effect on Disk Data tables.
Only variable-length columns are supported. However, you can force columns defined using fixed-length data types to be dynamic using the ROW_FORMAT or COLUMN_FORMAT option with a CREATE TABLE or ALTER TABLE statement.
Memory reclaimed from an NDB table using OPTIMIZE is generally available to the cluster, and not confined to the table from which it was recovered, unlike the case with memory freed using DELETE.
The performance of OPTIMIZE on NDB tables can be regulated by adjusting the value of the ndb_optimization_delay system variable.
It is now possible to cause statements occurring within the same transaction to be run as a batch by setting the session variable transaction_allow_batching to 1 or ON. To use this feature, autocommit must be disabled.
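The batching behavior just described might be exercised as follows; the table and values are illustrative only:

```sql
SET autocommit = 0;                          -- batching requires autocommit to be off
SET SESSION transaction_allow_batching = 1;
BEGIN;
INSERT INTO t1 VALUES (1, 'a');
INSERT INTO t1 VALUES (2, 'b');              -- statements in the transaction run as one batch
COMMIT;
```

Batching reduces the number of round trips between the SQL node and the data nodes for transactions containing many small statements.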
Bugs fixed:
Partitioning: When partition pruning on an NDB table resulted in an ordered index scan spanning only one partition, any descending flag for the scan was wrongly discarded, causing ORDER BY DESC to be treated as ORDER BY ASC, MAX() to be handled incorrectly, and similar problems. (Bug#33061)
When all data and SQL nodes in the cluster were shut down abnormally (that is, other than by using STOP in the cluster management client), ndb_mgm used excessive amounts of CPU. (Bug#33237)
When using micro-GCPs, if a node failed while preparing for a global checkpoint, the master node would use the wrong GCI. (Bug#32922)
Under some conditions, performing an ALTER TABLE on an NDBCLUSTER table failed with a Table is full error, even when only 25% of DataMemory was in use and the result should have been a table using less memory (for example, changing a VARCHAR(100) column to VARCHAR(80)). (Bug#32670)
Changes in MySQL Cluster NDB 6.3.6 (5.1.22-ndb-6.3.6)
Functionality added or changed:
The output of the ndb_mgm client SHOW and STATUS commands now indicates when the cluster is in single user mode. (Bug#27999)
Unnecessary reads when performing a primary key or unique key update have been reduced, and in some cases, eliminated. (It is almost never necessary to read a record prior to an update, the lone exception to this being when a primary key is updated, since this requires a delete followed by an insert, which must be prepared by reading the record.) Depending on the number of primary key and unique key lookups that are performed per transaction, this can yield a considerable improvement in performance.
Batched operations are now better supported for DELETE and UPDATE. (UPDATE ... WHERE ... and multiple-row DELETE.)
Introduced the Ndb_execute_count status variable, which measures the number of round trips made by queries to the NDB kernel.
Bugs fixed:
An insert or update with combined range and equality constraints failed when run against an NDB table with the error Got unknown error from NDB. An example of such a statement would be UPDATE t1 SET b = 5 WHERE a IN (7,8) OR a >= 10;. (Bug#31874)
An error with an if statement in sql/ha_ndbcluster.cc could potentially lead to an infinite loop in case of failure when working with AUTO_INCREMENT columns in NDB tables. (Bug#31810)
The NDB storage engine code was not safe for strict-alias optimization in gcc 4.2.1. (Bug#31761)
ndb_restore displayed incorrect backup file version information. This meant (for example) that, when attempting to restore a backup made from a MySQL 5.1.22 cluster to a MySQL Cluster NDB 6.3.3 cluster, the restore process failed with the error Restore program older than backup version. Not supported. Use new restore program. (Bug#31723)
Following an upgrade, ndb_mgmd would fail with an ArbitrationError. (Bug#31690)
The NDB management client command node_id REPORT MEMORY provided no output when node_id was the node ID of a management or API node. Now, when this occurs, the management client responds with Node node_id: is not a data node. (Bug#29485)
Performing DELETE operations after a data node had been shut down could lead to inconsistent data following a restart of the node. (Bug#26450)
UPDATE IGNORE could sometimes fail on NDB tables due to the use of uninitialized data when checking for duplicate keys to be ignored. (Bug#25817)
Changes in MySQL Cluster NDB 6.3.5 (5.1.22-ndb-6.3.5)
Bugs fixed:
Changes in MySQL Cluster NDB 6.3.4 (5.1.22-ndb-6.3.4)
Functionality added or changed:
Incompatible Change: The --ndb_optimized_node_selection startup option for mysqld now allows a wider range of values and corresponding behaviors for SQL nodes when selecting a transaction coordinator.
You should be aware that the default value and behavior as well as the value type used for this option have changed, and that you may need to update the setting used for this option in your my.cnf file prior to upgrading mysqld. See Section 5.1.4, “Server System Variables”, for more information.
Bugs fixed:
It was possible in some cases for a node group to be “lost” due to missed local checkpoints following a system restart. (Bug#31525)
NDB tables having names containing nonalphanumeric characters (such as “$”) were not discovered correctly. (Bug#31470)
A node failure during a local checkpoint could lead to a subsequent failure of the cluster during a system restart. (Bug#31257)
A cluster restart could sometimes fail due to an issue with table IDs. (Bug#30975)
Transaction timeouts were not handled well in some circumstances, leading to an excessive number of transactions being aborted unnecessarily. (Bug#30379)
In some cases, the cluster management server logged entries multiple times following a restart of mgmd. (Bug#29565)
ndb_mgm --help did not display any information about the -a option. (Bug#29509)
An interpreted program of sufficient size and complexity could cause all cluster data nodes to shut down due to buffer overruns. (Bug#29390)
The cluster log was formatted inconsistently and contained extraneous newline characters. (Bug#25064)
Changes in MySQL Cluster NDB 6.3.3 (5.1.22-ndb-6.3.3)
Functionality added or changed:
Mapping of NDB error codes to MySQL storage engine error codes has been improved. (Bug#28423)
Bugs fixed:
Partitioning: EXPLAIN PARTITIONS reported partition usage by queries on NDB tables according to the standard MySQL hash function rather than the hash function used in the NDB storage engine. (Bug#29550)
Attempting to restore a backup made on a cluster host using one endian to a machine using the other endian could cause the cluster to fail. (Bug#29674)
The description of the --print option provided in the output from ndb_restore --help was incorrect. (Bug#27683)
Restoring a backup made on a cluster host using one endian to a machine using the other endian failed for BLOB and DATETIME columns. (Bug#27543, Bug#30024)
Changes in MySQL Cluster NDB 6.3.2 (5.1.22-ndb-6.3.2)
Functionality added or changed:
Online ADD COLUMN, ADD INDEX, and DROP INDEX operations can now be performed explicitly for NDB tables, as well as online renaming of tables and columns for NDB and MyISAM tables — that is, without copying or locking of the affected tables — using ALTER ONLINE TABLE.
Indexes can also be created and dropped online using CREATE INDEX and DROP INDEX, respectively, using the ONLINE keyword.
You can force operations that would otherwise be performed online to be done offline using the OFFLINE keyword.
See Section 12.1.7, “ALTER TABLE Syntax”, Section 12.1.13, “CREATE INDEX Syntax”, and Section 12.1.24, “DROP INDEX Syntax”, for more information.
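To illustrate the syntax described above, the following sketch uses invented table, column, and index names; online ADD COLUMN generally requires the new column to use dynamic storage:

```sql
ALTER ONLINE TABLE t1 ADD COLUMN c2 INT COLUMN_FORMAT DYNAMIC;  -- no copy, no lock
CREATE ONLINE INDEX i2 ON t1 (c2);
DROP ONLINE INDEX i2 ON t1;
ALTER OFFLINE TABLE t1 ADD COLUMN c3 INT;  -- force the copying (offline) behavior
```

Operations that cannot be done online fall back to the copying behavior unless ONLINE was given explicitly, in which case they fail with an error.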
It is now possible to control whether fixed-width or variable-width storage is used for a given column of an NDB table by means of the COLUMN_FORMAT specifier as part of the column's definition in a CREATE TABLE or ALTER TABLE statement.
It is also possible to control whether a given column of an NDB table is stored in memory or on disk, using the STORAGE specifier as part of the column's definition in a CREATE TABLE or ALTER TABLE statement.
For permitted values and other information about COLUMN_FORMAT and STORAGE, see Section 12.1.17, “CREATE TABLE Syntax”.
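A sketch combining the two specifiers just described; the table definition and tablespace name ts1 are assumptions for the example, and STORAGE DISK requires that a Disk Data tablespace already exist:

```sql
-- Assumes a Disk Data tablespace ts1 has already been created.
CREATE TABLE t1 (
  id INT PRIMARY KEY,             -- in memory (the default)
  a  INT COLUMN_FORMAT DYNAMIC,   -- in memory, variable-width storage
  b  VARCHAR(200) STORAGE DISK    -- stored on disk
) TABLESPACE ts1 ENGINE=NDBCLUSTER;
```

Columns without an explicit STORAGE specifier remain in memory; indexed columns must be stored in memory.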
A new cluster management server startup option --bind-address makes it possible to restrict management client connections to ndb_mgmd to a single host and port. For more information, see Section 17.4.4, “ndb_mgmd — The MySQL Cluster Management Server Daemon”.
Bugs fixed:
When an NDB event was left behind but the corresponding table was later recreated and received a new table ID, the event could not be dropped. (Bug#30877)
When creating an NDB table with a column that has COLUMN_FORMAT = DYNAMIC, but the table itself uses ROW_FORMAT=FIXED, the table is considered dynamic, but any columns for which the row format is unspecified default to FIXED. Now in such cases the server issues the warning Row format FIXED incompatible with dynamic attribute column_name. (Bug#30276)
An insufficiently descriptive and potentially misleading Error 4006 (Connect failure - out of connection objects...) was produced when either of the following two conditions occurred:
There were no more transaction records in the transaction coordinator;
An NDB object in the NDB API was initialized with insufficient parallelism.
Separate error messages are now generated for each of these two cases. (Bug#11313)
Changes in MySQL Cluster NDB 6.3.0 (5.1.19-ndb-6.3.0)
Functionality added or changed:
Reporting functionality has been significantly enhanced in this release:
A new configuration parameter BackupReportFrequency now makes it possible to cause the management client to provide status reports at regular intervals as well as for such reports to be written to the cluster log (depending on cluster event logging levels). See Section 17.3.2.6, “Defining MySQL Cluster Data Nodes”, for more information about this parameter.
A new REPORT command has been added in the cluster management client. REPORT BackupStatus allows you to obtain a backup status report at any time during a backup. REPORT MemoryUsage reports the current data memory and index memory used by each data node. For more about the REPORT command, see Section 17.5.2, “Commands in the MySQL Cluster Management Client”.
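In the management client, the REPORT command described above might be used as follows; the node ID shown is an example, and ALL addresses all data nodes:

```
ndb_mgm> ALL REPORT MemoryUsage
ndb_mgm> 2 REPORT BackupStatus
```

Each data node addressed responds with its own report line in the client.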
ndb_restore now provides running reports of its progress when restoring a backup. In addition, a complete status report on the backup is written to the cluster log.
A new configuration parameter ODirect causes NDB to attempt using O_DIRECT writes for LCPs, backups, and redo logs, often lowering CPU usage.