This section contains unified change history highlights for all
MySQL Cluster releases based on version 7.0 of the
NDBCLUSTER
storage engine through
MySQL Cluster NDB 7.0.15. Included are all
changelog entries in the categories MySQL
Cluster, Disk Data, and
Cluster API.
Early MySQL Cluster NDB 7.0 releases tagged “NDB 6.4.x” are also included in this listing.
For an overview of features that were added in MySQL Cluster NDB 7.0, see Section 17.1.4.5, “MySQL Cluster Development in MySQL Cluster NDB 7.0”.
Changes in MySQL Cluster NDB 7.0.10 (5.1.39-ndb-7.0.10)
Functionality added or changed:
Added the ndb_mgmd
--nowait-nodes
option, which
allows a cluster that is configured to use multiple management
servers to be started using fewer than the number configured.
This is most likely to be useful when a cluster is configured
with two management servers and you wish to start the cluster
using only one of them.
See Section 17.4.4, “ndb_mgmd — The MySQL Cluster Management Server Daemon”, for more information. (Bug#48669)
This enhanced functionality is supported for upgrades from MySQL
Cluster NDB 6.3 when the NDB
engine
version is 6.3.29 or later.
(Bug#48528, Bug#49163)
The output from ndb_config
--configinfo
--xml
now indicates, for each
configuration parameter, the following restart type information:
Whether a system restart or a node restart is required when resetting that parameter;
Whether cluster nodes need to be restarted using the
--initial
option when resetting the
parameter.
Bugs fixed:
Node takeover during a system restart occurs when the REDO log for one or more data nodes is out of date, so that a node restart is invoked for that node or those nodes. If this happens while a mysqld process is attached to the cluster as an SQL node, the mysqld takes a global schema lock (a row lock), while trying to set up cluster-internal replication.
However, this setup process could fail, causing the global schema lock to be held for an excessive length of time, which made the node restart hang as well. As a result, the mysqld failed to set up cluster-internal replication, which led to tables being read-only, and caused one node to hang during the restart.
This issue could actually occur in MySQL Cluster NDB 7.0 only, but the fix was also applied to MySQL Cluster NDB 6.3, in order to keep the two codebases in alignment.
Sending SIGHUP
to a mysqld
running with the --ndbcluster
and
--log-bin
options caused the
process to crash instead of refreshing its log files.
(Bug#49515)
If the master data node receiving a request for a node ID from a newly started API or data node died before the request had been handled, the management server waited (and kept a mutex) until all handling of this node failure was complete before responding to any other connections, instead of responding to other connections as soon as it was informed of the node failure (that is, it waited until it had received a NF_COMPLETEREP signal rather than a NODE_FAILREP signal). One visible effect of this misbehavior was that it caused management client commands such as SHOW and ALL STATUS to respond with unnecessary slowness in such circumstances. (Bug#49207)
Attempting to create more than 11435 tables failed with Error 306 (Out of fragment records in DIH). (Bug#49156)
When evaluating the options --include-databases, --include-tables, --exclude-databases, and --exclude-tables, the ndb_restore program overwrote the result of the database-level options with the result of the table-level options rather than merging these results together, sometimes leading to unexpected and unpredictable results.
As part of the fix for this problem, the semantics of these options have been clarified; because of this, the rules governing their evaluation have changed slightly. These changes can be summed up as follows:
All --include-*
and
--exclude-*
options are now evaluated from
right to left in the order in which they are passed to
ndb_restore.
All --include-*
and
--exclude-*
options are now cumulative.
In the event of a conflict, the first (rightmost) option takes precedence.
For more detailed information and examples, see Section 17.4.17, “ndb_restore — Restore a MySQL Cluster Backup”. (Bug#48907)
When performing tasks that generated large amounts of I/O (such as when using ndb_restore), an internal memory buffer could overflow, causing data nodes to fail with signal 6.
Subsequent analysis showed that this buffer was not actually required, so this fix removes it. (Bug#48861)
Exhaustion of send buffer memory or long signal memory caused data nodes to crash. Now an appropriate error message is provided instead when this situation occurs. (Bug#48852)
In some situations, when it was not possible for an SQL node to start a schema transaction (necessary, for instance, as part of an online ALTER TABLE), NDBCLUSTER did not correctly indicate the error to the MySQL server, which led mysqld to crash. (Bug#48841)
Under certain conditions, accounting of the number of free scan records in the local query handler could be incorrect, so that during node recovery or local checkpoint operations, the LQH could find itself lacking a scan record that it expected to find, causing the node to crash. (Bug#48697)
See also Bug#48564.
The creation of an ordered index on a table undergoing DDL operations could cause a data node crash under certain timing-dependent conditions. (Bug#48604)
During an LCP master takeover, when the newly elected master did
not receive a COPY_GCI
LCP protocol message
but other nodes participating in the local checkpoint had
received one, the new master could use an uninitialized
variable, which caused it to crash.
(Bug#48584)
When running many parallel scans, a local checkpoint (which performs a scan internally) could find itself not getting a scan record, which led to a data node crash. Now an extra scan record is reserved for this purpose, and a problem with obtaining the scan record returns an appropriate error (error code 489, Too many active scans). (Bug#48564)
During a node restart, logging was enabled on a per-fragment basis as the copying of each fragment was completed, but local checkpoints were not enabled until all fragments were copied, making it possible to run out of redo log file space (NDB error code 410) before the restart was complete. Now logging is enabled only after all fragments have been copied, just prior to enabling local checkpoints. (Bug#48474)
When using very large transactions containing many inserts, ndbmtd could fail with Signal 11 without an easily detectable reason, due to an internal variable being uninitialized in the event that the LongMessageBuffer was overloaded. Now, the variable is initialized in such cases, avoiding the crash, and an appropriate error message is generated. (Bug#48441)
See also Bug#46914.
A data node crashing while restarting, followed by a system restart could lead to incorrect handling of redo log metadata, causing the system restart to fail with Error while reading REDO log. (Bug#48436)
Starting a mysqld process with --ndb-nodeid (either as a command-line option or by assigning it a value in my.cnf) caused the mysqld to get only the corresponding connection from the [mysqld] section in the config.ini file having the matching ID, even when connection pooling was enabled (that is, when the mysqld process was started with --ndb-cluster-connection-pool set greater than 1). (Bug#48405)
The configuration check that each management server runs to verify that all connected ndb_mgmd processes have the same configuration could fail when a configuration change took place while this check was in progress. Now in such cases, the configuration check is rescheduled for a later time, after the change is complete. (Bug#48143)
When employing NDB
native backup to
back up and restore an empty NDB
table that used a non-sequential
AUTO_INCREMENT
value, the
AUTO_INCREMENT
value was not restored
correctly.
(Bug#48005)
ndb_config --xml --configinfo now indicates that parameters belonging to the [SCI], [SCI DEFAULT], [SHM], and [SHM DEFAULT] sections of the config.ini file are deprecated or experimental, as appropriate. (Bug#47365)
NDB stores blob column data in a separate, hidden table that is not accessible from MySQL. If this table was missing for some reason (such as accidental deletion of the file corresponding to the hidden table) when making a MySQL Cluster native backup, ndb_restore crashed when attempting to restore the backup. Now in such cases, ndb_restore fails with the error message Table table_name has blob column (column_name) with missing parts table in backup instead. (Bug#47289)
In MySQL Cluster NDB 7.0, ndb_config and ndb_error_reporter were printing warnings about management and data nodes running on the same host to stdout instead of stderr, as was the case in earlier MySQL Cluster release series. (Bug#44689, Bug#49160)
See also Bug#25941.
DROP DATABASE failed when there were stale temporary NDB tables in the database. This situation could occur if mysqld crashed during execution of a DROP TABLE statement after the table definition had been removed from NDBCLUSTER but before the corresponding .ndb file had been removed from the crashed SQL node's data directory. Now, when mysqld executes DROP DATABASE, it checks for these files and removes them if there are no corresponding table definitions for them found in NDBCLUSTER. (Bug#44529)
Creating an NDB
table with an
excessive number of large BIT
columns caused the cluster to fail. Now, an attempt to create
such a table is rejected with error 791 (Too many
total bits in bitfields).
(Bug#42046)
See also Bug#42047.
When a long-running transaction lasting long enough to cause
Error 410 (REDO log files overloaded) was
later committed or rolled back, it could happen that
NDBCLUSTER
was not able to release
the space used for the REDO log, so that the error condition
persisted indefinitely.
The most likely cause of such transactions is a bug in the application using MySQL Cluster. This fix should handle most cases where this might occur. (Bug#36500)
Deprecation and usage information obtained from
ndb_config --configinfo
regarding the PortNumber
and
ServerPort
configuration parameters was
improved.
(Bug#24584)
Disk Data: When running a write-intensive workload with a very large disk page buffer cache, CPU usage approached 100% during a local checkpoint of a cluster containing Disk Data tables. (Bug#49532)
Disk Data: NDBCLUSTER failed to provide a valid error message when it failed to commit schema transactions during an initial start if the cluster was configured using the InitialLogFileGroup parameter. (Bug#48517)
Disk Data: In certain limited cases, it was possible when the cluster contained Disk Data tables for ndbmtd to crash during a system restart. (Bug#48498)
See also Bug#47832.
Disk Data: Repeatedly creating and then dropping Disk Data tables could eventually lead to data node failures. (Bug#45794, Bug#48910)
Disk Data: When a crash occurs due to a problem in Disk Data code, the currently active page list is printed to stdout (that is, in one or more ndb_nodeid_out.log files). One of these lists could contain an endless loop; this caused a printout that was effectively never-ending. Now in such cases, a maximum of 512 entries is printed from each list. (Bug#42431)
Disk Data: When the FileSystemPathUndoFiles configuration parameter was set to a nonexistent path, the data nodes shut down with the generic error code 2341 (Internal program error). Now in such cases, the error reported is error 2815 (File not found).
Cluster API: When a DML operation failed due to a uniqueness violation on an NDB table having more than one unique index, it was difficult to determine which constraint caused the failure; it was necessary to obtain an NdbError object, then decode its details property, which could lead to memory management issues in application code.
To help solve this problem, a new API method Ndb::getNdbErrorDetail() is added, providing a well-formatted string containing more precise information about the index that caused the unique constraint violation. The following additional changes are also made in the NDB API:
Use of NdbError.details is now deprecated in favor of the new method.
The NdbDictionary::listObjects() method has been modified to provide more information.
For more information, see Ndb::getNdbErrorDetail(), The NdbError Structure, and Dictionary::listObjects(). (Bug#48851)
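The following minimal sketch (not part of the original changelog) shows one way an application might use the new method; the error code check, the buffer size, and the exact getNdbErrorDetail() signature shown here are assumptions that should be verified against the NDB API documentation for your release:

#include <NdbApi.hpp>
#include <cstdio>

// Report which unique index caused a constraint violation (NDB error 893).
// Assumes a connected Ndb object and a transaction whose execute() call failed.
void reportUniqueViolation(Ndb &ndb, NdbTransaction *trans)
{
  const NdbError &err = trans->getNdbError();
  if (err.code == 893)   // constraint violation, e.g. duplicate value in unique index
  {
    char detail[256];
    if (ndb.getNdbErrorDetail(err, detail, sizeof(detail)) != NULL)
      fprintf(stderr, "Unique constraint violated: %s\n", detail);
    else
      fprintf(stderr, "Unique constraint violated (no detail available)\n");
  }
}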
Cluster API: When using blobs, calling getBlobHandle() requires the full key to have been set using equal(), because getBlobHandle() must access the key for adding blob table operations. However, if getBlobHandle() was called without first setting all parts of the primary key, the application using it crashed. Now, an appropriate error code is returned instead. (Bug#28116, Bug#48973)
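As an illustration (a sketch only, using a hypothetical table t with a two-part primary key (a, b) and a blob column data), all key parts are supplied with equal() before getBlobHandle() is called; with this fix, forgetting a key part results in an error rather than a crash:

#include <NdbApi.hpp>
#include <cstdio>

// Open a blob handle for a read of one row; both primary key parts are set
// with equal() before getBlobHandle() is called.
NdbBlob *openBlobHandle(Ndb &ndb)
{
  NdbTransaction *trans = ndb.startTransaction();
  if (trans == NULL)
    return NULL;
  NdbOperation *op = trans->getNdbOperation("t");
  if (op == NULL)
    return NULL;
  op->readTuple(NdbOperation::LM_Read);
  op->equal("a", (Uint32)10);
  op->equal("b", (Uint32)20);   // omitting a key part formerly crashed in getBlobHandle()
  NdbBlob *blob = op->getBlobHandle("data");
  if (blob == NULL)
    fprintf(stderr, "getBlobHandle() failed: %s\n", op->getNdbError().message);
  return blob;
}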
Changes in MySQL Cluster NDB 7.0.9a (5.1.39-ndb-7.0.9a)
Bugs fixed:
When the combined length of all names of tables using the
NDB
storage engine was greater than
or equal to 1024 bytes, issuing the START
BACKUP
command in the ndb_mgm
client caused the cluster to crash.
(Bug#48531)
Changes in MySQL Cluster NDB 7.0.8a (5.1.37-ndb-7.0.8a)
Bugs fixed:
Changes in MySQL Cluster NDB 7.0.7 (5.1.35-ndb-7.0.7)
Functionality added or changed:
Important Change:
The default value of the DiskIOThreadPool
data node configuration parameter has changed from 8 to 2.
On Solaris platforms, the MySQL Cluster management server and
NDB API applications now use CLOCK_REALTIME
as the default clock.
(Bug#46183)
Formerly, node IDs were represented in the cluster log using a complex hexadecimal/binary encoding scheme. Now, node IDs are reported in the cluster log using numbers in standard decimal notation. (Bug#44248)
A new option --exclude-missing-columns
has been
added for the ndb_restore program. In the
event that any tables in the database or databases being
restored to have fewer columns than the same-named tables in the
backup, the extra columns in the backup's version of the
tables are ignored. For more information, see
Section 17.4.17, “ndb_restore — Restore a MySQL Cluster Backup”.
(Bug#43139)
This issue, originally resolved in MySQL 5.1.16, re-occurred due to a later (unrelated) change. The fix has been re-applied.
Previously, it was possible to disable arbitration only by
setting ArbitrationRank
to 0 on all
management and API nodes. A new data node configuration
parameter Arbitration
simplifies this task;
to disable arbitration, you can now use Arbitration =
Disabled
in the [ndbd default]
section of the config.ini
file.
It is now also possible to configure arbitration in such a way
that the cluster waits until the time determined by
ArbitrationTimeout
passes for an external
manager to perform arbitration instead of handling it
internally. This can be done by setting Arbitration =
WaitExternal
in the [ndbd default]
section of the config.ini
file.
The default value for the Arbitration parameter is
Default
, which allows arbitration to proceed
normally, as determined by the
ArbitrationRank
settings for the management
and API nodes.
For more information, see Section 17.3.2.6, “Defining MySQL Cluster Data Nodes”.
Bugs fixed:
Packaging:
The pkg
installer for MySQL Cluster on
Solaris did not perform a complete installation due to an
invalid directory reference in the post-install script.
(Bug#41998)
The output from ndb_config --configinfo --xml contained quote characters (") within quoted XML attributes, so that the output was not well-formed XML. (Bug#46891)
When using multi-threaded data node processes
(ndbmtd), it was possible for LQH threads to
continue running even after all NDB
tables had been dropped. This meant that dropping the last
remaining NDB
table during a local
checkpoint could cause multi-threaded data nodes to fail.
(Bug#46890)
During a global checkpoint, LQH threads could run unevenly, causing a circular buffer overflow by the Subscription Manager, which led to data node failure. (Bug#46782)
Restarting the cluster following a local checkpoint and an
online ALTER TABLE
on a non-empty
table caused data nodes to crash.
(Bug#46651)
A combination of index creation and drop operations (or creating and dropping tables having indexes) with node and system restarts could lead to a crash. (Bug#46552)
Following an upgrade from MySQL Cluster NDB 6.3.x to MySQL Cluster NDB 7.0.6, DDL and backup operations failed. (Bug#46494, Bug#46563)
Full table scans failed to execute when the cluster contained more than 21 table fragments.
The number of table fragments in the cluster can be calculated
as the number of data nodes, times 8 (that is, times the value
of the internal constant
MAX_FRAG_PER_NODE
), divided by the number
of replicas. Thus, when NoOfReplicas = 1
at
least 3 data nodes were required to trigger this issue, and
when NoOfReplicas = 2
at least 4 data nodes
were required to do so.
Killing MySQL Cluster nodes immediately following a local checkpoint could lead to a crash of the cluster when later attempting to perform a system restart.
The exact sequence of events causing this issue was as follows:
Local checkpoint occurs.
Immediately following the LCP, kill the master data node.
Kill the remaining data nodes within a few seconds of killing the master.
Attempt to restart the cluster.
Creating an index when the cluster had run out of table records could cause data nodes to crash. (Bug#46295)
Ending a line in the config.ini file with an extra semicolon character (;) caused reading of the file to fail with a parsing error. (Bug#46242)
When combining an index scan and a delete with a primary key delete, the index scan and delete failed to initialize a flag properly. This could in rare circumstances cause a data node to crash. (Bug#46069)
OPTIMIZE TABLE
on an
NDB
table could in some cases cause
SQL and data nodes to crash. This issue was observed with both
ndbd and ndbmtd.
(Bug#45971)
The AutoReconnect configuration parameter for API nodes (including SQL nodes) has been added. This is intended to prevent API nodes from re-using allocated node IDs during cluster restarts. For more information, see Section 17.3.2.7, “Defining SQL and Other API Nodes in a MySQL Cluster”.
This fix also introduces two new methods of the Ndb_cluster_connection class in the NDB API. For more information, see Ndb_cluster_connection::set_auto_reconnect() and Ndb_cluster_connection::get_auto_reconnect(). (Bug#45921)
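A rough sketch of how an application might use the new methods follows (not from the original changelog; the connectstring is hypothetical, and the exact meaning of the 0/1 argument should be checked against the AutoReconnect parameter description for your release):

#include <NdbApi.hpp>
#include <cstdio>

int main()
{
  ndb_init();
  // Hypothetical connectstring; adjust for your management server.
  Ndb_cluster_connection conn("mgmhost:1186");
  conn.set_auto_reconnect(0);   // 0 or 1; controls node ID re-use on reconnection
  printf("auto-reconnect setting: %d\n", conn.get_auto_reconnect());
  if (conn.connect() != 0)
  {
    fprintf(stderr, "could not connect to management server\n");
    return 1;
  }
  // ... create Ndb objects and perform operations here ...
  ndb_end(0);
  return 0;
}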
DML statements run during an upgrade from MySQL Cluster NDB 6.3 to NDB 7.0 were not handled correctly. (Bug#45917)
On Windows, the internal
basestring_vsprintf()
function did not return
a POSIX-compliant value as expected, causing the management
server to crash when trying to start a MySQL Cluster with more
than 4 data nodes.
(Bug#45733)
The signals used by ndb_restore to send progress information about backups to the cluster log accessed the cluster transporter without using any locks. Because of this, it was theoretically possible that these signals could be interfered with by heartbeat signals if both were sent at the same time, causing the ndb_restore messages to be corrupted. (Bug#45646)
Due to changes in the way that
NDBCLUSTER
handles schema changes
(implementation of schema transactions) in MySQL Cluster NDB
7.0, it was not possible to create MySQL Cluster tables having
more than 16 indexes using a single CREATE
TABLE
statement.
This issue occurs only in MySQL Cluster NDB 7.0 releases prior to 7.0.7 (including releases numbered NDB 6.4.x).
If you are not yet able to upgrade from an earlier MySQL Cluster
NDB 7.0 release, you can work around this problem by creating
the table without any indexes, then adding the indexes using a
separate CREATE INDEX
statement
for each index.
(Bug#45525)
storage/ndb/src/common/util/CMakeLists.txt
did not build the BaseString-t test program
for Windows as the equivalent
storage/ndb/src/common/util/Makefile.am
does when building MySQL Cluster on Unix platforms.
(Bug#45099)
Problems could arise when using
VARCHAR
columns
whose size was greater than 341 characters and which used the
utf8_unicode_ci
collation. In some cases,
this combination of conditions could cause certain queries and
OPTIMIZE TABLE
statements to
crash mysqld.
(Bug#45053)
The warning message Possible bug in Dbdih::execBLOCK_COMMIT_ORD ... could sometimes appear in the cluster log. This warning is obsolete, and has been removed. (Bug#44563)
Debugging code causing ndbd to use file compression on NTFS filesystems failed with an error. (The code was removed.) This issue affected debug builds of MySQL Cluster on Windows platforms only. (Bug#44418)
ALTER TABLE
REORGANIZE PARTITION
could fail with Error 741
(Unsupported alter table) if the
appropriate hash-map was not present. This could occur when
adding nodes online; for example, when going from 2 data nodes
to 3 data nodes with NoOfReplicas=1
, or from
4 data nodes to 6 data nodes with
NoOfReplicas=2
.
(Bug#44301)
Previously, a GCP STOP
event was written to
the cluster log as an INFO
event. Now it is
logged as a WARNING
event instead.
(Bug#43853)
In some cases, OPTIMIZE TABLE
on
an NDB
table did not free any
DataMemory
.
(Bug#43683)
If the cluster crashed during the execution of a
CREATE LOGFILE GROUP
statement,
the cluster could not be restarted afterwards.
(Bug#36702)
See also Bug#34102.
Disk Data: Partitioning:
An NDBCLUSTER
table created with a
very large value for the MAX_ROWS
option
could — if this table was dropped and a new table with
fewer partitions, but having the same table ID, was created
— cause ndbd to crash when performing a
system restart. This was because the server attempted to examine
each partition whether or not it actually existed.
(Bug#45154)
Disk Data: If the value set in the config.ini file for FileSystemPathDD, FileSystemPathDataFiles, or FileSystemPathUndoFiles was identical to the value set for FileSystemPath, that parameter was ignored when starting the data node with the --initial option. As a result, the Disk Data files in the corresponding directory were not removed when performing an initial start of the affected data node or data nodes. (Bug#46243)
Changes in MySQL Cluster NDB 7.0.6 (5.1.34-ndb-7.0.6)
Functionality added or changed:
The ndb_config utility program can now
provide an offline dump of all MySQL Cluster configuration
parameters including information such as default and permitted
values, brief description, and applicable section of the
config.ini
file. A dump in text format is
produced when running ndb_config with the new
--configinfo
option, and in XML format when the
options --configinfo --xml
are used together.
For more information and examples, see
Section 17.4.6, “ndb_config — Extract MySQL Cluster Configuration Information”.
Bugs fixed:
Important Change: Partitioning:
User-defined partitioning of an
NDBCLUSTER
table without any
primary key sometimes failed, and could cause
mysqld to crash.
Now, if you wish to create an
NDBCLUSTER
table with user-defined
partitioning, the table must have an explicit primary key, and
all columns listed in the partitioning expression must be part
of the primary key. The hidden primary key used by the
NDBCLUSTER
storage engine is not
sufficient for this purpose. However, if the list of columns is
empty (that is, the table is defined using PARTITION BY
[LINEAR] KEY()
), then no explicit primary key is
required.
This change does not affect the partitioning of tables using any storage engine other than NDBCLUSTER. (Bug#40709)
Important Change:
Previously, the configuration parameter
NoOfReplicas
had no default value. Now the
default for NoOfReplicas
is 2, which is the
recommended value in most settings.
(Bug#44746)
Important Note: It was not possible to perform an online upgrade from any MySQL Cluster NDB 6.x release to MySQL Cluster NDB 7.0.5 or any earlier MySQL Cluster NDB 7.0 release.
With this fix, it is possible in MySQL Cluster NDB 7.0.6 and later to perform online upgrades from MySQL Cluster NDB 6.3.8 and later MySQL Cluster NDB 6.3 releases, or from MySQL Cluster NDB 7.0.5 or later MySQL Cluster NDB 7.0 releases. Online upgrades to MySQL Cluster NDB 7.0 releases previous to MySQL Cluster NDB 7.0.6 from earlier MySQL Cluster releases remain unsupported; online upgrades from MySQL Cluster NDB 7.0 releases previous to MySQL Cluster NDB 7.0.5 (including NDB 6.4.x beta releases) to later MySQL Cluster NDB 7.0 releases also remain unsupported. (Bug#44294)
An internal NDB API buffer was not properly initialized. (Bug#44977)
When a data node had written its GCI marker to the first page of a megabyte, and that node was later killed during restart after having processed that page (marker) but before completing an LCP, the data node could fail with filesystem errors. (Bug#44952)
When restarting a data node, management and API nodes reconnecting to it failed to re-use existing ports that had already been dynamically allocated for communications with that data node. (Bug#44866)
When ndb_config could not find the file
referenced by the --config-file
option, it
tried to read my.cnf
instead, then failed
with a misleading error message.
(Bug#44846)
When a data node was down so long that its most recent local checkpoint depended on a global checkpoint that was no longer restorable, it was possible for it to be unable to use optimized node recovery when being restarted later. (Bug#44844)
See also Bug#26913.
Online upgrades to MySQL Cluster NDB 7.0 from a MySQL Cluster NDB 6.3 release could fail due to changes in the handling of key lengths and unique indexes during node recovery. (Bug#44827)
ndb_config
--xml
did not output any entries for the HostName
parameter. In addition, the default listed for
MaxNoOfFiles
was outside the allowed range of
values.
(Bug#44749)
The output of ndb_config
--xml
did not provide information about all sections of the
configuration file.
(Bug#44685)
Use of __builtin_expect()
had the side effect
that compiler warnings about misuse of =
(assignment) instead of ==
in comparisons
were lost when building in debug mode. This is no longer
employed when configuring the build with the
--with-debug
option.
(Bug#44570)
See also Bug#44567.
Inspection of the code revealed that several assignment
operators (=
) were used in place of
comparison operators (==
) in
DbdihMain.cpp
.
(Bug#44567)
See also Bug#44570.
When using large numbers of configuration parameters, the management server took an excessive amount of time (several minutes or more) to load these from the configuration cache when starting. This problem occurred when there were more than 32 configuration parameters specified, and became progressively worse with each additional multiple of 32 configuration parameters. (Bug#44488)
Building the MySQL Cluster NDB 7.0 tree failed when using the icc compiler. (Bug#44310)
SSL connections to SQL nodes failed on big-endian platforms. (Bug#44295)
Signals providing node state information
(NODE_STATE_REP
and
CHANGE_NODE_STATE_REQ
) were not propagated to
all blocks of ndbmtd. This could cause the
following problems:
Inconsistent redo logs when performing a graceful shutdown;
Data nodes crashing when later restarting the cluster, data nodes needing to perform node recovery during the system restart, or both.
See also Bug#42564.
An NDB internal timing function did not work correctly on Windows and could cause mysqld to fail on some AMD processors, or when running inside a virtual machine. (Bug#44276)
It was possible for NDB API applications to insert corrupt data into the database, which could subsequently lead to data node crashes. Now, stricter checking is enforced on input data for inserts and updates. (Bug#44132)
ndb_restore failed when trying to restore data on a big-endian machine from a backup file created on a little-endian machine. (Bug#44069)
Repeated starting and stopping of data nodes could cause ndb_mgmd to fail. This issue was observed on Solaris/SPARC. (Bug#43974)
A number of incorrectly formatted output strings in the source code caused compiler warnings. (Bug#43878)
When trying to use a data node with an older version of the management server, the data node crashed on startup. (Bug#43699)
In some cases, data node restarts during a system restart could fail due to insufficient redo log space. (Bug#43156)
NDBCLUSTER
did not build correctly
on Solaris 9 platforms.
(Bug#39080)
The output of ndbd --help
did not provide clear information about the program's
--initial
and --initial-start
options.
(Bug#28905)
It was theoretically possible for the value of a nonexistent
column to be read as NULL
, rather than
causing an error.
(Bug#27843)
Disk Data: During a checkpoint, restore points are created for both the on-disk and in-memory parts of a Disk Data table. Under certain rare conditions, the in-memory restore point could include or exclude a row that should have been in the snapshot. This would later lead to a crash during or following recovery.
This issue was somewhat more likely to be encountered when using ndbmtd. (Bug#41915)
See also Bug#47832.
Disk Data: This fix supersedes and improves on an earlier fix made for this bug in MySQL 5.1.18. (Bug#24521)
Changes in MySQL Cluster NDB 7.0.5 (5.1.32-ndb-7.0.5)
Functionality added or changed:
Two new server status variables
Ndb_scan_count
and
Ndb_pruned_scan_count
have
been introduced.
Ndb_scan_count
gives the
number of scans executed since the cluster was last started.
Ndb_pruned_scan_count
gives
the number of scans for which
NDBCLUSTER
was able to use
partition pruning. Together, these variables can be used to help
determine in the MySQL server whether table scans are pruned by
NDBCLUSTER
.
(Bug#44153)
Bugs fixed:
Important Note: Due to a problem discovered after the code freeze for this release, it is not possible to perform an online upgrade from any MySQL Cluster NDB 6.x release to MySQL Cluster NDB 7.0.5 or any earlier MySQL Cluster NDB 7.0 release.
This issue is fixed in MySQL Cluster NDB 7.0.6 and later for upgrades from MySQL Cluster NDB 6.3.8 and later MySQL Cluster NDB 6.3 releases, or from MySQL Cluster NDB 7.0.5. (Bug#44294)
Cluster Replication: If a data node failed during an event creation operation, there was a slight risk that a surviving data node could send an invalid table reference back to NDB, causing the operation to fail with a false Error 723 (No such table). This could take place when a data node failed as a mysqld process was setting up MySQL Cluster Replication. (Bug#43754)
Cluster API: The following issues occurred when performing an online (rolling) upgrade of a cluster to a version of MySQL Cluster that supports configuration caching from a version that does not:
When using multiple management servers, after upgrading and restarting one ndb_mgmd, any remaining management servers using the previous version of ndb_mgmd could not synchronize their configuration data.
The MGM API ndb_mgm_get_configuration()
function failed to obtain configuration data.
If the number of fragments per table rises above a certain
threshold, the DBDIH
kernel block's
on-disk table-definition grows large enough to occupy 2 pages.
However, in MySQL Cluster NDB 7.0 (including MySQL Cluster NDB
6.4 releases), only 1 page was actually written, causing table
definitions stored on disk to be incomplete.
This issue was not observed in MySQL Cluster release series prior to MySQL Cluster NDB 7.0. (Bug#44135)
TransactionDeadlockDetectionTimeout
values
less than 100 were treated as 100. This could cause scans to
time out unexpectedly.
(Bug#44099)
The file ndberror.c
contained a C++-style
comment, which caused builds to fail with some C compilers.
(Bug#44036)
A race condition could occur when a data node failed to restart just before being included in the next global checkpoint. This could cause other data nodes to fail. (Bug#43888)
The setting for
ndb_use_transactions
was
ignored. This issue was only known to occur in MySQL Cluster NDB
6.4.3 and MySQL Cluster NDB 7.0.4.
(Bug#43236)
When a data node process had been killed after allocating a node ID, but before making contact with any other data node processes, it was not possible to restart it due to a node ID allocation failure.
This issue could affect either ndbd or ndbmtd processes. (Bug#43224)
This regression was introduced by Bug#42973.
ndb_restore crashed when trying to restore a backup made to a MySQL Cluster running on a platform having different endianness from that on which the original backup was taken. (Bug#39540)
PID files for the data and management node daemons were not removed following a normal shutdown. (Bug#37225)
ndb_restore --print_data
did
not handle DECIMAL
columns
correctly.
(Bug#37171)
Invoking the management client START BACKUP
command from the system shell (for example, as ndb_mgm
-e "START BACKUP") did not work correctly, unless the
backup ID was included when the command was invoked.
Now, the backup ID is no longer required in such cases, and the
backup ID that is automatically generated is printed to stdout,
similar to how this is done when invoking START
BACKUP
within the management client.
(Bug#31754)
When aborting an operation involving both an insert and a delete, the insert and delete were aborted separately. This was because the transaction coordinator did not know that the operations affected the same row, and, in the case of a committed-read (tuple or index) scan, the abort of the insert was performed first, then the row was examined after the insert was aborted but before the delete was aborted. In some cases, this could leave the row in an inconsistent state. This could occur when a local checkpoint was performed during a backup. This issue did not affect primary key operations or scans that used locks (these are serialized).
After this fix, for ordered indexes, all operations that follow the operation to be aborted are now also aborted.
Disk Data:
When using multi-threaded data nodes, DROP
TABLE
statements on Disk Data tables could hang.
(Bug#43825)
Disk Data: This fix completes one that was made for this issue in MySQL Cluster NDB-7.0.4, which did not rectify the problem in all cases. (Bug#43632)
Cluster API:
If the largest offset of a
RecordSpecification
used for an
NdbRecord
object was for the NULL
bits (and thus not a
column), this offset was not taken into account when calculating
the size used for the RecordSpecification
.
This meant that the space for the NULL
bits
could be overwritten by key or other information.
(Bug#43891)
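A minimal sketch of such a layout follows (not from the original changelog; the table and column names are hypothetical, and the RecordSpecification field names and createRecord() signature should be checked against the NDB API headers for your release). The point is that the NULL bit for the nullable column is placed after the column data, so its offset is the largest one in the specification:

#include <NdbApi.hpp>

// Build an NdbRecord for a hypothetical table "t" with an INT primary key "pk"
// and a nullable INT column "val", placing the NULL bit after the column data.
NdbRecord *makeRecord(Ndb &ndb)
{
  NdbDictionary::Dictionary *dict = ndb.getDictionary();
  const NdbDictionary::Table *tab = dict->getTable("t");
  if (tab == NULL)
    return NULL;

  NdbDictionary::RecordSpecification spec[2];
  spec[0].column = tab->getColumn("pk");
  spec[0].offset = 0;                  // 4-byte integer key at start of row buffer
  spec[0].nullbit_byte_offset = 0;     // not nullable; ignored
  spec[0].nullbit_bit_offset = 0;

  spec[1].column = tab->getColumn("val");
  spec[1].offset = 4;                  // 4-byte nullable integer
  spec[1].nullbit_byte_offset = 8;     // NULL bit stored after all column data
  spec[1].nullbit_bit_offset = 0;

  return dict->createRecord(tab, spec, 2, sizeof(spec[0]));
}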
Cluster API: BIT columns created using the native NDB API format that were not created as nullable could still sometimes be overwritten, or cause other columns to be overwritten.
This issue did not affect tables having BIT columns created using the mysqld format (always used by MySQL Cluster SQL nodes). (Bug#43802)
Changes in MySQL Cluster NDB 7.0.4 (5.1.32-ndb-7.0.4)
Functionality added or changed:
Important Change: The default values for a number of MySQL Cluster configuration parameters relating to memory usage and buffering have changed. These parameters include RedoBuffer, LongMessageBuffer, BackupMemory, BackupDataBufferSize, BackupLogBufferSize, BackupWriteSize, BackupMaxWriteSize, SendBufferMemory (when applied to TCP transporters), and ReceiveBufferMemory.
For more information, see Section 17.3, “MySQL Cluster Configuration”.
When restoring from backup, ndb_restore now reports the last global checkpoint reached when the backup was taken. (Bug#37384)
Bugs fixed:
Cluster API: Partition pruning did not work correctly for queries involving multiple range scans.
As part of the fix for this issue, several improvements have been made in the NDB API, including the addition of a new NdbScanOperation::getPruned() method, a new variant of NdbIndexScanOperation::setBound(), and a new Ndb::PartitionSpec data structure. For more information about these changes, see NdbScanOperation::getPruned(), NdbIndexScanOperation::setBound(), and The PartitionSpec Structure. (Bug#37934)
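As a simple illustration (a sketch, not from the original changelog), an application can check whether NDB was able to prune a scan after the scan has been defined with bounds covering the table's distribution key:

#include <NdbApi.hpp>
#include <cstdio>

// Report whether an index scan was pruned to a single partition. The scan
// operation is assumed to have been created and its bounds set elsewhere.
void reportPruning(NdbScanOperation *scanOp)
{
  if (scanOp->getPruned())
    printf("scan is pruned to a single partition\n");
  else
    printf("scan is not pruned and will touch all partitions\n");
}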
TimeBetweenLocalCheckpoints
was measured from
the end of one local checkpoint to the beginning of the next,
rather than from the beginning of one LCP to the beginning of
the next. This meant that the time spent performing the LCP was
not taken into account when determining the
TimeBetweenLocalCheckpoints
interval, so that
LCPs were not started often enough, possibly causing data nodes
to run out of redo log space prematurely.
(Bug#43567)
The management server failed to start correctly in daemon mode. (Bug#43559)
Following a DROP NODEGROUP command, the output of SHOW in the ndb_mgm client was not updated to reflect the fact that the data nodes affected by this command were no longer part of a node group. (Bug#43413)
Using indexes containing variable-sized columns could lead to internal errors when the indexes were being built. (Bug#43226)
When using ndbmtd, multiple data node failures caused the remaining data nodes to fail as well. (Bug#43109)
It was not possible to add new data nodes to the cluster online using multi-threaded data node processes (ndbmtd). (Bug#43108)
Some queries using combinations of logical and comparison
operators on an indexed column in the WHERE
clause could fail with the error Got error 4541
'IndexBound has no bound information' from
NDBCLUSTER.
(Bug#42857)
Disk Data: When using multi-threaded data nodes, dropping a Disk Data table followed by a data node restart led to a crash. (Bug#43632)
Disk Data: When using ndbmtd, repeated high-volume inserts (on the order of 10000 rows inserted at a time) on a Disk Data table would eventually lead to a data node crash. (Bug#41398)
Disk Data: When a log file group had an undo log file whose size was too small, restarting data nodes failed with Read underflow errors.
As a result of this fix, the minimum allowed INITIAL_SIZE for an undo log file is now 1M (1 megabyte). (Bug#29574)
Cluster API:
The default NdbRecord
structures created by
NdbDictionary
could have overlapping null
bits and data fields.
(Bug#43590)
Cluster API:
When performing insert or write operations,
NdbRecord
allows key columns to be specified
in both the key record and in the attribute record. Only one key
column value for each key column should be sent to the NDB
kernel, but this was not guaranteed. This is now ensured as
follows: For insert and write operations, key column values are
taken from the key record; for scan takeover update operations,
key column values are taken from the attribute record.
(Bug#42238)
Cluster API: Ordered index scans using NdbRecord formerly expressed a BoundEQ range as separate lower and upper bounds, resulting in 2 copies of the column values being sent to the NDB kernel.
Now, when a range is specified by NdbScanOperation::setBound(), the passed pointers, key lengths, and inclusive bits are compared, and only one copy of the equal key columns is sent to the kernel. This makes such operations more efficient, as only half as much KeyInfo is now sent for a BoundEQ range as before. (Bug#38793)
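A sketch of a BoundEQ range using the NdbRecord interface follows (not from the original changelog; the key record and key buffer are assumed to have been prepared elsewhere, and the IndexBound field names should be verified against your NDB API version). The same buffer and key count are passed for both bounds, with both ends inclusive, which NDB now recognizes as an equality range:

#include <NdbApi.hpp>

// Apply an equality (BoundEQ) range on the first key column of an ordered
// index scan: identical low and high bounds, both inclusive.
int setEqualityBound(NdbIndexScanOperation *scanOp,
                     const NdbRecord *keyRecord,
                     const char *keyRow)
{
  NdbIndexScanOperation::IndexBound bound;
  bound.low_key = keyRow;
  bound.low_key_count = 1;       // number of key columns used in the bound
  bound.low_inclusive = true;
  bound.high_key = keyRow;       // same pointer and count as the low bound
  bound.high_key_count = 1;
  bound.high_inclusive = true;
  bound.range_no = 0;
  return scanOp->setBound(keyRecord, bound);
}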
Changes in MySQL Cluster NDB 6.4.3 (5.1.32-ndb-6.4.3)
Functionality added or changed:
A new data node configuration parameter
MaxLCPStartDelay
has been introduced to
facilitate parallel node recovery by causing a local checkpoint
to be delayed while recovering nodes are synchronizing data
dictionaries and other meta-information. For more information
about this parameter, see
Section 17.3.2.6, “Defining MySQL Cluster Data Nodes”.
(Bug#43053)
New options are introduced for ndb_restore for determining which tables or databases should be restored:
--include-tables
and
--include-databases
can be used to restore
specific tables or databases.
--exclude-tables
and
--exclude-databases
can be used to exclude
the specified tables or databases from being restored.
For more information about these options, see Section 17.4.17, “ndb_restore — Restore a MySQL Cluster Backup”. (Bug#40429)
Disk Data: It is now possible to specify default locations for Disk Data data files and undo log files, either together or separately, using the data node configuration parameters FileSystemPathDD, FileSystemPathDataFiles, and FileSystemPathUndoFiles. For information about these configuration parameters, see Disk Data filesystem parameters.
It is also now possible to specify a log file group, tablespace, or both, that is created when the cluster is started, using the InitialLogFileGroup and InitialTablespace data node configuration parameters. For information about these configuration parameters, see Disk Data object creation parameters.
Bugs fixed:
Performance: Updates of the SYSTAB_0 system table to obtain a unique identifier did not use transaction hints for tables having no primary key. In such cases the NDB kernel used a cache size of 1. This meant that each insert into a table not having a primary key required an update of the corresponding SYSTAB_0 entry, creating a potential performance bottleneck.
With this fix, inserts on NDB tables without primary keys can under some conditions be performed up to 100% faster than previously. (Bug#39268)
Important Note:
It is not possible in this release to install the
InnoDB
plugin if
InnoDB
support has been compiled
into mysqld.
(Bug#42610)
This regression was introduced by Bug#29263.
Packaging:
Packages for MySQL Cluster were missing the
libndbclient.so
and
libndbclient.a
files.
(Bug#42278)
Partitioning:
Executing ALTER TABLE ... REORGANIZE
PARTITION
on an
NDBCLUSTER
table having only one
partition caused mysqld to crash.
(Bug#41945)
See also Bug#40389.
Backup IDs greater than 2^31 were not handled correctly, causing negative values to be used in backup directory names and printouts. (Bug#43042)
When using ndbmtd, NDB kernel threads could
hang while trying to start the data nodes with
LockPagesInMainMemory
set to 1.
(Bug#43021)
When using multiple management servers and starting several API nodes (possibly including one or more SQL nodes) whose connectstrings listed the management servers in different order, it was possible for 2 API nodes to be assigned the same node ID. When this happened it was possible for an API node not to get fully connected, consequently producing a number of errors whose cause was not easily recognizable. (Bug#42973)
When using multi-threaded data nodes,
IndexMemory
,
MaxNoOfLocalOperations
, and
MaxNoOfLocalScans
were effectively multiplied
by the number of local query handlers in use by each
ndbmtd instance.
(Bug#42765)
See also Bug#42215.
ndb_error_reporter worked correctly only with GNU tar. (With other versions of tar, it produced empty archives.) (Bug#42753)
Triggers on NDBCLUSTER
tables
caused such tables to become locked.
(Bug#42751)
When performing more than 32 index or tuple scans on a single fragment, the scans could be left hanging. This caused unnecessary timeouts, and in addition could possibly lead to a hang of an LCP. (Bug#42559)
A data node failure that occurred between calls to NdbIndexScanOperation::readTuples(SF_OrderBy) and NdbTransaction::execute() was not correctly handled; a subsequent call to nextResult() caused a null pointer to be dereferenced, leading to a segfault in mysqld. (Bug#42545)
If the cluster configuration cache file was larger than 32K, the management server would not start. (Bug#42543)
Issuing SHOW GLOBAL STATUS LIKE 'NDB%'
before
mysqld had connected to the cluster caused a
segmentation fault.
(Bug#42458)
When using ndbmtd for all data nodes, repeated failures of one data node during DML operations caused other data nodes to fail. (Bug#42450)
Data node failures that occurred before all data nodes had connected to the cluster were not handled correctly, leading to additional data node failures. (Bug#42422)
When using multi-threaded data nodes, their
DataMemory
and IndexMemory
usage as reported was multiplied by the number of local query
handlers (worker threads), making it appear that much more
memory was being used than was actually the case.
(Bug#42215)
See also Bug#42765.
Given a MySQL Cluster containing no data (that is, whose data
nodes had all been started using --initial
, and
into which no data had yet been imported) and having an empty
backup directory, executing START BACKUP
with
a user-specified backup ID caused the data nodes to crash.
(Bug#41031)
In some cases, NDB
did not check
correctly whether tables had changed before trying to use the
query cache. This could result in a crash of the debug MySQL
server.
(Bug#40464)
Disk Data:
It was not possible to add an in-memory column online to a table
that used a table-level or column-level STORAGE
DISK
option. The same issue prevented ALTER
ONLINE TABLE ... REORGANIZE PARTITION
from working on
Disk Data tables.
(Bug#42549)
Disk Data:
Repeated insert and delete operations on disk-based tables could
lead to failures in the NDB Tablespace Manager
(TSMAN
kernel block).
(Bug#40344)
Disk Data: Creating a Disk Data tablespace with a very large extent size caused the data nodes to fail. The issue was observed when using extent sizes of 100 MB and larger. (Bug#39096)
Disk Data:
Trying to execute a CREATE LOGFILE
GROUP
statement using a value greater than
150M
for UNDO_BUFFER_SIZE
caused data nodes to crash.
As a result of this fix, the upper limit for
UNDO_BUFFER_SIZE
is now
600M
; attempting to set a higher value now
fails gracefully with an error.
(Bug#34102)
See also Bug#36702.
Disk Data: When attempting to create a tablespace that already existed, the error message returned was Table or index with given name already exists. (Bug#32662)
Disk Data:
Using a path or filename longer than 128 characters for Disk
Data undo log files and tablespace data files caused a number of
issues, including failures of CREATE
LOGFILE GROUP
, ALTER LOGFILE
GROUP
, CREATE
TABLESPACE
, and ALTER
TABLESPACE
statements, as well as crashes of
management nodes and data nodes.
With this fix, the maximum length for path and file names used for Disk Data undo log files and tablespace data files is now the same as the maximum for the operating system. (Bug#31769, Bug#31770, Bug#31772)
Disk Data: Attempting to perform a system restart of the cluster where there existed a logfile group without any undo log files caused the data nodes to crash.
While issuing a CREATE LOGFILE GROUP statement without an ADD UNDOFILE option fails with an error in the MySQL server, this situation could arise if an SQL node failed during the execution of a valid CREATE LOGFILE GROUP statement; it is also possible to create a logfile group without any undo log files using the NDB API.
Cluster API:
Some error messages from ndb_mgmd contained
newline (\n
) characters. This could break the
MGM API protocol, which uses the newline as a line separator.
(Bug#43104)
Cluster API: When using an ordered index scan without putting all key columns in the read mask, this invalid use of the NDB API went undetected, which resulted in the use of uninitialized memory. (Bug#42591)
Changes in MySQL Cluster NDB 6.4.2 (5.1.31-ndb-6.4.2)
Bugs fixed:
Connections using IPv6 were not handled correctly by mysqld. (Bug#42413)
When a cluster backup failed with Error 1304 (Node
node_id1
: Backup request from
node_id2
failed to start), no clear
reason for the failure was provided.
As part of this fix, MySQL Cluster now retries backups in the event of sequence errors. (Bug#42354)
See also Bug#22698.
Issuing SHOW ENGINE
NDBCLUSTER STATUS
on an SQL node before the management
server had connected to the cluster caused
mysqld to crash.
(Bug#42264)
When using ndbmtd, setting
MaxNoOfThreads
to a value higher than the
actual number of cores available and with insufficient
SharedGlobalMemory
caused the data nodes to
crash.
The fix for this issue changes the behavior of
ndbmtd such that its internal job buffers no
longer rely on SharedGlobalMemory
.
(Bug#42254)
Changes in MySQL Cluster NDB 6.4.1 (5.1.31-ndb-6.4.1)
Functionality added or changed:
Important Change:
Formerly, when the management server failed to create a
transporter for a data node connection,
net_write_timeout
seconds
elapsed before the data node was actually allowed to disconnect.
Now in such cases the disconnection occurs immediately.
(Bug#41965)
See also Bug#41713.
Formerly, when using MySQL Cluster Replication, records for “empty” epochs — that is, epochs in which no changes to NDBCLUSTER data or tables took place — were inserted into the ndb_apply_status and ndb_binlog_index tables on the slave even when --log-slave-updates was disabled. Beginning with MySQL Cluster NDB 6.2.16 and MySQL Cluster NDB 6.3.13 this was changed so that these “empty” epochs were no longer logged. However, it is now possible to re-enable the older behavior (and cause “empty” epochs to be logged) by using the --ndb-log-empty-epochs option. For more information, see Section 16.1.3.3, “Replication Slave Options and Variables”.
See also Bug#37472.
Bugs fixed:
A maximum of 11 TUP
scans were allowed in
parallel.
(Bug#42084)
The management server could hang after attempting to halt it
with the STOP
command in the management
client.
(Bug#42056)
See also Bug#40922.
When using ndbmtd, one thread could flood another thread, which would cause the system to stop with a job buffer full condition (currently implemented as an abort). This could be caused by committing or aborting a large transaction (50000 rows or more) on a single data node running ndbmtd. To prevent this from happening, the number of signals that can be accepted by the system threads is now calculated before executing them, and they are executed only if sufficient space is found. (Bug#42052)
MySQL Cluster would not compile when using
libwrap
. This issue was known to occur only
in MySQL Cluster NDB 6.4.0.
(Bug#41918)
Trying to execute an
ALTER ONLINE TABLE
... ADD COLUMN
statement while inserting rows into the
table caused mysqld to crash.
(Bug#41905)
When a data node connects to the management server, the node sends its node ID and transporter type; the management server then verifies that there is a transporter set up for that node and that it is in the correct state, and then sends back an acknowledgement to the connecting node. If the transporter was not in the correct state, no reply was sent back to the connecting node, which would then hang until a read timeout occurred (60 seconds). Now, if the transporter is not in the correct state, the management server acknowledges this promptly, and the node immediately disconnects. (Bug#41713)
See also Bug#41965.
Issuing EXIT
in the management client
sometimes caused the client to hang.
(Bug#40922)
In the event that a MySQL Cluster backup failed due to file permissions issues, conflicting reports were issued in the management client. (Bug#34526)
If all data nodes were shut down, MySQL clients were unable to
access NDBCLUSTER
tables and data
even after the data nodes were restarted, unless the MySQL
clients themselves were restarted.
(Bug#33626)
Changes in MySQL Cluster NDB 6.4.0 (5.1.30-ndb-6.4.0)
Functionality added or changed:
Important Change: MySQL Cluster now caches its configuration data. This means that, by default, the management server only reads the global configuration file (usually named config.ini) the first time that it is started, and does not automatically re-read this file when restarted. This behavior can be controlled using new management server options (--config-dir, --initial, and --reload) that have been added for this purpose. For more information, see Section 17.3.2, “MySQL Cluster Configuration Files”, and Section 17.4.4, “ndb_mgmd — The MySQL Cluster Management Server Daemon”.
It is now possible while in Single User Mode to restart all data
nodes using ALL RESTART
in the management
client. Restarting of individual nodes while in Single User Mode
remains disallowed.
(Bug#31056)
It is now possible to add data nodes to a MySQL Cluster online — that is, to a running MySQL Cluster without shutting it down.
For information about the procedure for adding data nodes online, see Section 17.5.11, “Adding MySQL Cluster Data Nodes Online”.
A multi-threaded version of the MySQL Cluster data node daemon is now available. The multi-threaded ndbmtd binary is similar to ndbd and functions in much the same way, but is intended for use on machines with multiple CPU cores.
For more information, see Section 17.4.3, “ndbmtd — The MySQL Cluster Data Node Daemon (Multi-Threaded)”.
It is now possible when performing a cluster backup to determine
whether the backup matches the state of the data when the backup
began or when it ended, using the new START
BACKUP
options SNAPSHOTSTART
and
SNAPSHOTEND
in the management client. See
Section 17.5.3.2, “Using The MySQL Cluster Management Client to Create a Backup”,
for more information.
Bugs fixed:
API nodes disconnected too aggressively from the cluster when data nodes were being restarted. This could sometimes lead to the API node being unable to access the cluster at all during a rolling restart. (Bug#41462)
When long signal buffer exhaustion in the ndbd process resulted in a signal being dropped, the usual handling mechanism did not take fragmented signals into account. This could result in a crash of the data node because the fragmented signal handling mechanism was not able to work with the missing fragments. (Bug#39235)
The failure of a master node during a DDL operation caused the cluster to be unavailable for further DDL operations until it was restarted; failures of nonmaster nodes during DDL operations caused the cluster to become completely inaccessible. (Bug#36718)
Status messages shown in the management client when restarting a management node were inappropriate and misleading. Now, when restarting a management node, the messages displayed are as follows, where node_id is the management node's node ID:
ndb_mgm> node_id RESTART
Shutting down MGM node node_id for restart
Node node_id is being restarted

ndb_mgm>
A data node failure when NoOfReplicas
was
greater than 2 caused all cluster SQL nodes to crash.
(Bug#18621)