This section contains unified change history highlights for all MySQL Cluster releases based on version 6.1 of the NDBCLUSTER storage engine, through MySQL Cluster NDB 6.1.23 (5.1.15-ndb-6.1.23). Included are all changelog entries in the categories MySQL Cluster, Disk Data, and Cluster API.
For an overview of features that were added in MySQL Cluster NDB 6.1, see Section 17.1.4.2, “MySQL Cluster Development in MySQL Cluster NDB 6.1”.
MySQL Cluster NDB 6.1 is no longer being developed or maintained, and the information in this section is of historical interest only. If you are using MySQL Cluster NDB 6.1, you should upgrade as soon as possible to the most recent version of MySQL Cluster NDB 6.2 or a later MySQL Cluster release series.
Changes in MySQL Cluster NDB 6.1.23 (5.1.15-ndb-6.1.23)
Bugs fixed:
The NDB storage engine code was not safe for strict-alias optimization in gcc 4.2.1. (Bug#31761)
Changes in MySQL Cluster NDB 6.1.22 (5.1.15-ndb-6.1.22)
Bugs fixed:
It was possible in some cases for a node group to be “lost” due to missed local checkpoints following a system restart. (Bug#31525)
Changes in MySQL Cluster NDB 6.1.21 (5.1.15-ndb-6.1.21)
Bugs fixed:
Changes in MySQL Cluster NDB 6.1.19 (5.1.15-ndb-6.1.19)
Functionality added or changed:
Whenever a TCP send buffer is over 80% full, temporary error 1218 (Send Buffers overloaded in NDB kernel) is now returned. See SendBufferMemory for more information.
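For reference, SendBufferMemory is a per-connection TCP parameter in config.ini; a minimal sketch, with an illustrative (not recommended) value:

    [TCP DEFAULT]
    # Error 1218 is returned once a send buffer passes 80% of this size.
    SendBufferMemory=2M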
Changes in MySQL Cluster NDB 6.1.18 (5.1.15-ndb-6.1.18)
Bugs fixed:
When restarting a data node, queries could hang during that node's start phase 5, and continue only after the node had entered phase 6. (Bug#29364)
Disk Data: Disk data meta-information that existed in ndbd might not be visible to mysqld. (Bug#28720)
Disk Data: The number of free extents was incorrectly reported for some tablespaces. (Bug#28642)
Changes in MySQL Cluster NDB 6.1.17 (5.1.15-ndb-6.1.17)
Bugs fixed:
Replica redo logs were inconsistently handled during a system restart. (Bug#29354)
Changes in MySQL Cluster NDB 6.1.16 (5.1.15-ndb-6.1.16)
Bugs fixed:
When a node failed to respond to a COPY_GCI signal as part of a global checkpoint, the master node was killed instead of the node that actually failed. (Bug#29331)
An invalid comparison made during REDO validation could lead to an Error while reading REDO log condition. (Bug#29118)
The wrong data pages were sometimes invalidated following a global checkpoint. (Bug#29067)
If at least 2 files were involved in REDO invalidation, then page 0 of file 0 was not updated and so pointed to an invalid part of the redo log. (Bug#29057)
Disk Data: When dropping a page, the stack's bottom entry could sometimes be left “cold” rather than “hot”, violating the rules for stack pruning. (Bug#29176)
Changes in MySQL Cluster NDB 6.1.15 (5.1.15-ndb-6.1.15)
Bugs fixed:
Memory corruption could occur due to a problem in the DBTUP kernel block. (Bug#29229)
Changes in MySQL Cluster NDB 6.1.14 (5.1.15-ndb-6.1.14)
Bugs fixed:
In the event that two data nodes in the same node group and participating in a GCP crashed before they had written their respective P0.sysfile files, QMGR could refuse to start, issuing an invalid Insufficient nodes for restart error instead. (Bug#29167)
Changes in MySQL Cluster NDB 6.1.13 (5.1.15-ndb-6.1.13)
Bugs fixed:
Cluster API: NdbApi.hpp depended on ndb_global.h, which was not actually installed, causing the compilation of programs that used NdbApi.hpp to fail. (Bug#35853)
Changes in MySQL Cluster NDB 6.1.12 (5.1.15-ndb-6.1.12)
Functionality added or changed:
Bugs fixed:
It is now possible to set the maximum size of the allocation unit for table memory using the MaxAllocate configuration parameter. (Bug#29044)
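A minimal config.ini sketch for this parameter, assuming it is set in the data node defaults section (the value shown is illustrative only):

    [NDBD DEFAULT]
    # Maximum size of the allocation unit for table memory.
    MaxAllocate=32M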
Changes in MySQL Cluster NDB 6.1.11 (5.1.15-ndb-6.1.11)
Functionality added or changed:
Important Change: The TimeBetweenWatchdogCheckInitial configuration parameter was added to allow setting of a separate watchdog timeout for memory allocation during startup of the data nodes. See Section 17.3.2.6, “Defining MySQL Cluster Data Nodes”, for more information. (Bug#28899)
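A hedged config.ini sketch; the 60000 ms value is illustrative, not a recommendation:

    [NDBD DEFAULT]
    # Watchdog timeout, in milliseconds, applied only while data
    # nodes allocate memory during startup.
    TimeBetweenWatchdogCheckInitial=60000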
A new configuration parameter ODirect causes NDB to attempt using O_DIRECT writes for LCP, backups, and redo logs, often lowering CPU usage.
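As a sketch, the parameter is a boolean set for the data nodes in config.ini:

    [NDBD DEFAULT]
    # Attempt O_DIRECT writes for LCPs, backups, and redo logs.
    ODirect=1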
Bugs fixed:
Having large amounts of memory locked caused swapping to disk. (Bug#28751)
LCP files were not removed following an initial system restart. (Bug#28726)
Disk Data: Repeated INSERT and DELETE operations on a Disk Data table having one or more large VARCHAR columns could cause data nodes to fail. (Bug#20612)
Changes in MySQL Cluster NDB 6.1.10 (5.1.15-ndb-6.1.10)
Functionality added or changed:
A new times printout was added in the ndbd watchdog thread.
Some unneeded printouts in the ndbd out file were removed.
Bugs fixed:
A regression in the heartbeat monitoring code could lead to node failure under high load. This issue affected MySQL 5.1.19 and MySQL Cluster NDB 6.1.10 only. (Bug#28783)
A corrupt schema file could cause a File already open error. (Bug#28770)
Setting InitialNoOpenFiles equal to MaxNoOfOpenFiles caused an error. This was because the actual value of MaxNoOfOpenFiles as used by the cluster was offset by 1 from the value set in config.ini. (Bug#28749)
A race condition could result when nonmaster nodes (in addition to the master node) tried to update active status due to a local checkpoint (that is, between NODE_FAILREP and COPY_GCIREQ events). Now only the master updates the active status. (Bug#28717)
A fast global checkpoint under high load with high usage of the redo buffer caused data nodes to fail. (Bug#28653)
Disk Data: When loading data into a cluster following a version upgrade, the data nodes could forcibly shut down due to page and buffer management failures (that is, ndbrequire failures in PGMAN). (Bug#28525)
Changes in MySQL Cluster NDB 6.1.9 (5.1.15-ndb-6.1.9)
Bugs fixed:
Changes in MySQL Cluster NDB 6.1.8 (5.1.15-ndb-6.1.8)
Bugs fixed:
Local checkpoint files relating to dropped NDB tables were not removed. (Bug#28348)
Repeated insertion of data generated by mysqldump into NDB tables could eventually lead to failure of the cluster. (Bug#27437)
Disk Data: Extremely large inserts into Disk Data tables could lead to data node failure in some circumstances. (Bug#27942)
Cluster API: In a multi-operation transaction, a delete operation followed by the insertion of an implicit NULL failed to overwrite an existing value. (Bug#20535)
Changes in MySQL Cluster NDB 6.1.7 (5.1.15-ndb-6.1.7)
Functionality added or changed:
Cluster Replication: Incompatible Change: The schema for the ndb_apply_status table in the mysql system database has changed. When upgrading to this release from a previous MySQL Cluster NDB 6.x or mainline MySQL 5.1 release, you must drop the mysql.ndb_apply_status table, then restart the server in order for the table to be re-created with the new schema.
See Section 17.6.4, “MySQL Cluster Replication Schema and Tables”, for additional information.
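The upgrade step described above amounts to something like the following, run before restarting the server (a sketch; adapt it to your own upgrade procedure):

    DROP TABLE mysql.ndb_apply_status;
    -- Restart mysqld; the table is re-created automatically
    -- with the new schema.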
Bugs fixed:
The cluster waited 30 seconds instead of 30 milliseconds before reading table statistics. (Bug#28093)
Under certain rare circumstances, ndbd could get caught in an infinite loop when one transaction took a read lock and then a second transaction attempted to obtain a write lock on the same tuple in the lock queue. (Bug#28073)
Under some circumstances, a node restart could fail to update the Global Checkpoint Index (GCI). (Bug#28023)
An INSERT followed by a DELETE on the same NDB table caused a memory leak. (Bug#27756)
This regression was introduced by Bug#20612.
Under certain rare circumstances, performing a DROP TABLE or TRUNCATE TABLE on an NDB table could cause a node failure or forced cluster shutdown. (Bug#27581)
Memory usage of a mysqld process grew even while idle. (Bug#27560)
Performing a delete followed by an insert during a local checkpoint could cause a Rowid already allocated error. (Bug#27205)
Cluster Replication: Disk Data: An issue with replication of Disk Data tables could in some cases lead to node failure. (Bug#28161)
Disk Data: Changes to a Disk Data table made as part of a transaction could not be seen by the client performing the changes until the transaction had been committed. (Bug#27757)
Disk Data: When restarting a data node following the creation of a large number of Disk Data objects (approximately 200 such objects), the cluster could not assign a node ID to the restarting node. (Bug#25741)
Disk Data: Changing a column specification or issuing a TRUNCATE TABLE statement on a Disk Data table caused the table to become an in-memory table.
This fix supersedes an incomplete fix that was made for this issue in MySQL 5.1.15. (Bug#24667, Bug#25296)
Cluster API: An issue with the way in which the NdbDictionary::Dictionary::listEvents() method freed resources could sometimes lead to memory corruption. (Bug#27663)
Changes in MySQL Cluster NDB 6.1.6 (5.1.15-ndb-6.1.6)
Functionality added or changed:
Cluster Replication: Incompatible Change: The schema for the ndb_apply_status table in the mysql system database has changed. When upgrading to this release from a previous MySQL Cluster NDB 6.x or mainline MySQL 5.1 release, you must drop the mysql.ndb_apply_status table, then restart the server in order for the table to be re-created with the new schema.
See Section 17.6.4, “MySQL Cluster Replication Schema and Tables”, for additional information.
Bugs fixed:
A data node failing while another data node was restarting could leave the cluster in an inconsistent state. In certain rare cases, this could lead to a race condition and the eventual forced shutdown of the cluster. (Bug#27466)
It was not possible to set LockPagesInMainMemory equal to 0. (Bug#27291)
A race condition could sometimes occur if the node acting as master failed while node IDs were still being allocated during startup. (Bug#27286)
When a data node was taking over as the master node, a race condition could sometimes occur as the node was assuming responsibility for handling of global checkpoints. (Bug#27283)
mysqld could crash shortly after a data node failure following certain DML operations. (Bug#27169)
The same failed request from an API node could be handled by the cluster multiple times, resulting in reduced performance. (Bug#27087)
The failure of a data node while restarting could cause other data nodes to hang or crash. (Bug#27003)
mysqld processes would sometimes crash under high load.
This fix improves on and replaces a fix for this bug that was made in MySQL Cluster NDB 6.1.5.
Disk Data: DROP INDEX on a Disk Data table did not always move data from memory into the tablespace. (Bug#25877)
Cluster API: An issue with the way in which the NdbDictionary::Dictionary::listEvents() method freed resources could sometimes lead to memory corruption. (Bug#27663)
Cluster API: A delete operation using a scan followed by an insert using a scan could cause a data node to fail. (Bug#27203)
Changes in MySQL Cluster NDB 6.1.5 (5.1.15-ndb-6.1.5)
Functionality added or changed:
Cluster Replication: Incompatible Change: The schema for the ndb_apply_status table in the mysql system database has changed. When upgrading to this release from a previous MySQL Cluster NDB 6.x or mainline MySQL 5.1 release, you must drop the mysql.ndb_apply_status table, then restart the server in order for the table to be re-created with the new schema.
See Section 17.6.4, “MySQL Cluster Replication Schema and Tables”, for additional information.
Bugs fixed:
Creating a table on one SQL node while in single user mode caused other SQL nodes to crash. (Bug#26997)
mysqld processes would sometimes crash under high load.
This fix was reverted in MySQL Cluster NDB 6.1.6.
An infinite loop in an internal logging function could cause trace logs to fill up with Unknown Signal type error messages and thus grow to unreasonable sizes. (Bug#26720)
Disk Data: When creating a log file group, setting INITIAL_SIZE to less than UNDO_BUFFER_SIZE caused data nodes to crash. (Bug#25743)
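For reference, a log file group definition that satisfies the constraint implied by this fix (INITIAL_SIZE no smaller than UNDO_BUFFER_SIZE); the name and sizes are illustrative only:

    CREATE LOGFILE GROUP lg1
        ADD UNDOFILE 'undo1.log'
        INITIAL_SIZE 128M
        UNDO_BUFFER_SIZE 8M
        ENGINE NDBCLUSTER;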
Changes in MySQL Cluster NDB 6.1.4 (5.1.15-ndb-6.1.4)
Functionality added or changed:
An ndb_wait_connected system variable has been added for mysqld. It causes mysqld to wait a specified amount of time to be connected to the cluster before accepting client connections. For more information, see Section 17.3.4.3, “MySQL Cluster System Variables”.
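A minimal my.cnf sketch; the 30-second value is illustrative:

    [mysqld]
    # Wait up to 30 seconds for the connection to the cluster
    # before accepting client connections.
    ndb-wait-connected=30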
Cluster API: It is now possible to specify the transaction coordinator when starting a transaction. See Ndb::startTransaction(), for more information.
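A minimal C++ sketch of the hinting variant, assuming myNdb, myTable, keyData, and keyLen are already prepared (all four names are hypothetical):

    // Hint that the transaction coordinator should run on the node
    // holding the primary replica of the row identified by keyData.
    NdbTransaction *trans = myNdb->startTransaction(myTable, keyData, keyLen);
    if (trans == NULL)
    {
      // Inspect myNdb->getNdbError() and handle the failure here.
    }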
Cluster API: It is now possible to iterate over all existing NDB objects using three new methods of the Ndb_cluster_connection class: lock_ndb_objects(), get_next_ndb_object(), and unlock_ndb_objects(). For more information about these methods and their use, see ndb_cluster_connection::get_next_ndb_object(), in the MySQL Cluster API Guide.
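A sketch of the documented usage pattern, assuming conn is an established Ndb_cluster_connection:

    // The list of Ndb objects must stay locked for the traversal;
    // passing 0 to get_next_ndb_object() returns the first object.
    conn->lock_ndb_objects();
    for (const Ndb *p = conn->get_next_ndb_object(0);
         p != 0;
         p = conn->get_next_ndb_object(p))
    {
      // Inspect p here (read-only).
    }
    conn->unlock_ndb_objects();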
Bugs fixed:
Changes in MySQL Cluster NDB 6.1.3 (5.1.15-ndb-6.1.3)
Functionality added or changed:
The ndbd_redo_log_reader utility is now part of the default build. For more information, see Section 17.4.16, “ndbd_redo_log_reader — Check and Print Content of Cluster Redo Log”.
The ndb_show_tables utility now displays information about table events. See Section 17.4.20, “ndb_show_tables — Display List of NDB Tables”, for more information.
Cluster API: A new listEvents() method has been added to the Dictionary class. See Dictionary::listEvents(), for more information.
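A short C++ sketch, assuming myDict was obtained from Ndb::getDictionary() (the variable name is hypothetical):

    // List all events defined in the cluster; on success, the List
    // holds count entries in its elements array.
    NdbDictionary::Dictionary::List list;
    if (myDict->listEvents(list) == 0)
    {
      for (unsigned i = 0; i < list.count; i++)
        printf("event: %s\n", list.elements[i].name);
    }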
Bugs fixed:
An invalid pointer was returned following a FSCLOSECONF signal when accessing the REDO logs during a node restart or system restart. (Bug#26515)
The InvalidUndoBufferSize error used the same error code (763) as the IncompatibleVersions error. InvalidUndoBufferSize now uses its own error code (779). (Bug#26490)
The failure of a data node when restarting it with --initial could lead to failures of subsequent data node restarts. (Bug#26481)
Takeover for local checkpointing due to multiple failures of master nodes was sometimes incorrectly handled. (Bug#26457)
The LockPagesInMainMemory parameter was not read until after distributed communication had already started between cluster nodes. When the value of this parameter was 1, this could sometimes result in data node failure due to missed heartbeats. (Bug#26454)
Under some circumstances, following the restart of a management node, all data nodes would connect to it normally, but some of them subsequently failed to log any events to the management node. (Bug#26293)
No appropriate error message was provided when there was insufficient REDO log file space for the cluster to start. (Bug#25801)
A memory allocation failure in SUMA (the cluster Subscription Manager) could cause the cluster to crash. (Bug#25239)
The message Error 0 in readAutoIncrementValue(): no Error was written to the error log whenever SHOW TABLE STATUS was performed on a Cluster table that did not have an AUTO_INCREMENT column.
This improves on and supersedes an earlier fix that was made for this issue in MySQL 5.1.12.
Disk Data: A memory overflow could occur with tables having a large amount of data stored on disk, or with queries using a very high degree of parallelism on Disk Data tables. (Bug#26514)
Disk Data: Use of a tablespace whose INITIAL_SIZE was greater than 1 GB could cause the cluster to crash. (Bug#26487)
Changes in MySQL Cluster NDB 6.1.2 (5.1.15-ndb-6.1.2)
Bugs fixed:
Using node IDs greater than 48 could sometimes lead to incorrect memory access and a subsequent forced shutdown of the cluster. (Bug#26267)
Changes in MySQL Cluster NDB 6.1.1 (5.1.15-ndb-6.1.1)
Functionality added or changed:
A single cluster can now support up to 255 API nodes, including MySQL servers acting as SQL nodes. See Section 17.1.5.8, “Issues Exclusive to MySQL Cluster”, for more information.
Bugs fixed:
A memory leak could cause problems during a node or cluster shutdown or failure. (Bug#25997)
Cluster API: Disk Data: A delete and a read performed in the same operation could cause one or more data nodes to crash. This could occur when the operation affected more than 5 columns concurrently, or when one or more of the columns was of the VARCHAR type and was stored on disk. (Bug#25794)
Changes in MySQL Cluster NDB 6.1.0 (5.1.14-ndb-6.1.0)
Functionality added or changed:
A new configuration parameter MemReportFrequency allows for additional control of data node memory usage. Previously, only warnings at predetermined percentages of memory allocation were given; setting this parameter allows for that behavior to be overridden. For more information, see Section 17.3.2.6, “Defining MySQL Cluster Data Nodes”.
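A hedged config.ini sketch; the interval shown is illustrative:

    [NDBD DEFAULT]
    # Log a data node memory usage report every 30 seconds;
    # 0 (the default) keeps only the threshold-based warnings.
    MemReportFrequency=30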
Bugs fixed:
When a data node was shut down using the management client STOP command, a connection event (NDB_LE_Connected) was logged instead of a disconnection event (NDB_LE_Disconnected). (Bug#22773)
SELECT statements with a BLOB or TEXT column in the selected column list and a WHERE condition including a primary key lookup on a VARCHAR primary key produced empty result sets. (Bug#19956)
Disk Data: MEDIUMTEXT columns of Disk Data tables were stored in memory rather than on disk, even if the columns were not indexed. (Bug#25001)
Disk Data: Performing a node restart with a newly dropped Disk Data table could lead to failure of the node during the restart. (Bug#24917)
Disk Data: When restoring from backup a cluster containing any Disk Data tables with hidden primary keys, a node failure resulted which could lead to a crash of the cluster. (Bug#24166)
Disk Data: Repeated CREATE, DROP, or TRUNCATE TABLE in various combinations with system restarts between these operations could lead to the eventual failure of a system restart. (Bug#21948)
Disk Data: Extents that should have been available for re-use following a DROP TABLE operation were not actually made available again until after the cluster had performed a local checkpoint. (Bug#17605)
Cluster API: Invoking the NdbTransaction::execute() method using execution type Commit and abort option AO_IgnoreError could lead to a crash of the transaction coordinator (DBTC). (Bug#25090)
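The call pattern named in this entry looks roughly as follows; treat it as a sketch, since the class in which the AbortOption enum is defined has varied across NDB API versions:

    // Commit, ignoring per-operation errors rather than aborting;
    // this is the combination that could crash DBTC before the fix.
    int ret = trans->execute(NdbTransaction::Commit,
                             NdbTransaction::AO_IgnoreError);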
Cluster API: A unique index lookup on a nonexistent tuple could lead to a data node timeout (error 4012). (Bug#25059)
Cluster API: When using the NdbTransaction::execute() method, a very long timeout (greater than 5 minutes) could result if the last data node being polled was disconnected from the cluster. (Bug#24949)
Cluster API: Due to an error in the computation of table fragment arrays, some transactions were not executed from the correct starting point. (Bug#24914)