This section contains unified change history highlights for all
MySQL Cluster releases based on version 6.2 of the
NDBCLUSTER
storage engine through
MySQL Cluster NDB 6.2.19. Included are all
changelog entries in the categories MySQL
Cluster, Disk Data, and
Cluster API.
For an overview of features that were added in MySQL Cluster NDB 6.2, see Section 17.1.4.3, “MySQL Cluster Development in MySQL Cluster NDB 6.2”.
Changes in MySQL Cluster NDB 6.2.18 (5.1.34-ndb-6.2.18)
Bugs fixed:
Important Change: Partitioning:
User-defined partitioning of an
NDBCLUSTER
table without any
primary key sometimes failed, and could cause
mysqld to crash.
Now, if you wish to create an
NDBCLUSTER
table with user-defined
partitioning, the table must have an explicit primary key, and
all columns listed in the partitioning expression must be part
of the primary key. The hidden primary key used by the
NDBCLUSTER
storage engine is not
sufficient for this purpose. However, if the list of columns is
empty (that is, the table is defined using PARTITION BY
[LINEAR] KEY()), then no explicit primary key is
required.
This change does not affect the partitioning of tables using any
storage engine other than
NDBCLUSTER
.
(Bug#40709)
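As a sketch of the new requirement (the table and column names here are hypothetical):

```sql
-- Now rejected: the partitioning column is not part of an explicit primary key
CREATE TABLE t1 (
    id INT,
    created DATE
) ENGINE=NDBCLUSTER
PARTITION BY HASH (id);

-- Allowed: every column in the partitioning expression belongs to the primary key
CREATE TABLE t2 (
    id INT,
    created DATE,
    PRIMARY KEY (id, created)
) ENGINE=NDBCLUSTER
PARTITION BY HASH (id);

-- Also allowed: an empty column list with [LINEAR] KEY() needs no explicit
-- primary key; the hidden primary key is used instead
CREATE TABLE t3 (
    id INT,
    created DATE
) ENGINE=NDBCLUSTER
PARTITION BY KEY ();
```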
An internal NDB API buffer was not properly initialized. (Bug#44977)
When a data node had written its GCI marker to the first page of a megabyte, and that node was later killed during restart after having processed that page (marker) but before completing an LCP, the data node could fail with filesystem errors. (Bug#44952)
Inspection of the code revealed that several assignment
operators (=) were used in place of
comparison operators (==) in
DbdihMain.cpp.
(Bug#44567)
See also Bug#44570.
It was possible for NDB API applications to insert corrupt data into the database, which could subsequently lead to data node crashes. Now, stricter checking is enforced on input data for inserts and updates. (Bug#44132)
TransactionDeadlockDetectionTimeout
values
less than 100 were treated as 100. This could cause scans to
time out unexpectedly.
(Bug#44099)
The file ndberror.c
contained a C++-style
comment, which caused builds to fail with some C compilers.
(Bug#44036)
A race condition could occur when a data node failed to restart just before being included in the next global checkpoint. This could cause other data nodes to fail. (Bug#43888)
When trying to use a data node with an older version of the management server, the data node crashed on startup. (Bug#43699)
Using indexes containing variable-sized columns could lead to internal errors when the indexes were being built. (Bug#43226)
In some cases, data node restarts during a system restart could fail due to insufficient redo log space. (Bug#43156)
Some queries using combinations of logical and comparison
operators on an indexed column in the WHERE
clause could fail with the error Got error 4541
'IndexBound has no bound information' from
NDBCLUSTER.
(Bug#42857)
ndb_restore --print_data
did
not handle DECIMAL
columns
correctly.
(Bug#37171)
The output of ndbd --help
did not provide clear information about the program's
--initial
and --initial-start
options.
(Bug#28905)
It was theoretically possible for the value of a nonexistent
column to be read as NULL
, rather than
causing an error.
(Bug#27843)
When aborting an operation involving both an insert and a delete, the insert and delete were aborted separately. This was because the transaction coordinator did not know that the operations affected the same row, and, in the case of a committed-read (tuple or index) scan, the abort of the insert was performed first, then the row was examined after the insert was aborted but before the delete was aborted. In some cases, this would leave the row in an inconsistent state. This could occur when a local checkpoint was performed during a backup. This issue did not affect primary key operations or scans that used locks (these are serialized).
After this fix, for ordered indexes, all operations that follow the operation to be aborted are now also aborted.
Disk Data: Partitioning:
An NDBCLUSTER
table created with a
very large value for the MAX_ROWS
option
could — if this table was dropped and a new table with
fewer partitions, but having the same table ID, was created
— cause ndbd to crash when performing a
system restart. This was because the server attempted to examine
each partition whether or not it actually existed.
(Bug#45154)
Disk Data: During a checkpoint, restore points are created for both the on-disk and in-memory parts of a Disk Data table. Under certain rare conditions, the in-memory restore point could include or exclude a row that should have been in the snapshot. This would later lead to a crash during or following recovery. (Bug#41915)
See also Bug#47832.
Disk Data: When a log file group had an undo log file whose size was too small, restarting data nodes failed with Read underflow errors.
As a result of this fix, the minimum allowed
INITIAL_SIZE
for an undo log file is now
1M
(1 megabyte).
(Bug#29574)
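A minimal example of creating an undo log file at the new minimum size (the group and file names are hypothetical):

```sql
CREATE LOGFILE GROUP lg1
    ADD UNDOFILE 'undo1.log'
    -- An INITIAL_SIZE below 1M is now rejected
    INITIAL_SIZE 1M
    ENGINE NDBCLUSTER;
```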
Disk Data: This fix supersedes and improves on an earlier fix made for this bug in MySQL 5.1.18. (Bug#24521)
Cluster API:
If the largest offset of a
RecordSpecification
used for an
NdbRecord
object was for the NULL
bits (and thus not a
column), this offset was not taken into account when calculating
the size used for the RecordSpecification
.
This meant that the space for the NULL
bits
could be overwritten by key or other information.
(Bug#43891)
Cluster API:
The default NdbRecord
structures created by
NdbDictionary
could have overlapping null
bits and data fields.
(Bug#43590)
Cluster API:
When performing insert or write operations,
NdbRecord
allows key columns to be specified
in both the key record and in the attribute record. Only one key
column value for each key column should be sent to the NDB
kernel, but this was not guaranteed. This is now ensured as
follows: For insert and write operations, key column values are
taken from the key record; for scan takeover update operations,
key column values are taken from the attribute record.
(Bug#42238)
Cluster API:
Ordered index scans using NdbRecord
formerly
expressed a BoundEQ
range as separate lower
and upper bounds, resulting in 2 copies of the column values
being sent to the NDB kernel.
Now, when a range is specified by
NdbScanOperation::setBound()
, the passed
pointers, key lengths, and inclusive bits are compared, and only
one copy of the equal key columns is sent to the kernel. This
makes such operations more efficient, as half as much
KeyInfo
is now sent for a
BoundEQ
range as before.
(Bug#38793)
Changes in MySQL Cluster NDB 6.2.17 (5.1.32-ndb-6.2.17)
Functionality added or changed:
Important Change:
Formerly, when the management server failed to create a
transporter for a data node connection,
net_write_timeout
seconds
elapsed before the data node was actually allowed to disconnect.
Now in such cases the disconnection occurs immediately.
(Bug#41965)
See also Bug#41713.
Disk Data:
It is now possible to specify default locations for Disk Data
data files and undo log files, either together or separately,
using the data node configuration parameters
FileSystemPathDD
,
FileSystemPathDataFiles
, and
FileSystemPathUndoFiles
. For information
about these configuration parameters, see
Disk
Data filesystem parameters.
It is also now possible to specify a log file group, tablespace,
or both, that is created when the cluster is started, using the
InitialLogFileGroup
and
InitialTablespace
data node configuration
parameters. For information about these configuration
parameters, see
Disk
Data object creation parameters.
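A sketch of a config.ini fragment using these parameters (all paths, names, and sizes shown are illustrative):

```ini
[ndbd default]
# Default locations for Disk Data files
FileSystemPathDD=/data/cluster/dd
FileSystemPathDataFiles=/data/cluster/dd/data
FileSystemPathUndoFiles=/data/cluster/dd/undo

# Create a log file group and a tablespace at initial cluster start
InitialLogFileGroup=undo_buffer_size=64M;undo1.log:250M
InitialTablespace=name=ts1;extent_size=1M;data1.dat:256M
```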
Bugs fixed:
Performance:
Updates of the SYSTAB_0
system table to
obtain a unique identifier did not use transaction hints for
tables having no primary key. In such cases the NDB kernel used
a cache size of 1. This meant that each insert into a table not
having a primary key required an update of the corresponding
SYSTAB_0
entry, creating a potential
performance bottleneck.
With this fix, inserts on NDB
tables without
primary keys can under some conditions be performed up to
100% faster than previously.
(Bug#39268)
Packaging:
Packages for MySQL Cluster were missing the
libndbclient.so
and
libndbclient.a
files.
(Bug#42278)
Partitioning:
Executing ALTER TABLE ... REORGANIZE
PARTITION
on an
NDBCLUSTER
table having only one
partition caused mysqld to crash.
(Bug#41945)
See also Bug#40389.
Cluster API:
Failed operations on BLOB
and
TEXT
columns were not always
reported correctly to the originating SQL node. Such errors were
sometimes reported as being due to timeouts, when the actual
problem was a transporter overload due to insufficient buffer
space.
(Bug#39867, Bug#39879)
Backup IDs greater than 2^31 were not handled correctly, causing negative values to be used in backup directory names and printouts. (Bug#43042)
When using ndbmtd, NDB kernel threads could
hang while trying to start the data nodes with
LockPagesInMainMemory
set to 1.
(Bug#43021)
When using multiple management servers and starting several API nodes (possibly including one or more SQL nodes) whose connectstrings listed the management servers in different order, it was possible for 2 API nodes to be assigned the same node ID. When this happened it was possible for an API node not to get fully connected, consequently producing a number of errors whose cause was not easily recognizable. (Bug#42973)
ndb_error_reporter worked correctly only with GNU tar. (With other versions of tar, it produced empty archives.) (Bug#42753)
Triggers on NDBCLUSTER
tables
caused such tables to become locked.
(Bug#42751)
When performing more than 32 index or tuple scans on a single fragment, the scans could be left hanging. This caused unnecessary timeouts, and in addition could possibly lead to a hang of an LCP. (Bug#42559)
A data node failure that occurred between calls to
NdbIndexScanOperation::readTuples(SF_OrderBy)
and NdbTransaction::Execute()
was not
correctly handled; a subsequent call to
nextResult()
caused a null pointer to be
dereferenced, leading to a segfault in mysqld.
(Bug#42545)
Issuing SHOW GLOBAL STATUS LIKE 'NDB%'
before
mysqld had connected to the cluster caused a
segmentation fault.
(Bug#42458)
Data node failures that occurred before all data nodes had connected to the cluster were not handled correctly, leading to additional data node failures. (Bug#42422)
When a cluster backup failed with Error 1304 (Node
node_id1
: Backup request from
node_id2
failed to start), no clear
reason for the failure was provided.
As part of this fix, MySQL Cluster now retries backups in the event of sequence errors. (Bug#42354)
See also Bug#22698.
Issuing SHOW ENGINE
NDBCLUSTER STATUS
on an SQL node before the management
server had connected to the cluster caused
mysqld to crash.
(Bug#42264)
A maximum of 11 TUP
scans were allowed in
parallel.
(Bug#42084)
Trying to execute an
ALTER ONLINE TABLE
... ADD COLUMN
statement while inserting rows into the
table caused mysqld to crash.
(Bug#41905)
If the master node failed during a global checkpoint, it was possible in some circumstances for the new master to use an incorrect value for the global checkpoint index. This could occur only when the cluster used more than one node group. (Bug#41469)
API nodes disconnected too aggressively from the cluster when data nodes were being restarted. This could sometimes lead to the API node being unable to access the cluster at all during a rolling restart. (Bug#41462)
An abort path in the DBLQH
kernel block
failed to release a commit acknowledgement marker. This meant
that, during node failure handling, the local query handler
could be added multiple times to the marker record which could
lead to additional node failures due to an array overflow.
(Bug#41296)
During node failure handling (of a data node other than the
master), there was a chance that the master was waiting for a
GCP_NODEFINISHED
signal from the failed node
after having received it from all other data nodes. If this
occurred while the failed node had a transaction that was still
being committed in the current epoch, the master node could
crash in the DBTC
kernel block when
discovering that a transaction actually belonged to an epoch
which was already completed.
(Bug#41295)
If a transaction was aborted during the handling of a data node failure, this could lead to the later handling of an API node failure not being completed. (Bug#41214)
Given a MySQL Cluster containing no data (that is, whose data
nodes had all been started using --initial
, and
into which no data had yet been imported) and having an empty
backup directory, executing START BACKUP
with
a user-specified backup ID caused the data nodes to crash.
(Bug#41031)
Issuing EXIT
in the management client
sometimes caused the client to hang.
(Bug#40922)
Redo log creation was very slow on some platforms, causing MySQL Cluster to start more slowly than necessary with some combinations of hardware and operating system. This was due to all write operations being synchronized to disk while creating a redo log file. Now this synchronization occurs only after the redo log has been created. (Bug#40734)
Transaction failures took longer to handle than was necessary.
When a data node acting as transaction coordinator (TC) failed, the surviving data nodes did not inform the API node initiating the transaction of this until the failure had been processed by all protocols. However, the API node needed only to know about failure handling by the transaction protocol — that is, it needed to be informed only about the TC takeover process. Now, API nodes (including MySQL servers acting as cluster SQL nodes) are informed as soon as the TC takeover is complete, so that they can carry on operating more quickly. (Bug#40697)
It was theoretically possible for stale data to be read from
NDBCLUSTER
tables when the
transaction isolation level was set to
ReadCommitted
.
(Bug#40543)
In some cases, NDB
did not check
correctly whether tables had changed before trying to use the
query cache. This could result in a crash of the debug MySQL
server.
(Bug#40464)
Restoring a MySQL Cluster from a dump made using mysqldump failed due to a spurious error: Can't execute the given command because you have active locked tables or an active transaction. (Bug#40346)
O_DIRECT
was incorrectly disabled when making
MySQL Cluster backups.
(Bug#40205)
Events logged after setting ALL CLUSTERLOG
STATISTICS=15
in the management client did not always
include the node ID of the reporting node.
(Bug#39839)
Start phase reporting was inconsistent between the management client and the cluster log. (Bug#39667)
The MySQL Query Cache did not function correctly with
NDBCLUSTER
tables containing
TEXT
columns.
(Bug#39295)
A segfault in Logger::Log
caused
ndbd to hang indefinitely. This fix improves
on an earlier one for this issue, first made in MySQL Cluster
NDB 6.2.16 and MySQL Cluster NDB 6.3.17.
(Bug#39180)
See also Bug#38609.
Memory leaks could occur in handling of strings used for storing cluster metadata and providing output to users. (Bug#38662)
In the event that a MySQL Cluster backup failed due to file permissions issues, conflicting reports were issued in the management client. (Bug#34526)
A duplicate key or other error raised when inserting into an
NDBCLUSTER
table caused the current
transaction to abort, after which any SQL statement other than a
ROLLBACK
failed. With this fix, the
NDBCLUSTER
storage engine now
performs an implicit rollback when a transaction is aborted in
this way; it is no longer necessary to issue an explicit
ROLLBACK
statement, and the next statement that is issued automatically
begins a new transaction.
It remains necessary in such cases to retry the complete transaction, regardless of which statement caused it to be aborted.
See also Bug#47654.
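The new behavior can be seen in a session like this sketch (the table name is hypothetical):

```sql
CREATE TABLE t (id INT PRIMARY KEY) ENGINE=NDBCLUSTER;

BEGIN;
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (1);   -- duplicate key error aborts the transaction

-- Previously, every statement other than ROLLBACK now failed.
-- With this fix, NDBCLUSTER rolls back implicitly, and the next
-- statement automatically begins a new transaction:
INSERT INTO t VALUES (2);
COMMIT;
```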
Error messages for NDBCLUSTER
error
codes 1224 and 1227 were missing.
(Bug#28496)
Disk Data:
It was not possible to add an in-memory column online to a table
that used a table-level or column-level STORAGE
DISK
option. The same issue prevented ALTER
ONLINE TABLE ... REORGANIZE PARTITION
from working on
Disk Data tables.
(Bug#42549)
Disk Data:
Issuing concurrent CREATE TABLESPACE
,
ALTER TABLESPACE
, CREATE LOGFILE
GROUP
, or ALTER LOGFILE GROUP
statements on separate SQL nodes caused a resource leak that led
to data node crashes when these statements were used again
later.
(Bug#40921)
Disk Data: Disk-based variable-length columns were not always handled like their memory-based equivalents, which could potentially lead to a crash of cluster data nodes. (Bug#39645)
Disk Data: Creating a Disk Data tablespace with a very large extent size caused the data nodes to fail. The issue was observed when using extent sizes of 100 MB and larger. (Bug#39096)
Disk Data: Creation of a tablespace data file whose size was greater than 4 GB failed silently on 32-bit platforms. (Bug#37116)
See also Bug#29186.
Disk Data:
O_SYNC
was incorrectly disabled on platforms
that do not support O_DIRECT
. This issue was
noted on Solaris but could have affected other platforms not
having O_DIRECT
capability.
(Bug#34638)
Disk Data:
Trying to execute a CREATE LOGFILE
GROUP
statement using a value greater than
150M
for UNDO_BUFFER_SIZE
caused data nodes to crash.
As a result of this fix, the upper limit for
UNDO_BUFFER_SIZE
is now
600M
; attempting to set a higher value now
fails gracefully with an error.
(Bug#34102)
See also Bug#36702.
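For example (the group and file names are hypothetical):

```sql
-- Accepted: at the new upper limit
CREATE LOGFILE GROUP lg1
    ADD UNDOFILE 'undo1.log'
    INITIAL_SIZE 128M
    UNDO_BUFFER_SIZE 600M
    ENGINE NDBCLUSTER;

-- A value greater than 600M (for example, UNDO_BUFFER_SIZE 601M)
-- now fails gracefully with an error instead of crashing data nodes.
```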
Disk Data: When attempting to create a tablespace that already existed, the error message returned was Table or index with given name already exists. (Bug#32662)
Disk Data:
Using a path or filename longer than 128 characters for Disk
Data undo log files and tablespace data files caused a number of
issues, including failures of CREATE
LOGFILE GROUP
, ALTER LOGFILE
GROUP
, CREATE
TABLESPACE
, and ALTER
TABLESPACE
statements, as well as crashes of
management nodes and data nodes.
With this fix, the maximum length for path and file names used for Disk Data undo log files and tablespace data files is now the same as the maximum for the operating system. (Bug#31769, Bug#31770, Bug#31772)
Disk Data: Starting a cluster under load such that Disk Data tables used most of the undo buffer could cause data node failures.
The fix for this bug also corrected an issue in the
LGMAN
kernel block where the amount of free
space left in the undo buffer was miscalculated, causing buffer
overruns. This could cause records in the buffer to be
overwritten, leading to problems when restarting data nodes.
(Bug#28077)
Disk Data: Attempting to perform a system restart of the cluster where there existed a logfile group without any undo log files caused the data nodes to crash.
While issuing a CREATE LOGFILE
GROUP
statement without an ADD
UNDOFILE
option fails with an error in the MySQL
server, this situation could arise if an SQL node failed
during the execution of a valid CREATE
LOGFILE GROUP
statement; it is also possible to
create a logfile group without any undo log files using the
NDB API.
Cluster API:
Some error messages from ndb_mgmd contained
newline (\n
) characters. This could break the
MGM API protocol, which uses the newline as a line separator.
(Bug#43104)
Cluster API: When using an ordered index scan without putting all key columns in the read mask, this invalid use of the NDB API went undetected, which resulted in the use of uninitialized memory. (Bug#42591)
Cluster API: The MGM API reset error codes on management server handles before checking them. This meant that calling an MGM API function with a null handle caused applications to crash. (Bug#40455)
Cluster API:
It was not always possible to access parent objects directly
from NdbBlob
,
NdbOperation
, and
NdbScanOperation
objects. To alleviate this
problem, a new getNdbOperation()
method has
been added to NdbBlob
and new
getNdbTransaction() methods have been added to
NdbOperation
and
NdbScanOperation
. In addition, a const
variant of NdbOperation::getErrorLine()
is
now also available.
(Bug#40242)
Cluster API:
NdbScanOperation::getBlobHandle()
failed when
used with incorrect column names or numbers.
(Bug#40241)
Cluster API: The NDB API example programs included in MySQL Cluster source distributions failed to compile. (Bug#37491)
See also Bug#40238.
Cluster API:
mgmapi.h
contained constructs which only
worked in C++, but not in C.
(Bug#27004)
Changes in MySQL Cluster NDB 6.2.16 (5.1.28-ndb-6.2.16)
Functionality added or changed:
It is no longer a requirement for database autodiscovery that an
SQL node already be connected to the cluster at the time that a
database is created on another SQL node. It is no longer
necessary to issue CREATE
DATABASE
(or
CREATE
SCHEMA
) statements on an SQL node joining the cluster
after a database is created in order for the new SQL node to see
the database and any NDBCLUSTER
tables that it
contains.
(Bug#39612)
Bugs fixed:
Heavy DDL usage caused the mysqld processes
to hang due to a timeout error (NDB
error code 266).
(Bug#39885)
Executing EXPLAIN
SELECT
on an NDBCLUSTER
table could cause mysqld to crash.
(Bug#39872)
Starting the MySQL Server with the
--ndbcluster
option plus an
invalid command-line option (for example, using
mysqld --ndbcluster
--foobar
) caused it to hang while shutting down the
binlog thread.
(Bug#39635)
Dropping and then re-creating a database on one SQL node caused other SQL nodes to hang. (Bug#39613)
Setting a low value of MaxNoOfLocalScans
(< 100) and performing a large number of (certain) scans
could cause the Transaction Coordinator to run out of scan
fragment records, and then crash. Now when this resource is
exhausted, the cluster returns Error 291 (Out of
scanfrag records in TC (increase MaxNoOfLocalScans))
instead.
(Bug#39549)
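If Error 291 is returned, MaxNoOfLocalScans can be raised in the [ndbd default] section of config.ini (the value shown is illustrative):

```ini
[ndbd default]
# Raise the per-node limit on concurrent local scan records
MaxNoOfLocalScans=1000
```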
Creating a unique index on an
NDBCLUSTER
table caused a memory
leak in the NDB
subscription
manager (SUMA
) which could lead to mysqld
hanging, due to the fact that the resource shortage was not
reported back to the NDB
kernel
correctly.
(Bug#39518)
See also Bug#39450.
Unique identifiers in tables having no primary key were not
cached. This fix has been observed to increase the efficiency of
INSERT
operations on such tables
by as much as 50%.
(Bug#39267)
MgmtSrvr::allocNodeId()
left a mutex locked
following an Ambiguity for node if %d
error.
(Bug#39158)
An invalid path specification caused mysql-test-run.pl to fail. (Bug#39026)
During transaction coordinator takeover (directly after node
failure), the LQH finding an operation in the
LOG_COMMIT
state sent an
LQH_TRANS_CONF
signal twice, causing the TC
to fail.
(Bug#38930)
An invalid memory access caused the management server to crash on Solaris Sparc platforms. (Bug#38628)
A segfault in Logger::Log
caused
ndbd to hang indefinitely.
(Bug#38609)
ndb_mgmd failed to start on older Linux distributions (2.4 kernels) that did not support e-polling. (Bug#38592)
When restarting a data node, an excessively long shutdown message could cause the node process to crash. (Bug#38580)
ndb_mgmd sometimes performed unnecessary network I/O with the client. This in combination with other factors led to long-running threads that were attempting to write to clients that no longer existed. (Bug#38563)
ndb_restore failed with a floating point exception due to a division by zero error when trying to restore certain data files. (Bug#38520)
A failed connection to the management server could cause a resource leak in ndb_mgmd. (Bug#38424)
Failure to parse configuration parameters could cause a memory leak in the NDB log parser. (Bug#38380)
After a forced shutdown and initial restart of the cluster, it
was possible for SQL nodes to retain .frm
files corresponding to NDBCLUSTER
tables that had been dropped, and thus to be unaware that these
tables no longer existed. In such cases, attempting to re-create
the tables using CREATE TABLE IF NOT EXISTS
could fail with a spurious Table ... doesn't
exist error.
(Bug#37921)
Renaming an NDBCLUSTER
table on one
SQL node caused a trigger on this table to be deleted on
another SQL node.
(Bug#36658)
Attempting to add a UNIQUE INDEX
twice to an
NDBCLUSTER
table, then deleting
rows from the table could cause the MySQL Server to crash.
(Bug#35599)
ndb_restore failed when a single table was specified. (Bug#33801)
GCP_COMMIT
did not wait for transaction
takeover during node failure. This could cause
GCP_SAVE_REQ
to be executed too early. This
could also cause (very rarely) replication to skip rows.
(Bug#30780)
Cluster API:
Passing a value greater than 65535 to
NdbInterpretedCode::add_val()
and
NdbInterpretedCode::sub_val()
caused these
methods to have no effect.
(Bug#39536)
Cluster API:
The NdbScanOperation::readTuples()
method
could be called multiple times without error.
(Bug#38717)
Cluster API:
Certain Multi-Range Read scans involving IS
NULL
and IS NOT NULL
comparisons
failed with an error in the NDB
local query handler.
(Bug#38204)
Cluster API:
Problems with the public headers prevented
NDB
applications from being built
with warnings turned on.
(Bug#38177)
Cluster API:
Creating an NdbScanFilter
object using an
NdbScanOperation
object that had not yet had
its readTuples()
method called resulted in a
crash when later attempting to use the
NdbScanFilter
.
(Bug#37986)
Cluster API:
Executing an NdbRecord
interpreted delete
created with an ANYVALUE
option caused the
transaction to abort.
(Bug#37672)
Cluster API:
Accessing the debug version of libndbclient
via dlopen()
resulted in a segmentation
fault.
(Bug#35927)
Changes in MySQL Cluster NDB 6.2.14 (5.1.23-ndb-6.2.14)
Functionality added or changed:
Added the MaxBufferedEpochs
data node
configuration parameter, which controls the maximum number of
unprocessed epochs by which a subscribing node can lag.
Subscribers which exceed this number are disconnected and forced
to reconnect.
See Section 17.3.2.6, “Defining MySQL Cluster Data Nodes”, for more information.
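For example, the parameter might be set in the [ndbd default] section of config.ini (the value shown is illustrative):

```ini
[ndbd default]
# Disconnect subscribers that lag by more than 100 unprocessed epochs
MaxBufferedEpochs=100
```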
Bugs fixed:
Incompatible Change:
The UPDATE
statement allowed
NULL
to be assigned to NOT
NULL
columns (the implicit default value for the
column data type was assigned). This was changed so that an
error occurs.
This change was reverted, because the original report was
determined not to be a bug: Assigning NULL
to
a NOT NULL
column in an
UPDATE
statement should produce
an error only in strict SQL mode and set the column to the
implicit default with a warning otherwise, which was the
original behavior. See Section 10.1.4, “Data Type Default Values”, and
Bug#39265.
(Bug#33699)
Cluster API: Closing a scan before it was executed caused the application to segfault. (Bug#36375)
Cluster API:
Using NDB API applications from older MySQL Cluster versions
with libndbclient
from newer ones caused the
cluster to fail.
(Bug#36124)
Cluster API: Scans having no bounds set were handled incorrectly. (Bug#35876)
Changes in MySQL Cluster NDB 6.2.13 (5.1.23-ndb-6.2.13)
Bugs fixed:
A node failure during an initial node restart followed by another node start could cause the master data node to fail, because it incorrectly gave the node permission to start even if the invalidated node's LCP was still running. (Bug#34702)
Changes in MySQL Cluster NDB 6.2.12 (5.1.23-ndb-6.2.12)
Bugs fixed:
Upgrades of a cluster while using a
DataMemory
setting in excess of 16 GB caused
data nodes to fail.
(Bug#34378)
Performing many SQL statements on
NDB
tables while in
autocommit
mode caused a memory
leak in mysqld.
(Bug#34275)
In certain rare circumstances, a race condition could occur between an aborted insert and a delete, leading to a data node crash. (Bug#34260)
Multi-table updates using ordered indexes during handling of node failures could cause other data nodes to fail. (Bug#34216)
When configured with NDB
support,
MySQL failed to compile using gcc 4.3 on
64-bit FreeBSD systems.
(Bug#34169)
The failure of a DDL statement could sometimes lead to node failures when attempting to execute subsequent DDL statements. (Bug#34160)
Extremely long SELECT
statements
(where the text of the statement was in excess of 50000
characters) against NDB
tables
returned empty results.
(Bug#34107)
Statements executing multiple inserts performed poorly on
NDB
tables having
AUTO_INCREMENT
columns.
(Bug#33534)
The ndb_waiter utility polled ndb_mgmd excessively when obtaining the status of cluster data nodes. (Bug#32025)
See also Bug#32023.
Transaction atomicity was sometimes not preserved between reads and inserts under high loads. (Bug#31477)
Having tables with a great many columns could cause Cluster backups to fail. (Bug#30172)
Cluster Replication: Disk Data:
Statements violating unique keys on Disk Data tables (such as
attempting to insert NULL
into a NOT
NULL
column) could cause data nodes to fail. When the
statement was executed from the binlog, this could also result
in failure of the slave cluster.
(Bug#34118)
Disk Data: Updating in-memory columns of one or more rows of Disk Data table, followed by deletion of these rows and re-insertion of them, caused data node failures. (Bug#33619)
Changes in MySQL Cluster NDB 6.2.11 (5.1.23-ndb-6.2.11)
Functionality added or changed:
Cluster API: Important Change:
Because NDB_LE_MemoryUsage.page_size_kb
shows
memory page sizes in bytes rather than kilobytes, it has been
renamed to page_size_bytes
. The name
page_size_kb
is now deprecated and thus
subject to removal in a future release, although it currently
remains supported for reasons of backward compatibility. See
The Ndb_logevent_type
Type, for more information
about NDB_LE_MemoryUsage
.
(Bug#30271)
Bugs fixed:
High numbers of insert operations, delete operations, or both
could cause NDB
error 899
(Rowid already allocated) to occur
unnecessarily.
(Bug#34033)
A periodic failure to flush the send buffer by the
NDB
TCP transporter could cause an
unnecessary delay of 10 ms between operations.
(Bug#34005)
A race condition could occur (very rarely) when the release of a GCI was followed by a data node failure. (Bug#33793)
Some tuple scans caused the wrong memory page to be accessed, leading to invalid results. This issue could affect both in-memory and Disk Data tables. (Bug#33739)
The server failed to reject properly the creation of an
NDB
table having an unindexed
AUTO_INCREMENT
column.
(Bug#30417)
Issuing an
INSERT ...
ON DUPLICATE KEY UPDATE
concurrently with or following
a TRUNCATE TABLE
statement on an
NDB
table failed with
NDB
error 4350
Transaction already aborted.
(Bug#29851)
The Cluster backup process could not detect when there was no more disk space and instead continued to run until killed manually. Now the backup fails with an appropriate error when disk space is exhausted. (Bug#28647)
It was possible in config.ini
to define
cluster nodes having node IDs greater than the maximum allowed
value.
(Bug#28298)
Cluster API:
Transactions containing inserts or reads would hang during
NdbTransaction::execute()
calls made from NDB
API applications built against a MySQL Cluster version that did
not support micro-GCPs accessing a later version that supported
micro-GCPs. This issue was observed while upgrading from MySQL
Cluster NDB 6.1.23 to MySQL Cluster NDB 6.2.10 when the API
application built against the earlier version attempted to
access a data node already running the later version, even after
disabling micro-GCPs by setting
TimeBetweenEpochs
equal to 0.
(Bug#33895)
Cluster API:
When reading a BIT(64)
value using
NdbOperation:getValue()
, 12 bytes were
written to the buffer rather than the expected 8 bytes.
(Bug#33750)
Changes in MySQL Cluster NDB 6.2.10 (5.1.23-ndb-6.2.10)
Bugs fixed:
Partitioning: When partition pruning on an NDB table resulted in an ordered index scan spanning only one partition, any descending flag for the scan was wrongly discarded, causing ORDER BY DESC to be treated as ORDER BY ASC, MAX() to be handled incorrectly, and similar problems. (Bug#33061)
When all data and SQL nodes in the cluster were shut down abnormally (that is, other than by using STOP in the cluster management client), ndb_mgm used excessive amounts of CPU. (Bug#33237)
When using micro-GCPs, if a node failed while preparing for a global checkpoint, the master node would use the wrong GCI. (Bug#32922)
Under some conditions, performing an ALTER TABLE on an NDBCLUSTER table failed with a Table is full error, even when only 25% of DataMemory was in use and the result should have been a table using less memory (for example, changing a VARCHAR(100) column to VARCHAR(80)). (Bug#32670)
Changes in MySQL Cluster NDB 6.2.9 (5.1.22-ndb-6.2.9)
Functionality added or changed:
Added the ndb_mgm client command DUMP 8011, which dumps all subscribers to the cluster log. See DUMP 8011, for more information.
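As a sketch, the command can also be issued noninteractively using the management client's -e option (addressing all data nodes with ALL; the resulting output appears in the cluster log, not on the terminal):

```shell
# Dump all subscribers to the cluster log
ndb_mgm -e "ALL DUMP 8011"
```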
Bugs fixed:
A local checkpoint could sometimes be started before the previous LCP was restorable from a global checkpoint. (Bug#32519)
High numbers of API nodes on a slow or congested network could cause connection negotiation to time out prematurely, leading to the following issues:
Excessive retries
Excessive CPU usage
Partially connected API nodes
The failure of a master node could lead to subsequent failures in local checkpointing. (Bug#32160)
Adding a new TINYTEXT column to an NDB table which used COLUMN_FORMAT = DYNAMIC caused all cluster mysqld processes to crash when binary logging was enabled. (Bug#30213)
After adding a new column of one of the TEXT or BLOB types to an NDB table which used COLUMN_FORMAT = DYNAMIC, it was no longer possible to access or drop the table using SQL. (Bug#30205)
A restart of the cluster failed when more than 1 REDO phase was in use. (Bug#22696)
Changes in MySQL Cluster NDB 6.2.8 (5.1.22-ndb-6.2.8)
Functionality added or changed:
The output of the ndb_mgm client SHOW and STATUS commands now indicates when the cluster is in single user mode. (Bug#27999)
Bugs fixed:
In a cluster running in diskless mode and with arbitration disabled, the failure of a data node during an insert operation caused other data nodes to fail. (Bug#31980)
An insert or update with combined range and equality constraints failed when run against an NDB table with the error Got unknown error from NDB. An example of such a statement would be UPDATE t1 SET b = 5 WHERE a IN (7,8) OR a >= 10;. (Bug#31874)
An error with an if statement in sql/ha_ndbcluster.cc could potentially lead to an infinite loop in case of failure when working with AUTO_INCREMENT columns in NDB tables. (Bug#31810)
The NDB storage engine code was not safe for strict-alias optimization in gcc 4.2.1. (Bug#31761)
Following an upgrade, ndb_mgmd would fail with an ArbitrationError. (Bug#31690)
The NDB management client command node_id REPORT MEMORY provided no output when node_id was the node ID of a management or API node. Now, when this occurs, the management client responds with Node node_id: is not a data node. (Bug#29485)
Performing DELETE operations after a data node had been shut down could lead to inconsistent data following a restart of the node. (Bug#26450)
UPDATE IGNORE could sometimes fail on NDB tables due to the use of uninitialized data when checking for duplicate keys to be ignored. (Bug#25817)
Changes in MySQL Cluster NDB 6.2.7 (5.1.22-ndb-6.2.7)
Bugs fixed:
It was possible in some cases for a node group to be “lost” due to missed local checkpoints following a system restart. (Bug#31525)
NDB tables having names containing nonalphanumeric characters (such as “$”) were not discovered correctly. (Bug#31470)
A node failure during a local checkpoint could lead to a subsequent failure of the cluster during a system restart. (Bug#31257)
A cluster restart could sometimes fail due to an issue with table IDs. (Bug#30975)
Transaction timeouts were not handled well in some circumstances, leading to an excessive number of transactions being aborted unnecessarily. (Bug#30379)
In some cases, the cluster management server logged entries multiple times following a restart of mgmd. (Bug#29565)
ndb_mgm --help did not display any information about the -a option. (Bug#29509)
The cluster log was formatted inconsistently and contained extraneous newline characters. (Bug#25064)
Changes in MySQL Cluster NDB 6.2.6 (5.1.22-ndb-6.2.6)
Functionality added or changed:
Mapping of NDB error codes to MySQL storage engine error codes has been improved. (Bug#28423)
Bugs fixed:
Partitioning: EXPLAIN PARTITIONS reported partition usage by queries on NDB tables according to the standard MySQL hash function rather than the hash function used in the NDB storage engine. (Bug#29550)
When an NDB event was left behind but the corresponding table was later recreated and received a new table ID, the event could not be dropped. (Bug#30877)
Attempting to restore a backup made on a cluster host using one endian to a machine using the other endian could cause the cluster to fail. (Bug#29674)
The description of the --print option provided in the output from ndb_restore --help was incorrect. (Bug#27683)
Restoring a backup made on a cluster host using one endian to a machine using the other endian failed for BLOB and DATETIME columns. (Bug#27543, Bug#30024)
An insufficiently descriptive and potentially misleading Error 4006 (Connect failure - out of connection objects...) was produced when either of the following two conditions occurred:
There were no more transaction records in the transaction coordinator
An NDB object in the NDB API was initialized with insufficient parallelism
Separate error messages are now generated for each of these two cases. (Bug#11313)
Changes in MySQL Cluster NDB 6.2.5 (5.1.22-ndb-6.2.5)
Functionality added or changed:
The following improvements have been made in the ndb_size.pl utility:
The script can now be used with multiple databases; lists of databases and tables can also be excluded from analysis.
Schema name information has been added to index table calculations.
The database name is now an optional parameter, the exclusion of which causes all databases to be examined.
If selecting from INFORMATION_SCHEMA fails, the script now attempts to fall back to SHOW TABLES.
A --real_table_name option has been added; this designates a table to handle unique index size calculations.
The report title has been amended to cover cases where more than one database is being analyzed.
Support for a --socket option was also added.
For more information, see Section 17.4.21, “ndb_size.pl — NDBCLUSTER Size Requirement Estimator”. (Bug#28683, Bug#28253)
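As an illustrative sketch of the options named above (database, table, and socket path are placeholders; other option spellings are not shown here):

```shell
# Analyze one database over a local socket; the database argument is now
# optional, and omitting it causes all databases to be examined
perl ndb_size.pl mydb --socket=/tmp/mysql.sock --real_table_name=t1
```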
Online ADD COLUMN, ADD INDEX, and DROP INDEX operations can now be performed explicitly for NDB tables, as well as online renaming of tables and columns for NDB and MyISAM tables — that is, without copying or locking of the affected tables — using ALTER ONLINE TABLE. Indexes can also be created and dropped online using CREATE INDEX and DROP INDEX, respectively, using the ONLINE keyword. You can force operations that would otherwise be performed online to be done offline using the OFFLINE keyword.
See Section 12.1.7, “ALTER TABLE Syntax”, Section 12.1.13, “CREATE INDEX Syntax”, and Section 12.1.24, “DROP INDEX Syntax”, for more information.
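A minimal sketch of the syntax just described (table, column, and index names are illustrative; online ADD COLUMN is assumed here to use a dynamic column format):

```sql
-- Add a column online, without copying or locking the table
ALTER ONLINE TABLE t1 ADD COLUMN c2 INT COLUMN_FORMAT DYNAMIC;

-- Create and drop an index online
CREATE ONLINE INDEX idx_c2 ON t1 (c2);
DROP ONLINE INDEX idx_c2 ON t1;
```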
It is now possible to control whether fixed-width or variable-width storage is used for a given column of an NDB table by means of the COLUMN_FORMAT specifier as part of the column's definition in a CREATE TABLE or ALTER TABLE statement.
It is also possible to control whether a given column of an NDB table is stored in memory or on disk, using the STORAGE specifier as part of the column's definition in a CREATE TABLE or ALTER TABLE statement.
For permitted values and other information about COLUMN_FORMAT and STORAGE, see Section 12.1.17, “CREATE TABLE Syntax”.
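A sketch of how the two specifiers might appear in a column definition (table and column names are illustrative):

```sql
CREATE TABLE t2 (
    id INT NOT NULL PRIMARY KEY,
    a VARCHAR(80) COLUMN_FORMAT DYNAMIC,  -- variable-width storage
    b CHAR(10) COLUMN_FORMAT FIXED,       -- fixed-width storage
    c INT STORAGE MEMORY                  -- keep this column in memory
) ENGINE=NDBCLUSTER;
```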
A new cluster management server startup option --bind-address makes it possible to restrict management client connections to ndb_mgmd to a single host and port. For more information, see Section 17.4.4, “ndb_mgmd — The MySQL Cluster Management Server Daemon”.
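As an illustration (the address, port, and configuration file path are placeholders):

```shell
# Accept management client connections only on this interface and port
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --bind-address=192.168.0.10:1186
```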
Bugs fixed:
When handling BLOB columns, the addition of read locks to the lock queue was not handled correctly. (Bug#30764)
Discovery of NDB tables did not work correctly with INFORMATION_SCHEMA. (Bug#30667)
A file system close operation could fail during a node or system restart. (Bug#30646)
Using the --ndb-cluster-connection-pool option for mysqld caused DDL statements to be executed twice. (Bug#30598)
When creating an NDB table with a column that has COLUMN_FORMAT = DYNAMIC, but the table itself uses ROW_FORMAT=FIXED, the table is considered dynamic, but any columns for which the row format is unspecified default to FIXED. Now in such cases the server issues the warning Row format FIXED incompatible with dynamic attribute column_name. (Bug#30276)
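A sketch of a definition with the conflict described above (table and column names are illustrative):

```sql
-- A dynamic column inside a table declared ROW_FORMAT=FIXED;
-- the server now emits the warning described above for column a
CREATE TABLE t3 (
    id INT NOT NULL PRIMARY KEY,
    a VARCHAR(100) COLUMN_FORMAT DYNAMIC
) ROW_FORMAT=FIXED ENGINE=NDBCLUSTER;
```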
ndb_size.pl failed on tables with FLOAT columns whose definitions included commas (for example, FLOAT(6,2)). (Bug#29228)
Reads on BLOB columns were not locked when they needed to be to guarantee consistency. (Bug#29102)
See also Bug#31482.
A query using joins between several large tables and requiring unique index lookups failed to complete, eventually returning Unknown Error after a very long period of time. This occurred due to inadequate handling of instances where the Transaction Coordinator ran out of TransactionBufferMemory, when the cluster should have returned NDB error code 4012 (Request ndbd time-out). (Bug#28804)
An attempt to perform a SELECT ... FROM INFORMATION_SCHEMA.TABLES whose result included information about NDB tables for which the user had no privileges crashed the MySQL Server on which the query was performed. (Bug#26793)
Cluster API: A call to CHECK_TIMEDOUT_RET() in mgmapi.cpp should have been a call to DBUG_CHECK_TIMEDOUT_RET(). (Bug#30681)
Changes in MySQL Cluster NDB 6.2.4 (5.1.19-ndb-6.2.4)
Bugs fixed:
When restarting a data node, queries could hang during that node's start phase 5, and continue only after the node had entered phase 6. (Bug#29364)
Replica redo logs were inconsistently handled during a system restart. (Bug#29354)
Disk Data: Performing Disk Data schema operations during a node restart could cause forced shutdowns of other data nodes. (Bug#29501)
Disk Data: Disk data meta-information that existed in ndbd might not be visible to mysqld. (Bug#28720)
Disk Data: The number of free extents was incorrectly reported for some tablespaces. (Bug#28642)
Changes in MySQL Cluster NDB 6.2.3 (5.1.19-ndb-6.2.3)
Functionality added or changed:
Important Change: The TimeBetweenWatchdogCheckInitial configuration parameter was added to allow setting of a separate watchdog timeout for memory allocation during startup of the data nodes. See Section 17.3.2.6, “Defining MySQL Cluster Data Nodes”, for more information. (Bug#28899)
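As a sketch, such a data node parameter would typically be set in the [ndbd default] section of config.ini (the value shown is an illustrative number of milliseconds, not a recommendation):

```ini
[ndbd default]
# Separate, longer watchdog timeout used only while data nodes
# allocate memory during startup
TimeBetweenWatchdogCheckInitial=60000
```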
Cluster API: Important Change: A new NdbRecord object has been added to the NDB API. This object provides mapping to a record stored in NDB. See The NdbRecord Interface, for more information.
auto_increment_increment and auto_increment_offset are now supported for NDB tables. (Bug#26342)
A REPORT BackupStatus command has been added in the cluster management client. This command allows you to obtain a backup status report at any time during a backup. For more about this command, see Section 17.5.2, “Commands in the MySQL Cluster Management Client”.
Reporting functionality has been significantly enhanced in this release:
A new configuration parameter BackupReportFrequency now makes it possible to cause the management client to provide status reports at regular intervals as well as for such reports to be written to the cluster log (depending on cluster event logging levels). See Section 17.3.2.6, “Defining MySQL Cluster Data Nodes”, for more information about this parameter.
A new REPORT command has been added in the cluster management client. REPORT BackupStatus allows you to obtain a backup status report at any time during a backup. REPORT MemoryUsage reports the current data memory and index memory used by each data node. For more about the REPORT command, see Section 17.5.2, “Commands in the MySQL Cluster Management Client”.
ndb_restore now provides running reports of its progress when restoring a backup. In addition, a complete status report on the backup is written to the cluster log.
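As an illustration (the node ID is a placeholder), the REPORT commands described above can be issued from the management client:

```shell
# Current data and index memory usage for all data nodes
ndb_mgm -e "ALL REPORT MemoryUsage"

# Status of an in-progress backup, requested from data node 2
ndb_mgm -e "2 REPORT BackupStatus"
```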
A new configuration parameter ODirect causes NDB to attempt using O_DIRECT writes for LCP, backups, and redo logs, often lowering CPU usage.
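A sketch of enabling this parameter for all data nodes in config.ini (assuming it belongs in the [ndbd default] section):

```ini
[ndbd default]
# Attempt O_DIRECT writes for local checkpoints, backups, and redo logs
ODirect=1
```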
A new memory allocator has been implemented for the NDB kernel, which allocates memory to tables one 32 KB page at a time rather than in variable-sized chunks as previously. This removes much of the memory overhead that was associated with the old memory allocator.
Bugs fixed:
When a node failed to respond to a COPY_GCI signal as part of a global checkpoint, the master node was killed instead of the node that actually failed. (Bug#29331)
Memory corruption could occur due to a problem in the DBTUP kernel block. (Bug#29229)
A query having a large IN(...) or NOT IN(...) list in the WHERE condition on an NDB table could cause mysqld to crash. (Bug#29185)
In the event that two data nodes in the same node group and participating in a GCP crashed before they had written their respective P0.sysfile files, QMGR could refuse to start, issuing an invalid Insufficient nodes for restart error instead. (Bug#29167)
An invalid comparison made during REDO validation could lead to an Error while reading REDO log condition. (Bug#29118)
Attempting to restore a NULL row to a VARBINARY column caused ndb_restore to fail. (Bug#29103)
ndb_error_reporter now preserves timestamps on files. (Bug#29074)
The wrong data pages were sometimes invalidated following a global checkpoint. (Bug#29067)
If at least 2 files were involved in REDO invalidation, then file 0 of page 0 was not updated and so pointed to an invalid part of the redo log. (Bug#29057)
It is now possible to set the maximum size of the allocation unit for table memory using the MaxAllocate configuration parameter. (Bug#29044)
When shutting down mysqld, the NDB binlog process was not shut down before log cleanup began. (Bug#28949)
A corrupt schema file could cause a File already open error. (Bug#28770)
Having large amounts of memory locked caused swapping to disk. (Bug#28751)
Setting InitialNoOpenFiles equal to MaxNoOfOpenFiles caused an error. This was because the actual value of MaxNoOfOpenFiles as used by the cluster was offset by 1 from the value set in config.ini. (Bug#28749)
LCP files were not removed following an initial system restart. (Bug#28726)
UPDATE IGNORE statements involving the primary keys of multiple tables could result in data corruption. (Bug#28719)
A race condition could result when nonmaster nodes (in addition to the master node) tried to update active status due to a local checkpoint (that is, between NODE_FAILREP and COPY_GCIREQ events). Now only the master updates the active status. (Bug#28717)
A fast global checkpoint under high load with high usage of the redo buffer caused data nodes to fail. (Bug#28653)
The management client's response to START BACKUP WAIT COMPLETED did not include the backup ID. (Bug#27640)
Disk Data: When dropping a page, the stack's bottom entry could sometimes be left “cold” rather than “hot”, violating the rules for stack pruning. (Bug#29176)
Disk Data: When loading data into a cluster following a version upgrade, the data nodes could forcibly shut down due to page and buffer management failures (that is, ndbrequire failures in PGMAN). (Bug#28525)
Disk Data: Repeated INSERT and DELETE operations on a Disk Data table having one or more large VARCHAR columns could cause data nodes to fail. (Bug#20612)
Cluster API: The timeout set using the MGM API ndb_mgm_set_timeout() function was incorrectly interpreted as seconds rather than as milliseconds. (Bug#29063)
Cluster API: An invalid error code could be set on transaction objects by BLOB handling code. (Bug#28724)
Changes in MySQL Cluster NDB 6.2.2 (5.1.18-ndb-6.2.2)
Functionality added or changed:
New cluster management client DUMP commands were added to aid in tracking transactions, scan operations, and locks. See DUMP 2350, DUMP 2352, and DUMP 2550, for more information.
Added the mysqld option --ndb-cluster-connection-pool that allows a single MySQL server to use multiple connections to the cluster. This allows for scaling out using multiple MySQL clients per SQL node instead of or in addition to using multiple SQL nodes with the cluster.
For more information about this option, see Section 17.3.4, “MySQL Server Options and Variables for MySQL Cluster”.
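A sketch of enabling the pool in the [mysqld] section of my.cnf (the connect string and pool size are illustrative; each pooled connection also needs its own API node slot in the cluster configuration):

```ini
[mysqld]
ndbcluster
ndb-connectstring=mgmhost:1186
# Open four NDB API connections from this single mysqld
ndb-cluster-connection-pool=4
```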
Changes in MySQL Cluster NDB 6.2.1 (5.1.18-ndb-6.2.1)
Bugs fixed:
Multiple operations involving deletes followed by reads were not handled correctly.
This issue could also affect MySQL Cluster Replication.
Cluster API: Using NdbBlob::writeData() to write data in the middle of an existing blob value (that is, updating the value) could overwrite some data past the end of the data to be changed. (Bug#27018)
Changes in MySQL Cluster NDB 6.2.0 (5.1.16-ndb-6.2.0)
Functionality added or changed:
An ndb_wait_connected system variable has been added for mysqld. It causes mysqld to wait a specified amount of time to be connected to the cluster before accepting client connections. For more information, see Section 17.3.4.3, “MySQL Cluster System Variables”.
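A sketch of setting this variable in my.cnf (the 30-second value is illustrative):

```ini
[mysqld]
# Wait up to 30 seconds for the connection to the cluster
# before accepting client connections
ndb-wait-connected=30
```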
Cluster API: The Ndb::startTransaction() method now provides an alternative interface for starting a transaction. See Ndb::startTransaction(), for more information.
Cluster API: Methods were added to the Ndb_cluster_connection class to facilitate iterating over existing NDB objects. See ndb_cluster_connection::get_next_ndb_object(), for more information.