Multiple SQL nodes.  The following are issues relating to the use of multiple MySQL servers as MySQL Cluster SQL nodes, and are specific to the NDBCLUSTER storage engine:
No distributed table locks.  A LOCK TABLES statement works only on the SQL node on which the lock is issued; no other SQL node in the cluster “sees” this lock. This is also true for a lock issued by any statement that locks tables as part of its operations. (See the next item for an example.)
ALTER TABLE operations.  ALTER TABLE does not take a cluster-wide table lock when running multiple MySQL servers (SQL nodes). (As discussed in the previous item, MySQL Cluster does not support distributed table locks.)
Multiple management nodes.  When using multiple management servers:
You must give nodes explicit IDs in connectstrings, because automatic allocation of node IDs does not work across multiple management servers.
You must take extreme care to ensure that all management servers use the same configuration; the cluster performs no special checks for this.
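A minimal sketch of explicit node IDs with two management servers follows; all host names, port numbers, and node ID values here are examples, not requirements:

```ini
# config.ini fragment (identical on both management servers)
[ndb_mgmd]
NodeId=1
HostName=mgm1.example.com

[ndb_mgmd]
NodeId=2
HostName=mgm2.example.com

[ndbd]
NodeId=3
HostName=data1.example.com

[mysqld]
NodeId=4
```

A connecting node would then name its own ID explicitly in the connectstring, for example: `nodeid=4,mgm1.example.com:1186,mgm2.example.com:1186`.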
Multiple network addresses.  Multiple network addresses per data node are not supported. Using them is liable to cause problems: in the event of a data node failure, an SQL node waits for confirmation that the data node went down, but never receives it because another route to that data node remains open. This can effectively render the cluster inoperable.
          It is possible to use multiple network hardware
          interfaces (such as Ethernet cards) for a
          single data node, but these must be bound to the same address.
          This also means that it is not possible to use more than one
          [tcp] section per connection in the
          config.ini file. See
          Section 17.3.2.8, “MySQL Cluster TCP/IP Connections”, for more
          information.
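Accordingly, each pair of nodes gets at most one [tcp] section in config.ini. A sketch, with assumed node IDs and addresses:

```ini
# config.ini fragment: one [tcp] section per node-to-node connection.
# Node IDs and addresses are examples only.
[tcp]
NodeId1=3
NodeId2=4
HostName1=198.51.100.10
HostName2=198.51.100.20
```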
        