InnoDB Cluster – One command

Right off the bat, let me sum up the "one command" - it's this little guy here:

mysqlsh --uri root:Welcome1@localhost:3306 --file="C:\Users\dbabrabo-666\Documents\ACACIOLR-DBA\mysql_full_innodb_cluster_w_replica_setup_mb_v4.js" --log-level=8 --log-file="C:\temp\cluster_setup.log"

The catch: there is a pile of JavaScript to consider when writing the script so that everything runs from a single command line, sorry 😆

LET'S GO:

MySQL InnoDB Cluster Configuration

General Settings

| Parameter | Value |
| --- | --- |
| Cluster Name | my-cluster-db-v5 |
| Root Password | Welcome1 |
| Sandbox Directory | C:\Users\dbabrabo-666\MySQL\mysql-sandboxes |
| Replication User | repl |
| Replication Password | Welcome1 |
| Cluster Mode | Single-Primary |

Primary Instances

| Port | Type | Weight | Priority | Status |
| --- | --- | --- | --- | --- |
| 3307 | Primary | 100 | High | Master |
| 3310 | Secondary | 60 | Medium-High | Slave |
| 3320 | Secondary | 40 | Medium | Slave |
| 3330 | Secondary | 20 | Low | Slave |

Read Replicas (1:1 Mapping)

| Replica Port | Source Port | Label | Role |
| --- | --- | --- | --- |
| 3340 | 3307 | Replica_Primary_3307 | Read replica of the master |
| 3350 | 3310 | Replica_Secondary_3310 | Read replica of the secondary |
| 3360 | 3320 | Replica_Tertiary_3320 | Read replica of the tertiary |
| 3370 | 3330 | Replica_Quaternary_3330 | Read replica of the quaternary |
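One caveat: the 1:1 mapping above only holds if each read replica is explicitly pinned to its source; by default a read replica follows the cluster primary. A minimal sketch, assuming a MySQL Shell release whose `Cluster.addReplicaInstance()` supports the `replicationSources` option:

```javascript
// Replica-to-source mapping, as in the table above.
const REPLICA_MAPPING = [
  { port: 3340, source: 3307, label: 'Replica_Primary_3307' },
  { port: 3350, source: 3310, label: 'Replica_Secondary_3310' },
  { port: 3360, source: 3320, label: 'Replica_Tertiary_3320' },
  { port: 3370, source: 3330, label: 'Replica_Quaternary_3330' }
];

// Build the options object Cluster.addReplicaInstance() would receive,
// pinning the replica to its mapped source instead of the primary.
function replicaOptions(replica) {
  return {
    label: replica.label,
    recoveryMethod: 'clone',
    replicationSources: [`127.0.0.1:${replica.source}`]
  };
}

// Inside mysqlsh (cluster is the AdminAPI object):
// REPLICA_MAPPING.forEach(r =>
//   cluster.addReplicaInstance(`root@localhost:${r.port}`, replicaOptions(r)));
```

Without `replicationSources`, all four replicas would stream from whichever member is currently the primary.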

Configuration Timeouts

| Operation | Time (seconds) | Description |
| --- | --- | --- |
| Cluster creation | 30 | Initial stabilization time |
| Instance addition | 15 | Timeout for adding an instance |
| Stabilization | 10 | Wait between operations |
| Recovery | 5 | Time for recovery operations |

Ports Used

| Range | Ports | Count | Use |
| --- | --- | --- | --- |
| 3307-3330 | 3307, 3310, 3320, 3330 | 4 | Primary instances |
| 3340-3370 | 3340, 3350, 3360, 3370 | 4 | Read replicas |
| Total | 8 ports | 8 | All instances |

Safety Settings

| Parameter | Value | Description |
| --- | --- | --- |
| Recovery method | clone | Synchronization method |
| Force on creation | true | Forces creation even with conflicts |
| Force on dissolve | true | Forces removal on error |
| Automatic restart | false | Does not restart automatically |

Monitoring Features

| Resource | Description |
| --- | --- |
| Log File | C:\temp\cluster_setup.log |
| Log Level | 8 (full debug) |
| Extended Status | Detailed cluster information |
| Health Check | Automatic health verification |
| Connectivity Test | Per-port connectivity test |

Execution process:

  1. Full cleanup – removes existing clusters and instances
  2. Primary creation – deploys 4 MySQL sandbox instances
  3. Configuration – prepares the instances for clustering
  4. Cluster creation – establishes the InnoDB Cluster in single-primary mode
  5. Secondary addition – joins the 3 remaining instances to the cluster
  6. Weight configuration – sets priorities (3307=100, 3310=60, 3320=40, 3330=20)
  7. Replica configuration – creates read replicas with asynchronous replication
  8. Final verification – tests connectivity and prints the full status
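Steps 4 to 6 above can be condensed into a minimal mysqlsh JS sketch (the `shell`, `dba`, and `cluster` objects are MySQL Shell globals, so the AdminAPI calls are shown commented out; only the plain JS part runs outside the Shell):

```javascript
// Primary ports and their failover weights, as listed above.
const primaryPorts = [3307, 3310, 3320, 3330];
const weights = { 3307: 100, 3310: 60, 3320: 40, 3330: 20 };

// The first port seeds the cluster; the rest join as secondaries.
const [seedPort, ...secondaryPorts] = primaryPorts;

// Inside mysqlsh:
// shell.connect(`root@localhost:${seedPort}`);
// const cluster = dba.createCluster('my-cluster-db-v5');
// secondaryPorts.forEach(p =>
//   cluster.addInstance(`root@localhost:${p}`, { recoveryMethod: 'clone' }));
// Object.entries(weights).forEach(([p, w]) =>
//   cluster.setInstanceOption(`127.0.0.1:${p}`, 'memberWeight', w));
```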

Safety features:

  • Robust error handling with automatic cleanup
  • Instance health checks
  • Connection retry system
  • Emergency cleanup on critical failure
  • Detailed logs with visual status codes

The script runs through MySQL Shell and builds a complete MySQL high-availability stack on Windows using sandbox instances; do not run any of this in production.
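As a quick sanity check after a run, the object returned by `cluster.status()` can be summarized per instance state, the same way the script's final phase does. A minimal sketch (the sample status shape below is assumed, matching what the Shell returns for `defaultReplicaSet.topology`):

```javascript
// Count cluster instances per state (ONLINE, RECOVERING, ...) from a
// cluster.status()-like object.
function summarize(status) {
  const counts = {};
  for (const inst of Object.values(status.defaultReplicaSet.topology)) {
    counts[inst.status] = (counts[inst.status] || 0) + 1;
  }
  return counts;
}

// Inside mysqlsh:
// print(JSON.stringify(summarize(cluster.status()), null, 2));
```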

Windows version of the script:

// ============================================================================================
// MYSQL INNODB CLUSTER - PRODUCTION READY SETUP
// 4-NODE CLUSTER WITH 1:1 READ REPLICAS
// COMPLETE CLEANUP + VERIFICATION + ERROR HANDLING
// mysqlsh --uri root:Welcome1@localhost:3306 --file="C:\Users\dbabrabo-666\Documents\ACACIOLR-DBA\mysql_full_innodb_cluster_w_replica_setup_mb_v5.js" --log-level=8 --log-file="C:\temp\cluster_setup.log"
// \source C:\Users\dbabrabo-666\Documents\ACACIOLR-DBA\mysql_full_innodb_cluster_w_replica_setup_mb_v5.js
// mysqlsh --uri root@localhost:3307 --execute="$(cat C:\Users\dbabrabo-666\Documents\ACACIOLR-DBA\mysql_full_innodb_cluster_w_replica_setup_mb_v5.js)"
// =================================================
// LOG MONITORING COMMANDS (POWERSHELL)
// =================================================
/*
// 1. Continuous monitoring with error highlighting
Get-Content -Path "C:\temp\cluster_setup.log" -Wait | 
    ForEach-Object {
        if ($_ -match "ERROR|FAIL|❌") { Write-Host $_ -ForegroundColor Red }
        elseif ($_ -match "WARN|⚠️") { Write-Host $_ -ForegroundColor Yellow }
        else { Write-Host $_ }
    }
*/
/*
// 2. Efficient tail for large log files
$file = "C:\temp\cluster_setup.log"
$reader = [System.IO.File]::OpenText($file)
$reader.BaseStream.Seek(0, [System.IO.SeekOrigin]::End) | Out-Null
while ($true) {
    if ($reader.BaseStream.Length -gt $reader.BaseStream.Position) {
        $line = $reader.ReadLine()
        Write-Output $line
    }
    Start-Sleep -Milliseconds 200
}
*/
/*
// 3. Show the last 10 lines of the log
Get-Content -Path "C:\temp\cluster_setup.log" -Tail 10
*/
// Configuration Constants
const CONFIG = {
  ports: [3307, 3310, 3320, 3330, 3340, 3350, 3360, 3370],
  primaryPorts: [3307, 3310, 3320, 3330],
  replicaPorts: [3340, 3350, 3360, 3370],
  password: 'Welcome1',
  clusterName: 'my-cluster-db-v5',
  sandboxPath: 'C:\\Users\\dbabrabo-666\\MySQL\\mysql-sandboxes',
  replicationUser: {
    username: 'repl',
    password: 'Welcome1'
  },
  weights: {
    3307: 100,
    3310: 60,
    3320: 40,
    3330: 20
  },
  timeouts: {
    clusterCreation: 30,
    instanceAdd: 15,
    stabilization: 10,
    recovery: 5
  }
};
const firstPrimaryPort = CONFIG.primaryPorts[0];
// Replica mapping (1:1 relationship)
const REPLICA_MAPPING = [
  { port: 3340, source: 3307, label: 'Replica_Primary_3307' },
  { port: 3350, source: 3310, label: 'Replica_Secondary_3310' },
  { port: 3360, source: 3320, label: 'Replica_Tertiary_3320' },
  { port: 3370, source: 3330, label: 'Replica_Quaternary_3330' }
];
// Utility Functions
function printPhase(phase, description) {
  const separator = '='.repeat(80);
  print(`\n${separator}`);
  print(`PHASE ${phase}: ${description.toUpperCase()}`);
  print(`${separator}`);
}
function printSuccess(message) {
  print(`✅ ${message}`);
}
function printWarning(message) {
  print(`⚠️  ${message}`);
}
function printError(message) {
  print(`❌ ${message}`);
}
function printInfo(message) {
  print(`ℹ️  ${message}`);
}
function sleep(seconds) {
  print(`⏳ Aguardando ${seconds} segundos...`);
  os.sleep(seconds);
}
function waitForInstanceReady(port, maxRetries = 10) {
  let retries = 0;
  while (retries < maxRetries) {
    try {
      const testSession = mysql.getSession(`root:${CONFIG.password}@localhost:${port}`);
      testSession.runSql("SELECT 1");
      testSession.close();
      return true;
    } catch (e) {
      retries++;
      print(`   Tentativa ${retries}/${maxRetries} - Aguardando instância ${port}...`);
      sleep(2);
    }
  }
  return false;
}
function checkClusterHealth(cluster) {
  try {
    const status = cluster.status();
    const healthy = status.defaultReplicaSet.status === 'OK';
    printInfo(`Status do cluster: ${status.defaultReplicaSet.status}`);
    return healthy;
  } catch (e) {
    printWarning(`Erro ao verificar saúde do cluster: ${e.message}`);
    return false;
  }
}
function safeKillSandbox(port) {
  try {
    dba.killSandboxInstance(port);
    printInfo(`Instância ${port} encerrada`);
  } catch (e) {
    // Ignore the usual "sandbox does not exist" errors
    if (e.message.includes("Unable to find pid file") || 
        e.message.includes("does not exist") ||
        e.message.includes("not found")) {
      printWarning(`Instância ${port} não estava ativa ou não existe`);
    } else {
      printWarning(`Erro ao encerrar ${port}: ${e.message}`);
    }
  }
}
function safeDeleteSandbox(port) {
  try {
    dba.deleteSandboxInstance(port);
    printInfo(`Instância ${port} removida`);
  } catch (e) {
    // Ignore the usual "sandbox does not exist" errors
    if (e.message.includes("does not exist") || 
        e.message.includes("not found")) {
      printWarning(`Instância ${port} não existe para remoção`);
    } else {
      printWarning(`Erro ao remover ${port}: ${e.message}`);
    }
  }
}
function safeCleanDirectories() {
  try {
    // Use a simple, safe cleanup command on Windows
    const command = `if exist "${CONFIG.sandboxPath}" rmdir /s /q "${CONFIG.sandboxPath}"`;
    print(`Executando: ${command}`);
    // Running external commands from mysqlsh's JS mode is unreliable,
    // so only report the command for the user to run manually
    printInfo("Comando de limpeza preparado - execute manualmente se necessário");
    printSuccess("Preparação de limpeza de diretórios concluída");
  } catch (e) {
    printWarning(`Não foi possível limpar diretórios automaticamente: ${e.message}`);
    printInfo(`Execute manualmente: rmdir /s /q "${CONFIG.sandboxPath}"`);
  }
}
// Main execution wrapped in try-catch
try {
  
  // ==============================================
  // PHASE 0: COMPREHENSIVE CLEANUP
  // ==============================================
  printPhase(0, "LIMPEZA COMPLETA DO AMBIENTE");
  
  try {
    // Dissolve existing cluster
    try {
      printInfo("Verificando cluster existente...");
      const existingCluster = dba.getCluster();
      if (existingCluster) {
        printInfo("Dissolvendo cluster existente...");
        existingCluster.dissolve({ force: true });
        printSuccess("Cluster existente dissolvido com sucesso");
        sleep(3);
      }
    } catch (e) {
      printWarning(`Nenhum cluster ativo encontrado: ${e.message}`);
    }
    
    // Kill and delete all sandbox instances with safe methods
    printInfo("Removendo todas as instâncias sandbox...");
    CONFIG.ports.forEach(port => {
      safeKillSandbox(port);
      safeDeleteSandbox(port);
    });
    
    // Clean sandbox directories safely
    safeCleanDirectories();
    
    sleep(CONFIG.timeouts.recovery);
    printSuccess("LIMPEZA CONCLUÍDA");
    
  } catch (cleanupErr) {
    printError(`Erro durante cleanup: ${cleanupErr.message}`);
    // Do not abort here; continue with cluster creation
  }
  
  // ==============================================
  // PHASE 1: DEPLOY PRIMARY INSTANCES
  // ==============================================
  printPhase(1, "CRIAÇÃO DAS INSTÂNCIAS PRIMÁRIAS");
  
  CONFIG.primaryPorts.forEach((port, index) => {
    try {
      printInfo(`Criando instância primária ${port}...`);
      
      // Simplified configuration without problematic parameters
      dba.deploySandboxInstance(port, { 
        password: CONFIG.password,
        sandboxDir: CONFIG.sandboxPath
      });
      
      // Wait for instance to be ready
      if (waitForInstanceReady(port)) {
        printSuccess(`Instância primária ${port} criada e pronta (${index + 1}/${CONFIG.primaryPorts.length})`);
      } else {
        throw new Error(`Instância ${port} não ficou pronta no tempo esperado`);
      }
      
      sleep(2);
    } catch (e) {
      if (e.message.includes("already exists")) {
        printWarning(`Instância ${port} já existe`);
      } else {
        printError(`Erro ao criar instância ${port}: ${e.message}`);
        throw e;
      }
    }
  });
  
  // ==============================================
  // PHASE 2: CONFIGURE PRIMARY INSTANCES
  // ==============================================
  printPhase(2, "CONFIGURAÇÃO DAS INSTÂNCIAS PRIMÁRIAS");
  
  CONFIG.primaryPorts.forEach((port, index) => {
    try {
      printInfo(`Configurando instância ${port} para clustering...`);
      
      // Simplified configuration
      dba.configureInstance(`root:${CONFIG.password}@localhost:${port}`, { 
        clusterAdmin: 'root',
        restart: false
      });
      
      printSuccess(`Instância ${port} configurada (${index + 1}/${CONFIG.primaryPorts.length})`);
      sleep(1);
    } catch (e) {
      printError(`Erro ao configurar instância ${port}: ${e.message}`);
      throw e;
    }
  });
  
  // ==============================================
  // PHASE 3: CLUSTER CREATION (SIMPLIFICADO)
  // ==============================================
  printPhase(3, "CRIAÇÃO DO CLUSTER INNODB");
  
  let cluster;
  try {
    printInfo(`Conectando à instância primária (${firstPrimaryPort})...`);
    shell.connect(`root:${CONFIG.password}@localhost:${firstPrimaryPort}`);
    printSuccess("Conectado à instância primária");
    
    try {
      printInfo(`Verificando se cluster '${CONFIG.clusterName}' já existe...`);
      cluster = dba.getCluster(CONFIG.clusterName);
      printSuccess(`Cluster '${CONFIG.clusterName}' existente carregado`);
    } catch (e) {
      printInfo(`Criando novo cluster '${CONFIG.clusterName}'...`);
      
      // Basic, reliable configuration for cluster creation
      cluster = dba.createCluster(CONFIG.clusterName, {
        multiPrimary: false,
        force: true
      });
      
      printSuccess(`Cluster '${CONFIG.clusterName}' criado com sucesso`);
      printInfo(`Aguardando estabilização do cluster primário...`);
      sleep(CONFIG.timeouts.clusterCreation);
      
      // Verify the cluster is healthy
      if (checkClusterHealth(cluster)) {
        printSuccess("Cluster primário está funcionando corretamente");
      } else {
        printWarning("Cluster primário pode não estar completamente estável");
      }
    }
    
  } catch (e) {
    printError(`Erro na criação/carregamento do cluster: ${e.message}`);
    throw e;
  }
  
  // ==============================================
  // PHASE 4: ADD SECONDARY INSTANCES TO CLUSTER
  // ==============================================
  printPhase(4, "ADIÇÃO DAS INSTÂNCIAS SECUNDÁRIAS");
  
  const secondaryPorts = CONFIG.primaryPorts.slice(1); // Remove first primary port
  
  secondaryPorts.forEach((port, index) => {
    try {
      printInfo(`Adicionando instância ${port} ao cluster...`);
      
      // Simplified configuration (waitRecovery was removed in newer
      // MySQL Shell releases; recoveryProgress replaces it)
      cluster.addInstance(`root:${CONFIG.password}@localhost:${port}`, {
        recoveryMethod: 'clone',
        recoveryProgress: 2
      });
      
      printSuccess(`Instância ${port} adicionada ao cluster (${index + 1}/${secondaryPorts.length})`);
      
      // Verify the instance was added correctly
      sleep(3);
      try {
        const status = cluster.status();
        const instanceStatus = status.defaultReplicaSet.topology[`127.0.0.1:${port}`];
        if (instanceStatus && instanceStatus.status === 'ONLINE') {
          printSuccess(`Instância ${port} está ONLINE no cluster`);
        } else {
          printWarning(`Instância ${port} pode não estar completamente sincronizada`);
        }
      } catch (statusErr) {
        printWarning(`Erro ao verificar status da instância ${port}: ${statusErr.message}`);
      }
      
    } catch (e) {
      printError(`Erro ao adicionar instância ${port}: ${e.message}`);
      // Do not rethrow, so the remaining instances can still be processed
      printWarning(`Continuando com as próximas instâncias...`);
    }
  });
  
  printInfo("Aguardando sincronização completa do cluster...");
  sleep(CONFIG.timeouts.stabilization);
  
  // ==============================================
  // PHASE 5: CONFIGURE INSTANCE WEIGHTS
  // ==============================================
  printPhase(5, "CONFIGURAÇÃO DE PESOS DAS INSTÂNCIAS");
  
  try {
    Object.entries(CONFIG.weights).forEach(([port, weight]) => {
      try {
        cluster.setInstanceOption(`127.0.0.1:${port}`, 'memberWeight', weight);
        printSuccess(`Peso ${weight} configurado para instância ${port}`);
      } catch (e) {
        printWarning(`Erro ao configurar peso para ${port}: ${e.message}`);
      }
    });
    printSuccess("Configuração de pesos concluída");
  } catch (e) {
    printWarning(`Erro geral na configuração de pesos: ${e.message}`);
  }
  
  // ==============================================
  // PHASE 6: DEPLOY AND CONFIGURE READ REPLICAS
  // ==============================================
  printPhase(6, "CONFIGURAÇÃO DAS RÉPLICAS DE LEITURA");
  
  REPLICA_MAPPING.forEach((replica, index) => {
    try {
      printInfo(`Processando réplica ${replica.port} para fonte ${replica.source}...`);
      
      // Deploy replica instance with simplified config
      printInfo(`- Criando instância réplica ${replica.port}...`);
      dba.deploySandboxInstance(replica.port, { 
        password: CONFIG.password,
        sandboxDir: CONFIG.sandboxPath
      });
      
      // Wait for replica to be ready
      if (!waitForInstanceReady(replica.port)) {
        throw new Error(`Réplica ${replica.port} não ficou pronta`);
      }
      
      // Configure replica instance
      printInfo(`- Configurando instância réplica ${replica.port}...`);
      dba.configureInstance(`root:${CONFIG.password}@localhost:${replica.port}`, { 
        clusterAdmin: 'root',
        restart: false
      });
      
      // Create replication user on source
      printInfo(`- Criando usuário de replicação na fonte ${replica.source}...`);
      const sourceSession = mysql.getSession(`root:${CONFIG.password}@localhost:${replica.source}`);
      sourceSession.runSql(`CREATE USER IF NOT EXISTS '${CONFIG.replicationUser.username}'@'%' IDENTIFIED BY '${CONFIG.replicationUser.password}'`);
      sourceSession.runSql(`GRANT REPLICATION SLAVE ON *.* TO '${CONFIG.replicationUser.username}'@'%'`);
      sourceSession.runSql(`GRANT BACKUP_ADMIN ON *.* TO '${CONFIG.replicationUser.username}'@'%'`);
      sourceSession.runSql("FLUSH PRIVILEGES");
      sourceSession.close();
      
      sleep(3);
      
      // Add as read replica to cluster with simplified config
      printInfo(`- Adicionando ${replica.port} como réplica de leitura...`);
      cluster.addReplicaInstance(`root:${CONFIG.password}@localhost:${replica.port}`, {
        label: replica.label,
        recoveryMethod: 'clone',
        // Pin this replica to its mapped source (the default is the primary)
        replicationSources: [`127.0.0.1:${replica.source}`]
      });
      
      printSuccess(`Réplica ${replica.port} configurada para fonte ${replica.source} (${index + 1}/${REPLICA_MAPPING.length})`);
      sleep(CONFIG.timeouts.recovery);
      
    } catch (e) {
      printError(`Erro na configuração da réplica ${replica.port}: ${e.message}`);
      
      // Cleanup failed replica
      try {
        cluster.removeInstance(`root@localhost:${replica.port}`, { force: true });
        safeKillSandbox(replica.port);
        safeDeleteSandbox(replica.port);
        printInfo(`Limpeza da réplica ${replica.port} concluída`);
      } catch (cleanupErr) {
        printWarning(`Erro na limpeza da réplica ${replica.port}: ${cleanupErr.message}`);
      }
    }
  });
  
  // ==============================================
  // PHASE 7: FINAL VERIFICATION AND STATUS
  // ==============================================
  printPhase(7, "VERIFICAÇÃO FINAL E STATUS");
  
  try {
    printInfo("Aguardando estabilização final...");
    sleep(CONFIG.timeouts.stabilization);
    
    // DETAILED CLUSTER STATUS
    print("\n📊 STATUS COMPLETO DO CLUSTER:");
    print("=" + "=".repeat(70));
    
    try {
      const clusterStatus = cluster.status({extended: true});
      print(JSON.stringify(clusterStatus, null, 2));
      
      // Status analysis
      const defaultReplicaSet = clusterStatus.defaultReplicaSet;
      print(`\n🎯 ANÁLISE DO STATUS:`);
      print(`• Status Geral: ${defaultReplicaSet.status}`);
      print(`• Modo: ${defaultReplicaSet.mode || 'Single-Primary'}`);
      print(`• SSL Mode: ${defaultReplicaSet.ssl || 'N/A'}`);
      
      // Count instances per status
      const topology = defaultReplicaSet.topology;
      const statusCount = {};
      Object.values(topology).forEach(instance => {
        const status = instance.status;
        statusCount[status] = (statusCount[status] || 0) + 1;
      });
      
      print(`\n📊 RESUMO POR STATUS:`);
      Object.entries(statusCount).forEach(([status, count]) => {
        print(`• ${status}: ${count} instância(s)`);
      });
      
    } catch (e) {
      printError(`Erro ao obter status do cluster: ${e.message}`);
    }
    
    // CONNECTIVITY TEST
    print("\n🔗 TESTE DE CONECTIVIDADE:");
    print("=" + "=".repeat(70));
    CONFIG.primaryPorts.forEach(port => {
      try {
        const testSession = mysql.getSession(`root:${CONFIG.password}@localhost:${port}`);
        const result = testSession.runSql("SELECT @@hostname, @@port, @@server_id");
        const row = result.fetchOne();
        printSuccess(`Porta ${port}: Conectividade OK - Server ID: ${row[2]}`);
        testSession.close();
      } catch (e) {
        printError(`Porta ${port}: Erro de conectividade - ${e.message}`);
      }
    });
    
  } catch (e) {
    printWarning(`Erro na verificação final: ${e.message}`);
  }
  
  // ==============================================
  // FINAL SUMMARY
  // ==============================================
  print("\n" + "🎉".repeat(80));
  print("CONFIGURAÇÃO CONCLUÍDA COM SUCESSO!");
  print("🎉".repeat(80));
  
  print("\n📋 RESUMO DA CONFIGURAÇÃO:");
  print("-".repeat(70));
  print(`• Cluster Name: ${CONFIG.clusterName}`);
  print(`• Instâncias Primárias: ${CONFIG.primaryPorts.length} (${CONFIG.primaryPorts.join(', ')})`);
  print(`• Réplicas de Leitura: ${REPLICA_MAPPING.length} (${REPLICA_MAPPING.map(r => r.port).join(', ')})`);
  print(`• Total de Instâncias: ${CONFIG.ports.length}`);
  print(`• Arquitetura: 4-Node Primary + 4 Read Replicas (1:1)`);
  
  print("\n🔗 MAPEAMENTO DE RÉPLICAS:");
  print("-".repeat(70));
  REPLICA_MAPPING.forEach(replica => {
    print(`• ${replica.source} → ${replica.port} (${replica.label})`);
  });
  
  print("\n⚖️  PESOS CONFIGURADOS:");
  print("-".repeat(70));
  Object.entries(CONFIG.weights).forEach(([port, weight]) => {
    print(`• Porta ${port}: Peso ${weight}`);
  });
  
  print("\n🚀 PRÓXIMOS PASSOS:");
  print("-".repeat(70));
  print("• Configurar MySQL Router para balanceamento de carga");
  print("• Implementar monitoramento e alertas");
  print("• Configurar backups automatizados");
  print("• Testar failover e recuperação");
  print("• Ajustar configurações de performance conforme necessário");
  
  print("\n💡 COMANDOS ÚTEIS:");
  print("-".repeat(70));
  print("• Status do cluster: cluster.status({extended: true})");
  print("• Conectar ao cluster: shell.connect('root@localhost:3307')");
  print(`• Obter cluster: dba.getCluster('${CONFIG.clusterName}')`);
  print("• Rescan do cluster: cluster.rescan()");
  
  printSuccess("Script executado com sucesso!");
} catch (mainErr) {
  // ==============================================
  // EMERGENCY ERROR HANDLING
  // ==============================================
  print("\n" + "🚨".repeat(80));
  print("ERRO CRÍTICO DETECTADO - INICIANDO LIMPEZA DE EMERGÊNCIA");
  print("🚨".repeat(80));
  
  printError(`ERRO PRINCIPAL: ${mainErr.message}`);
  printError(`STACK TRACE: ${mainErr.stack || 'N/A'}`);
  
  printInfo("Executando limpeza de emergência...");
  
  try {
    // Emergency cluster dissolution
    try {
      const emergencyCluster = dba.getCluster();
      if (emergencyCluster) {
        emergencyCluster.dissolve({ force: true });
        printInfo("Cluster dissolvido durante limpeza de emergência");
      }
    } catch (e) {
      printWarning(`Erro ao dissolver cluster: ${e.message}`);
    }
    
    // Kill and delete all sandbox instances safely
    printInfo("Removendo todas as instâncias sandbox...");
    CONFIG.ports.forEach(port => {
      safeKillSandbox(port);
      safeDeleteSandbox(port);
    });
    
    // Safe directory cleanup
    safeCleanDirectories();
    
    printSuccess("Limpeza de emergência concluída");
    
  } catch (emergencyErr) {
    printError(`Erro durante limpeza de emergência: ${emergencyErr.message}`);
  }
  
  print("\n💡 SUGESTÕES PARA RESOLUÇÃO:");
  print("-".repeat(70));
  print("• Verifique se as portas estão disponíveis: netstat -an | findstr :330");
  print("• Confirme se o MySQL Shell tem permissões adequadas");
  print("• Verifique a conectividade de rede");
  print("• Analise os logs do MySQL para erros específicos");
  print("• Execute o script novamente após corrigir os problemas");
  print("• Verifique se há processos MySQL em execução: tasklist | findstr mysql");
  print(`• Limpe manualmente o diretório: rmdir /s /q "${CONFIG.sandboxPath}"`);
  
  // Re-throw the error for debugging
  throw mainErr;
}

LOG – Output:

PS C:\Users\dbabrabo-666> mysqlsh --uri root:Welcome1@localhost:3306 --file="C:\Users\dbabrabo-666\Documents\ACACIOLR-DBA\mysql_full_innodb_cluster_w_replica_setup_mb_v4.js" --log-level=8 --log-file="C:\temp\cluster_setup.log"
WARNING: Using a password on the command line interface can be insecure.
================================================================================
PHASE 0: LIMPEZA COMPLETA DO AMBIENTE
================================================================================
ℹ️  Verificando cluster existente...
⚠️  Nenhum cluster ativo encontrado: This function is not available through a session to a standalone instance (metadata exists, instance belongs to that metadata, but GR is not active)
ℹ️  Removendo todas as instâncias sandbox...
Killing MySQL instance...
⚠️  Instância 3307 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3307 não existe para remoção
Killing MySQL instance...
⚠️  Instância 3310 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3310 não existe para remoção
Killing MySQL instance...
⚠️  Instância 3320 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3320 não existe para remoção
Killing MySQL instance...
⚠️  Instância 3330 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3330 não existe para remoção
Killing MySQL instance...
⚠️  Instância 3340 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3340 não existe para remoção
Killing MySQL instance...
⚠️  Instância 3350 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3350 não existe para remoção
Killing MySQL instance...
⚠️  Instância 3360 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3360 não existe para remoção
Killing MySQL instance...
⚠️  Instância 3370 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3370 não existe para remoção
Executando: if exist "C:\Users\dbabrabo-666\MySQL\mysql-sandboxes" rmdir /s /q "C:\Users\dbabrabo-666\MySQL\mysql-sandboxes"
ℹ️  Comando de limpeza preparado - execute manualmente se necessário
✅ Preparação de limpeza de diretórios concluída
⏳ Aguardando 5 segundos...
✅ LIMPEZA CONCLUÍDA
================================================================================
PHASE 1: CRIAÇÃO DAS INSTÂNCIAS PRIMÁRIAS
================================================================================
ℹ️  Criando instância primária 3307...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3307
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3307 successfully deployed and started.
Use shell.connect('root@localhost:3307') to connect to the instance.
✅ Instância primária 3307 criada e pronta (1/4)
⏳ Aguardando 2 segundos...
ℹ️  Criando instância primária 3310...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3310
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3310 successfully deployed and started.
Use shell.connect('root@localhost:3310') to connect to the instance.
✅ Instância primária 3310 criada e pronta (2/4)
⏳ Aguardando 2 segundos...
ℹ️  Criando instância primária 3320...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3320
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3320 successfully deployed and started.
Use shell.connect('root@localhost:3320') to connect to the instance.
✅ Instância primária 3320 criada e pronta (3/4)
⏳ Aguardando 2 segundos...
ℹ️  Criando instância primária 3330...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3330
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3330 successfully deployed and started.
Use shell.connect('root@localhost:3330') to connect to the instance.
✅ Instância primária 3330 criada e pronta (4/4)
⏳ Aguardando 2 segundos...
================================================================================
PHASE 2: CONFIGURAÇÃO DAS INSTÂNCIAS PRIMÁRIAS
================================================================================
ℹ️  Configurando instância 3307 para clustering...
Configuring local MySQL instance listening at port 3307 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3307
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3307' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
✅ Instância 3307 configurada (1/4)
⏳ Aguardando 1 segundos...
ℹ️  Configurando instância 3310 para clustering...
Configuring local MySQL instance listening at port 3310 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3310
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3310' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
✅ Instância 3310 configurada (2/4)
⏳ Aguardando 1 segundos...
ℹ️  Configurando instância 3320 para clustering...
Configuring local MySQL instance listening at port 3320 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3320
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3320' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
✅ Instância 3320 configurada (3/4)
⏳ Aguardando 1 segundos...
ℹ️  Configurando instância 3330 para clustering...
Configuring local MySQL instance listening at port 3330 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3330
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3330' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
✅ Instância 3330 configurada (4/4)
⏳ Aguardando 1 segundos...
================================================================================
PHASE 3: CRIAÇÃO DO CLUSTER INNODB
================================================================================
ℹ️  Conectando à instância primária (3307)...
✅ Conectado à instância primária
ℹ️  Verificando se cluster 'my-cluster-db-v5' já existe...
ERROR: Command not available on an unmanaged standalone instance.
ℹ️  Criando novo cluster 'my-cluster-db-v5'...A new InnoDB Cluster will be created on instance '127.0.0.1:3307'.
Validating instance configuration at localhost:3307...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3307
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3307'. Use the localAddress option to override.
* Checking connectivity and SSL configuration...
Creating InnoDB Cluster 'my-cluster-db-v5' on '127.0.0.1:3307'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
✅ Cluster 'my-cluster-db-v5' criado com sucesso
ℹ️  Aguardando estabilização do cluster primário...
⏳ Aguardando 30 segundos...
ℹ️  Status do cluster: OK_NO_TOLERANCE
⚠️  Cluster primário pode não estar completamente estável
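The `OK_NO_TOLERANCE` status at this point is expected, not a failure: Group Replication keeps working only while a majority of members is reachable, so a group of n members tolerates floor((n-1)/2) failures — and with a single seed member that is zero. A quick sketch of the rule (plain Node-compatible JavaScript, not the Shell AdminAPI):

```javascript
// Number of member failures a Group Replication cluster of `n` members
// can tolerate while still keeping a majority (quorum).
function faultTolerance(n) {
  return Math.floor((n - 1) / 2);
}

// 1 member  -> 0 failures tolerated (OK_NO_TOLERANCE, as in the log above)
// 3 members -> 1 failure tolerated  (the minimum for real HA)
// 4 members -> 1 failure tolerated  (an even size adds no extra tolerance)
for (const n of [1, 3, 4, 5]) {
  console.log(`${n} member(s): tolerates ${faultTolerance(n)} failure(s)`);
}
```

This is also why mysqlsh itself prints "At least 3 instances are needed for the cluster to be able to withstand up to one server failure" right after `createCluster()`.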
================================================================================
PHASE 4: ADIÇÃO DAS INSTÂNCIAS SECUNDÁRIAS
================================================================================
ℹ️  Adicionando instância 3310 ao cluster...
❌ Erro ao adicionar instância 3310: Argument #2: Invalid options: waitRecovery
⚠️  Continuando com as próximas instâncias...
ℹ️  Adicionando instância 3320 ao cluster...
❌ Erro ao adicionar instância 3320: Argument #2: Invalid options: waitRecovery
⚠️  Continuando com as próximas instâncias...
ℹ️  Adicionando instância 3330 ao cluster...
❌ Erro ao adicionar instância 3330: Argument #2: Invalid options: waitRecovery
⚠️  Continuando com as próximas instâncias...
ℹ️  Aguardando sincronização completa do cluster...
⏳ Aguardando 10 segundos...
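The `Invalid options: waitRecovery` errors are a Shell-version issue, not a cluster problem: the `waitRecovery` option of `Cluster.addInstance()` was deprecated in MySQL Shell 8.1 in favor of `recoveryProgress` and later removed, so a recent Shell (9.3.0 here) rejects any script that still passes it. The Linux version of the script further down simply drops the option and passes only `recoveryMethod: 'clone'`. A hedged sketch of a version-aware options builder (plain JavaScript, illustrative only — the actual `addInstance()` call exists only inside MySQL Shell, and the exact cutover version used below is an assumption):

```javascript
// Build Cluster.addInstance() options compatible with old and new
// MySQL Shell releases. `waitRecovery` was replaced by
// `recoveryProgress` (an integer verbosity level) in newer Shells.
// NOTE: treating major version >= 9 as the cutover is a simplification.
function addInstanceOptions(shellMajor) {
  const opts = { recoveryMethod: 'clone' };
  if (shellMajor >= 9) {
    opts.recoveryProgress = 1;   // new-style progress reporting
  } else {
    opts.waitRecovery = 2;       // legacy option, rejected by newer Shells
  }
  return opts;
}

// Inside MySQL Shell this would be used roughly as:
//   cluster.addInstance("root@localhost:3310", addInstanceOptions(9));
console.log(addInstanceOptions(9));
```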
================================================================================
PHASE 5: CONFIGURAÇÃO DE PESOS DAS INSTÂNCIAS
================================================================================
Setting the value of 'memberWeight' to '100' in the instance: '127.0.0.1:3307' ...
Successfully set the value of 'memberWeight' to '100' in the cluster member: '127.0.0.1:3307'.
✅ Peso 100 configurado para instância 3307
⚠️  Erro ao configurar peso para 3310: The instance '127.0.0.1:3310' does not belong to the cluster.
⚠️  Erro ao configurar peso para 3320: The instance '127.0.0.1:3320' does not belong to the cluster.
⚠️  Erro ao configurar peso para 3330: The instance '127.0.0.1:3330' does not belong to the cluster.
✅ Configuração de pesos concluída
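The weight errors follow directly from Phase 4: `setInstanceOption()` can only target members that actually joined the cluster, and the three secondaries never made it in. One way to avoid the noise is to filter the configured weights against the `cluster.status()` topology before applying them — a minimal sketch over the same JSON structure (plain JavaScript, not the Shell API):

```javascript
// Given the topology object from cluster.status() and the configured
// weights, return only the weights that can actually be applied.
function applicableWeights(topology, weights) {
  const result = {};
  for (const [port, weight] of Object.entries(weights)) {
    if (topology[`127.0.0.1:${port}`]) {
      result[port] = weight;     // member is actually in the cluster
    }
  }
  return result;
}

// Mirrors the run above: only 3307 ever joined, so only its weight applies.
const topology = { "127.0.0.1:3307": { memberState: "ONLINE" } };
const weights = { 3307: 100, 3310: 60, 3320: 40, 3330: 20 };
console.log(applicableWeights(topology, weights)); // { '3307': 100 }
```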
================================================================================
PHASE 6: CONFIGURAÇÃO DAS RÉPLICAS DE LEITURA
================================================================================
ℹ️  Processando réplica 3340 para fonte 3307...
ℹ️  - Criando instância réplica 3340...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3340
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3340 successfully deployed and started.
Use shell.connect('root@localhost:3340') to connect to the instance.
ℹ️  - Configurando instância réplica 3340...
Configuring local MySQL instance listening at port 3340 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3340
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3340' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
ℹ️  - Criando usuário de replicação na fonte 3307...
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3340 como réplica de leitura...
Setting up '127.0.0.1:3340' as a Read Replica of Cluster 'my-cluster-db-v5'.
Validating instance configuration at localhost:3340...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3340
Instance configuration is suitable.
* Checking transaction state of the instance...
NOTE: The target instance '127.0.0.1:3340' has not been pre-provisioned (GTID set is empty).
Clone based recovery selected through the recoveryMethod option
* Checking connectivity and SSL configuration...
Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 127.0.0.1:3340 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ============================================================    0%  In Progress
    REDO COPY  ============================================================    0%  Not Started
NOTE: 127.0.0.1:3340 is shutting down...
* Waiting for server restart... ready
* 127.0.0.1:3340 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 74.95 MB transferred in about 1 second (~74.95 MB/s)
* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3340 to 127.0.0.1:3307
* Waiting for Read-Replica '127.0.0.1:3340' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%
'127.0.0.1:3340' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.
✅ Réplica 3340 configurada para fonte 3307 (1/4)
⏳ Aguardando 5 segundos...
ℹ️  Processando réplica 3350 para fonte 3310...
ℹ️  - Criando instância réplica 3350...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3350
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3350 successfully deployed and started.
Use shell.connect('root@localhost:3350') to connect to the instance.
ℹ️  - Configurando instância réplica 3350...
Configuring local MySQL instance listening at port 3350 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3350
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3350' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
ℹ️  - Criando usuário de replicação na fonte 3310...
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3350 como réplica de leitura...
Setting up '127.0.0.1:3350' as a Read Replica of Cluster 'my-cluster-db-v5'.
Validating instance configuration at localhost:3350...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3350
Instance configuration is suitable.
* Checking transaction state of the instance...
NOTE: The target instance '127.0.0.1:3350' has not been pre-provisioned (GTID set is empty).
Clone based recovery selected through the recoveryMethod option
* Checking connectivity and SSL configuration...
Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 127.0.0.1:3350 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
NOTE: 127.0.0.1:3350 is shutting down...
* Waiting for server restart... ready
* 127.0.0.1:3350 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 74.88 MB transferred in about 1 second (~74.88 MB/s)
* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3350 to 127.0.0.1:3307
* Waiting for Read-Replica '127.0.0.1:3350' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%
'127.0.0.1:3350' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.
✅ Réplica 3350 configurada para fonte 3310 (2/4)
⏳ Aguardando 5 segundos...
ℹ️  Processando réplica 3360 para fonte 3320...
ℹ️  - Criando instância réplica 3360...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3360
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3360 successfully deployed and started.
Use shell.connect('root@localhost:3360') to connect to the instance.
ℹ️  - Configurando instância réplica 3360...
Configuring local MySQL instance listening at port 3360 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3360
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3360' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
ℹ️  - Criando usuário de replicação na fonte 3320...
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3360 como réplica de leitura...
Setting up '127.0.0.1:3360' as a Read Replica of Cluster 'my-cluster-db-v5'.
Validating instance configuration at localhost:3360...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3360
Instance configuration is suitable.
* Checking transaction state of the instance...
NOTE: The target instance '127.0.0.1:3360' has not been pre-provisioned (GTID set is empty).
Clone based recovery selected through the recoveryMethod option
* Checking connectivity and SSL configuration...
Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 127.0.0.1:3360 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
NOTE: 127.0.0.1:3360 is shutting down...
* Waiting for server restart... ready
* 127.0.0.1:3360 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 74.89 MB transferred in about 1 second (~74.89 MB/s)
* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3360 to 127.0.0.1:3307
* Waiting for Read-Replica '127.0.0.1:3360' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%
'127.0.0.1:3360' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.
✅ Réplica 3360 configurada para fonte 3320 (3/4)
⏳ Aguardando 5 segundos...
ℹ️  Processando réplica 3370 para fonte 3330...
ℹ️  - Criando instância réplica 3370...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3370
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3370 successfully deployed and started.
Use shell.connect('root@localhost:3370') to connect to the instance.
ℹ️  - Configurando instância réplica 3370...
Configuring local MySQL instance listening at port 3370 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3370
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3370' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
ℹ️  - Criando usuário de replicação na fonte 3330...
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3370 como réplica de leitura...
Setting up '127.0.0.1:3370' as a Read Replica of Cluster 'my-cluster-db-v5'.
Validating instance configuration at localhost:3370...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3370
Instance configuration is suitable.
* Checking transaction state of the instance...
NOTE: The target instance '127.0.0.1:3370' has not been pre-provisioned (GTID set is empty).
Clone based recovery selected through the recoveryMethod option
* Checking connectivity and SSL configuration...
Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 127.0.0.1:3370 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
* Clone process has finished: 74.89 MB transferred in about 1 second (~74.89 MB/s)
* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3370 to 127.0.0.1:3307
* Waiting for Read-Replica '127.0.0.1:3370' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%
'127.0.0.1:3370' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.
✅ Réplica 3370 configurada para fonte 3330 (4/4)
⏳ Aguardando 5 segundos...

================================================================================
PHASE 7: VERIFICAÇÃO FINAL E STATUS
================================================================================
ℹ️  Aguardando estabilização final...
⏳ Aguardando 10 segundos...

📊 STATUS COMPLETO DO CLUSTER:
=======================================================================
{
  "clusterName": "my-cluster-db-v5",
  "defaultReplicaSet": {
    "GRProtocolVersion": "8.0.27",
    "communicationStack": "MYSQL",
    "groupName": "48fcdfde-4c7a-11f0-9ee3-18a59cb32d88",
    "groupViewChangeUuid": "AUTOMATIC",
    "groupViewId": "17502748550958043:1",
    "name": "default",
    "paxosSingleLeader": "OFF",
    "primary": "127.0.0.1:3307",
    "ssl": "REQUIRED",
    "status": "OK_NO_TOLERANCE",
    "statusText": "Cluster is NOT tolerant to any failures.",
    "topology": {
      "127.0.0.1:3307": {
        "address": "127.0.0.1:3307",
        "applierWorkerThreads": 4,
        "fenceSysVars": [],
        "memberId": "215a2338-4c7a-11f0-8f41-18a59cb32d88",
        "memberRole": "PRIMARY",
        "memberState": "ONLINE",
        "mode": "R/W",
        "readReplicas": {
          "Replica_Primary_3307": {
            "address": "127.0.0.1:3340",
            "applierStatus": "APPLIED_ALL",
            "applierThreadState": "Waiting for an event from Coordinator",
            "applierWorkerThreads": 4,
            "receiverStatus": "ON",
            "receiverThreadState": "Waiting for source to send event",
            "replicationLag": "applier_queue_applied",
            "replicationSources": [
              "PRIMARY"
            ],
            "replicationSsl": "TLS_AES_128_GCM_SHA256 TLSv1.3",
            "role": "READ_REPLICA",
            "status": "ONLINE",
            "version": "9.3.0"
          },
          "Replica_Quaternary_3330": {
            "address": "127.0.0.1:3370",
            "applierStatus": "APPLIED_ALL",
            "applierThreadState": "Waiting for an event from Coordinator",
            "applierWorkerThreads": 4,
            "receiverStatus": "ON",
            "receiverThreadState": "Waiting for source to send event",
            "replicationLag": "applier_queue_applied",
            "replicationSources": [
              "PRIMARY"
            ],
            "replicationSsl": "TLS_AES_128_GCM_SHA256 TLSv1.3",
            "role": "READ_REPLICA",
            "status": "ONLINE",
            "version": "9.3.0"
          },
          "Replica_Secondary_3310": {
            "address": "127.0.0.1:3350",
            "applierStatus": "APPLIED_ALL",
            "applierThreadState": "Waiting for an event from Coordinator",
            "applierWorkerThreads": 4,
            "receiverStatus": "ON",
            "receiverThreadState": "Waiting for source to send event",
            "replicationLag": "applier_queue_applied",
            "replicationSources": [
              "PRIMARY"
            ],
            "replicationSsl": "TLS_AES_128_GCM_SHA256 TLSv1.3",
            "role": "READ_REPLICA",
            "status": "ONLINE",
            "version": "9.3.0"
          },
          "Replica_Tertiary_3320": {
            "address": "127.0.0.1:3360",
            "applierStatus": "APPLIED_ALL",
            "applierThreadState": "Waiting for an event from Coordinator",
            "applierWorkerThreads": 4,
            "receiverStatus": "ON",
            "receiverThreadState": "Waiting for source to send event",
            "replicationLag": "applier_queue_applied",
            "replicationSources": [
              "PRIMARY"
            ],
            "replicationSsl": "TLS_AES_128_GCM_SHA256 TLSv1.3",
            "role": "READ_REPLICA",
            "status": "ONLINE",
            "version": "9.3.0"
          }
        },
        "replicationLag": "applier_queue_applied",
        "role": "HA",
        "status": "ONLINE",
        "version": "9.3.0"
      }
    },
    "topologyMode": "Single-Primary"
  },
  "groupInformationSourceMember": "127.0.0.1:3307",
  "metadataVersion": "2.3.0"
}
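The status JSON above is also convenient to post-process, since the analysis the script prints next (overall status, members per state, replica count) all comes from the same structure. A minimal summarizer (plain JavaScript over a trimmed copy of that JSON; field names match the output above):

```javascript
// Summarize a cluster.status() payload: count members by state and
// collect the read replicas attached to each member.
function summarizeStatus(status) {
  const byState = {};
  let readReplicas = 0;
  const topology = status.defaultReplicaSet.topology;
  for (const member of Object.values(topology)) {
    byState[member.memberState] = (byState[member.memberState] || 0) + 1;
    readReplicas += Object.keys(member.readReplicas || {}).length;
  }
  return {
    cluster: status.clusterName,
    overall: status.defaultReplicaSet.status,
    byState,
    readReplicas
  };
}

// Trimmed version of the JSON printed above:
const status = {
  clusterName: "my-cluster-db-v5",
  defaultReplicaSet: {
    status: "OK_NO_TOLERANCE",
    topology: {
      "127.0.0.1:3307": {
        memberState: "ONLINE",
        readReplicas: {
          Replica_Primary_3307: {}, Replica_Secondary_3310: {},
          Replica_Tertiary_3320: {}, Replica_Quaternary_3330: {}
        }
      }
    }
  }
};
console.log(summarizeStatus(status));
// -> overall OK_NO_TOLERANCE, 1 ONLINE member, 4 read replicas
```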
🎯 ANÁLISE DO STATUS:
• Status Geral: OK_NO_TOLERANCE
• Modo: Single-Primary
• SSL Mode: REQUIRED

📊 RESUMO POR STATUS:
• ONLINE: 1 instância(s)

🔗 TESTE DE CONECTIVIDADE:
=======================================================================
✅ Porta 3307: Conectividade OK - Server ID: 3820020054
✅ Porta 3310: Conectividade OK - Server ID: 3359029909
✅ Porta 3320: Conectividade OK - Server ID: 1516761045
✅ Porta 3330: Conectividade OK - Server ID: 272308050

🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉
CONFIGURAÇÃO CONCLUÍDA COM SUCESSO!
🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉

📋 RESUMO DA CONFIGURAÇÃO:
----------------------------------------------------------------------
• Cluster Name: my-cluster-db-v5
• Instâncias Primárias: 4 (3307, 3310, 3320, 3330)
• Réplicas de Leitura: 4 (3340, 3350, 3360, 3370)
• Total de Instâncias: 8
• Arquitetura: 4-Node Primary + 4 Read Replicas (1:1)

🔗 MAPEAMENTO DE RÉPLICAS:
----------------------------------------------------------------------
• 3307 → 3340 (Replica_Primary_3307)
• 3310 → 3350 (Replica_Secondary_3310)
• 3320 → 3360 (Replica_Tertiary_3320)
• 3330 → 3370 (Replica_Quaternary_3330)

⚖️  PESOS CONFIGURADOS:
----------------------------------------------------------------------
• Porta 3307: Peso 100
• Porta 3310: Peso 60
• Porta 3320: Peso 40
• Porta 3330: Peso 20

🚀 PRÓXIMOS PASSOS:
----------------------------------------------------------------------
• Configurar MySQL Router para balanceamento de carga
• Implementar monitoramento e alertas
• Configurar backups automatizados
• Testar failover e recuperação
• Ajustar configurações de performance conforme necessário

💡 COMANDOS ÚTEIS:
----------------------------------------------------------------------
• Status do cluster: cluster.status({extended: true})
• Conectar ao cluster: shell.connect('root@localhost:3307')
• Obter cluster: dba.getCluster('my-cluster-db-v5')
• Rescan do cluster: cluster.rescan()

✅ Script executado com sucesso!
PS C:\Users\dbabrabo-666>

Linux/macOS/Unix version of the script

// ============================================================================================
// MYSQL INNODB CLUSTER - ENTERPRISE PRODUCTION SETUP
// 4-NODE CLUSTER WITH 1:1 READ REPLICAS
// COMPLETE CLEANUP + VERIFICATION + ERROR HANDLING
// ============================================================================================
//
// DESCRIPTION:
//   Automated script that creates a MySQL InnoDB Cluster with 4 primary nodes and 4 read
//   replicas (1:1 mapping). Includes full cleanup, validation, and error handling.
//
// REQUIREMENTS:
//   - MySQL Shell 8.0+
//   - Operating system: Linux/macOS
//   - RAM: at least 2 GB free
//   - Disk: at least 5 GB free
//   - Ports 3307-3370 must be free
//
// AUTHOR: Acacio LR - DBA
// ============================================================================================
// HOW TO USE THIS SCRIPT:
// ============================================================================================
//
// 1. SAVE THE SCRIPT:
//    Save this file as 'mysql_innodb_cluster_macOS_mb.js' in your home directory:
//    $ nano ~/mysql_innodb_cluster_macOS_mb.js
//    (paste the contents and save with Ctrl+X, Y, Enter)
//
// 2. RUN THE SCRIPT (pick one option):
//
//    OPTION A - Simple run:
//    $ mysqlsh --file ~/mysql_innodb_cluster_macOS_mb.js
//
//    OPTION B - With detailed logging:
//    $ mysqlsh --file mysql_innodb_cluster_macOS_mb.js --log-level=8 --log-file=/tmp/cluster_setup.log
//
//    OPTION C - From inside MySQL Shell:
//    $ mysqlsh
//    MySQL JS> \source ~/mysql_innodb_cluster_macOS_mb.js
//
// 3. MONITOR EXECUTION (in another terminal):
//    $ tail -f /tmp/cluster_setup.log
//
// ============================================================================================

// Configuration Constants
const CONFIG = {
  ports: [3307, 3310, 3320, 3330, 3340, 3350, 3360, 3370],
  primaryPorts: [3307, 3310, 3320, 3330],
  replicaPorts: [3340, 3350, 3360, 3370],
  password: 'Welcome1',
  clusterName: 'my-cluster-db-v5',
  sandboxPath: '/Users/acaciolr/mysql-sandboxes',
  replicationUser: {
    username: 'repl',
    password: 'Welcome1'
  },
  weights: {
    3307: 100,
    3310: 60,
    3320: 40,
    3330: 20
  },
  timeouts: {
    clusterCreation: 30,
    instanceAdd: 20,
    stabilization: 15,
    recovery: 10
  }
};

const firstPrimaryPort = CONFIG.primaryPorts[0];

// Replica mapping (1:1 relationship)
const REPLICA_MAPPING = [
  { port: 3340, source: 3307, label: 'Replica_Primary_3307' },
  { port: 3350, source: 3310, label: 'Replica_Secondary_3310' },
  { port: 3360, source: 3320, label: 'Replica_Tertiary_3320' },
  { port: 3370, source: 3330, label: 'Replica_Quaternary_3330' }
];

// Utility Functions
function printPhase(phase, description) {
  const separator = '='.repeat(80);
  print("\n" + separator);
  print("PHASE " + phase + ": " + description.toUpperCase());
  print(separator + "\n");
}

function printSuccess(message) {
  print("✅ " + message);
}

function printWarning(message) {
  print("⚠️  " + message);
}

function printError(message) {
  print("❌ " + message);
}

function printInfo(message) {
  print("ℹ️  " + message);
}

function sleep(seconds) {
  print("⏳ Aguardando " + seconds + " segundos...\n");
  os.sleep(seconds);
}

function waitForInstanceReady(port, maxRetries = 15) {
  let retries = 0;
  while (retries < maxRetries) {
    try {
      const testSession = mysql.getSession("root:" + CONFIG.password + "@localhost:" + port);
      testSession.runSql("SELECT 1");
      testSession.close();
      return true;
    } catch (e) {
      retries++;
      print("   Tentativa " + retries + "/" + maxRetries + " - Aguardando instância " + port + "...");
      sleep(3);
    }
  }
  return false;
}

function checkClusterHealth(cluster) {
  try {
    const status = cluster.status();
    const healthy = status.defaultReplicaSet.status === 'OK' || 
                   status.defaultReplicaSet.status === 'OK_NO_TOLERANCE' ||
                   status.defaultReplicaSet.status === 'OK_PARTIAL';
    printInfo("Status do cluster: " + status.defaultReplicaSet.status);
    return healthy;
  } catch (e) {
    printWarning("Erro ao verificar saúde do cluster: " + e.message);
    return false;
  }
}

function safeKillSandbox(port) {
  try {
    dba.killSandboxInstance(port);
    printInfo("Instância " + port + " encerrada");
  } catch (e) {
    if (e.message.includes("Unable to find pid file") || 
        e.message.includes("does not exist") ||
        e.message.includes("not found")) {
      // Silently ignore if instance doesn't exist
    } else {
      printWarning("Erro ao encerrar " + port + ": " + e.message);
    }
  }
}

function safeDeleteSandbox(port) {
  try {
    dba.deleteSandboxInstance(port);
    printInfo("Instância " + port + " removida");
  } catch (e) {
    if (e.message.includes("does not exist") || 
        e.message.includes("not found")) {
      // Silently ignore if instance doesn't exist
    } else {
      printWarning("Erro ao remover " + port + ": " + e.message);
    }
  }
}

function safeCleanDirectories() {
  try {
    // NOTE: MySQL Shell's JS mode has no portable way to run OS commands,
    // so the cleanup command is only printed for manual execution.
    const command = "rm -rf " + CONFIG.sandboxPath;
    printInfo("Comando de limpeza preparado: " + command);
    printSuccess("Preparação de limpeza de diretórios concluída");
  } catch (e) {
    printWarning("Não foi possível limpar diretórios automaticamente: " + e.message);
    printInfo("Execute manualmente: rm -rf " + CONFIG.sandboxPath);
  }
}

// Main execution wrapped in try-catch
try {
  
  // ==============================================
  // PHASE 0: COMPREHENSIVE CLEANUP
  // ==============================================
  printPhase(0, "LIMPEZA COMPLETA DO AMBIENTE");
  
  try {
    // Dissolve existing cluster
    try {
      printInfo("Verificando cluster existente...");
      const existingCluster = dba.getCluster();
      if (existingCluster) {
        printInfo("Dissolvendo cluster existente...");
        existingCluster.dissolve({ force: true });
        printSuccess("Cluster existente dissolvido com sucesso\n");
        sleep(3);
      }
    } catch (e) {
      printWarning("Nenhum cluster ativo encontrado: Iniciando nova configuração\n");
    }
    
    // Kill and delete all sandbox instances
    printInfo("Removendo todas as instâncias sandbox...");
    CONFIG.ports.forEach(port => {
      safeKillSandbox(port);
    });
    
    print(""); // Blank line
    
    CONFIG.ports.forEach(port => {
      safeDeleteSandbox(port);
    });
    
    print(""); // Blank line
    
    // Clean sandbox directories safely
    safeCleanDirectories();
    
    sleep(CONFIG.timeouts.recovery);
    printSuccess("LIMPEZA CONCLUÍDA\n");
    
  } catch (cleanupErr) {
    printError("Erro durante cleanup: " + cleanupErr.message);
  }
  
  // ==============================================
  // PHASE 1: DEPLOY PRIMARY INSTANCES
  // ==============================================
  printPhase(1, "CRIAÇÃO DAS INSTÂNCIAS PRIMÁRIAS");
  
  CONFIG.primaryPorts.forEach((port, index) => {
    try {
      printInfo("Criando instância primária " + port + "...");
      
      dba.deploySandboxInstance(port, { 
        password: CONFIG.password,
        sandboxDir: CONFIG.sandboxPath
      });
      
      if (waitForInstanceReady(port)) {
        printSuccess("Instância primária " + port + " criada e pronta (" + (index + 1) + "/" + CONFIG.primaryPorts.length + ")\n");
      } else {
        throw new Error("Instância " + port + " não ficou pronta no tempo esperado");
      }
      
      sleep(2);
    } catch (e) {
      if (e.message.includes("already exists")) {
        printWarning("Instância " + port + " já existe\n");
      } else {
        printError("Erro ao criar instância " + port + ": " + e.message);
        throw e;
      }
    }
  });
  
  // ==============================================
  // PHASE 2: CONFIGURE PRIMARY INSTANCES
  // ==============================================
  printPhase(2, "CONFIGURAÇÃO DAS INSTÂNCIAS PRIMÁRIAS");
  
  CONFIG.primaryPorts.forEach((port, index) => {
    try {
      printInfo("Configurando instância " + port + " para clustering...");
      
      dba.configureInstance("root:" + CONFIG.password + "@localhost:" + port, { 
        clusterAdmin: 'root',
        restart: false
      });
      
      printSuccess("Instância " + port + " configurada (" + (index + 1) + "/" + CONFIG.primaryPorts.length + ")\n");
      sleep(1);
    } catch (e) {
      printError("Erro ao configurar instância " + port + ": " + e.message);
      throw e;
    }
  });
  
  // ==============================================
  // PHASE 3: CLUSTER CREATION
  // ==============================================
  printPhase(3, "CRIAÇÃO DO CLUSTER INNODB");
  
  let cluster;
  try {
    printInfo("Conectando à instância primária (" + firstPrimaryPort + ")...");
    shell.connect("root:" + CONFIG.password + "@localhost:" + firstPrimaryPort);
    printSuccess("Conectado à instância primária\n");
    
    try {
      printInfo("Verificando se cluster '" + CONFIG.clusterName + "' já existe...");
      cluster = dba.getCluster(CONFIG.clusterName);
      printSuccess("Cluster '" + CONFIG.clusterName + "' existente carregado\n");
    } catch {
      printInfo("Criando novo cluster '" + CONFIG.clusterName + "'...");
      
      cluster = dba.createCluster(CONFIG.clusterName, {
        multiPrimary: false,
        force: true,
        gtidSetIsComplete: true
      });
      
      printSuccess("Cluster '" + CONFIG.clusterName + "' criado com sucesso\n");
      printInfo("Aguardando estabilização do cluster primário...\n");
      sleep(CONFIG.timeouts.clusterCreation);
      
      if (checkClusterHealth(cluster)) {
        printSuccess("Cluster primário está funcionando corretamente\n");
      } else {
        printWarning("Cluster primário pode não estar completamente estável\n");
      }
    }
    
  } catch (e) {
    printError("Erro na criação/carregamento do cluster: " + e.message);
    throw e;
  }
  
  // ==============================================
  // PHASE 4: ADD SECONDARY INSTANCES TO CLUSTER
  // ==============================================
  printPhase(4, "ADIÇÃO DAS INSTÂNCIAS SECUNDÁRIAS AO CLUSTER");
  
  const secondaryPorts = CONFIG.primaryPorts.slice(1);
  let addedCount = 0;
  
  secondaryPorts.forEach((port, index) => {
    let retryCount = 0;
    const maxRetries = 3;
    let added = false;
    
    while (!added && retryCount < maxRetries) {
      try {
        retryCount++;
        printInfo("Adicionando instância " + port + " ao cluster (tentativa " + retryCount + "/" + maxRetries + ")...");
        
        const currentStatus = cluster.status();
        const instanceKey = "127.0.0.1:" + port;
        
        if (currentStatus.defaultReplicaSet.topology[instanceKey]) {
          printWarning("Instância " + port + " já está no cluster\n");
          added = true;
          addedCount++;
          break;
        }
        
        // Traditional addInstance using clone-based recovery
        cluster.addInstance("root:" + CONFIG.password + "@localhost:" + port, {
          recoveryMethod: 'clone'
        });
        
        printSuccess("Instância " + port + " adicionada ao cluster (" + (index + 1) + "/" + secondaryPorts.length + ")\n");
        added = true;
        addedCount++;
        
        printInfo("Aguardando sincronização da instância " + port + "...\n");
        sleep(CONFIG.timeouts.instanceAdd);
        
        // Check the instance status after adding it
        let instanceOnline = false;
        let checkCount = 0;
        const maxChecks = 10;
        
        while (!instanceOnline && checkCount < maxChecks) {
          checkCount++;
          const status = cluster.status();
          const instanceStatus = status.defaultReplicaSet.topology[instanceKey];
          
          if (instanceStatus && instanceStatus.status === 'ONLINE') {
            printSuccess("Instância " + port + " está ONLINE no cluster\n");
            instanceOnline = true;
          } else if (instanceStatus && instanceStatus.status === 'RECOVERING') {
            printInfo("Instância " + port + " está em RECOVERING, aguardando... (" + checkCount + "/" + maxChecks + ")\n");
            sleep(5);
          } else {
            printWarning("Instância " + port + " status: " + (instanceStatus ? instanceStatus.status : "DESCONHECIDO") + "\n");
            sleep(5);
          }
        }
        
      } catch (e) {
        printError("Erro ao adicionar instância " + port + " (tentativa " + retryCount + "): " + e.message + "\n");
        
        if (retryCount < maxRetries) {
          printInfo("Tentando novamente em 10 segundos...\n");
          sleep(10);
          
          try {
            printInfo("Tentando rejoin da instância " + port + "...");
            cluster.rejoinInstance("root:" + CONFIG.password + "@localhost:" + port);
            printSuccess("Instância " + port + " rejoin bem-sucedido\n");
            added = true;
            addedCount++;
          } catch (rejoinErr) {
            printWarning("Rejoin falhou: " + rejoinErr.message + "\n");
          }
        }
      }
    }
    
    if (!added) {
      printError("Falha ao adicionar instância " + port + " após " + maxRetries + " tentativas");
      printWarning("Continuando com as próximas instâncias...\n");
    }
  });
  
  printInfo("Total de instâncias secundárias adicionadas: " + addedCount + "/" + secondaryPorts.length);
  printInfo("Aguardando sincronização completa do cluster...\n");
  sleep(CONFIG.timeouts.stabilization);
  
  printInfo("Verificando status do cluster após adição de instâncias...");
  const clusterStatusAfterAdd = cluster.status();
  const topologyCount = Object.keys(clusterStatusAfterAdd.defaultReplicaSet.topology).length;
  printInfo("Total de nós no cluster: " + topologyCount + "\n");
  
  if (topologyCount < 4) {
    printWarning("ATENÇÃO: Cluster tem apenas " + topologyCount + " nós, esperado 4");
    printInfo("Tentando rescan do cluster...\n");
    cluster.rescan();
  }
  
  // ==============================================
  // PHASE 5: CONFIGURE INSTANCE WEIGHTS
  // ==============================================
  printPhase(5, "CONFIGURAÇÃO DE PESOS DAS INSTÂNCIAS");
  
  try {
    Object.entries(CONFIG.weights).forEach(([port, weight]) => {
      try {
        cluster.setInstanceOption("127.0.0.1:" + port, 'memberWeight', weight);
        printSuccess("Peso " + weight + " configurado para instância " + port);
      } catch (e) {
        printWarning("Erro ao configurar peso para " + port + ": " + e.message);
      }
    });
    print(""); // blank line
    printSuccess("Configuração de pesos concluída\n");
  } catch (e) {
    printWarning("Erro geral na configuração de pesos: " + e.message + "\n");
  }
  
  // ==============================================
  // PHASE 5.5: CREATE REPLICATION USERS ON PRIMARY
  // ==============================================
  printPhase(5.5, "CRIAÇÃO DE USUÁRIOS DE REPLICAÇÃO");
  
  try {
    printInfo("Criando usuário de replicação na instância primária (" + firstPrimaryPort + ")...");
    const primarySession = mysql.getSession("root:" + CONFIG.password + "@localhost:" + firstPrimaryPort);
    
    // Create the replication user with all required privileges
    primarySession.runSql("CREATE USER IF NOT EXISTS '" + CONFIG.replicationUser.username + "'@'%' IDENTIFIED BY '" + CONFIG.replicationUser.password + "'");
    primarySession.runSql("GRANT REPLICATION SLAVE ON *.* TO '" + CONFIG.replicationUser.username + "'@'%'");
    primarySession.runSql("GRANT BACKUP_ADMIN ON *.* TO '" + CONFIG.replicationUser.username + "'@'%'");
    primarySession.runSql("GRANT CLONE_ADMIN ON *.* TO '" + CONFIG.replicationUser.username + "'@'%'");
    primarySession.runSql("GRANT SELECT ON *.* TO '" + CONFIG.replicationUser.username + "'@'%'");
    primarySession.runSql("FLUSH PRIVILEGES");
    primarySession.close();
    
    printSuccess("Usuário de replicação criado com sucesso na instância primária\n");
    
    // Wait for the user to propagate to the secondary nodes
    printInfo("Aguardando propagação do usuário para os nós secundários...\n");
    sleep(5);
    
    // Confirm the user was propagated to the secondary nodes
    const secondaryPortsCheck = CONFIG.primaryPorts.slice(1);
    secondaryPortsCheck.forEach(port => {
      try {
        const testSession = mysql.getSession("root:" + CONFIG.password + "@localhost:" + port);
        const result = testSession.runSql("SELECT user FROM mysql.user WHERE user = '" + CONFIG.replicationUser.username + "'");
        if (result.fetchOne()) {
          printSuccess("Usuário de replicação confirmado no nó " + port);
        } else {
          printWarning("Usuário de replicação não encontrado no nó " + port);
        }
        testSession.close();
      } catch (e) {
        printWarning("Não foi possível verificar usuário no nó " + port + ": " + e.message);
      }
    });
    print(""); // blank line
    
  } catch (e) {
    printError("Erro ao criar usuário de replicação: " + e.message);
    printInfo("Continuando sem usuário de replicação dedicado...\n");
  }
  
  // ==============================================
  // PHASE 6: DEPLOY AND CONFIGURE READ REPLICAS
  // ==============================================
  printPhase(6, "CONFIGURAÇÃO DAS RÉPLICAS DE LEITURA");
  
  // First, check which nodes are actually present in the cluster
  const currentClusterStatus = cluster.status();
  printInfo("Verificando nós disponíveis no cluster para réplicas...\n");
  
  for (let index = 0; index < REPLICA_MAPPING.length; index++) {
    const replica = REPLICA_MAPPING[index];
    
    try {
      printInfo("Processando réplica " + replica.port + " para fonte " + replica.source + "...\n");
      
      // Check that the source node is available first
      const sourceKey = "127.0.0.1:" + replica.source;
      const sourceNode = currentClusterStatus.defaultReplicaSet.topology[sourceKey];
      
      // If the source is not in the cluster, skip this replica
      if (!sourceNode) {
        printWarning("Nó fonte " + replica.source + " não está no cluster, pulando réplica " + replica.port + "\n");
        continue;
      }
      
      // If the source is not ONLINE, skip this replica
      if (sourceNode.status !== 'ONLINE') {
        printWarning("Nó fonte " + replica.source + " está " + sourceNode.status + ", pulando réplica " + replica.port + "\n");
        continue;
      }
      
      printInfo("Nó fonte " + replica.source + " está ONLINE, criando réplica " + replica.port + "...\n");
      
      // STEP 1: Deploy the replica instance
      printInfo("- Criando instância réplica " + replica.port + "...");
      dba.deploySandboxInstance(replica.port, { 
        password: CONFIG.password,
        sandboxDir: CONFIG.sandboxPath
      });
      
      if (!waitForInstanceReady(replica.port)) {
        throw new Error("Réplica " + replica.port + " não ficou pronta");
      }
      
      // STEP 2: Configure the replica instance
      printInfo("- Configurando instância réplica " + replica.port + "...");
      dba.configureInstance("root:" + CONFIG.password + "@localhost:" + replica.port, { 
        clusterAdmin: 'root',
        restart: false
      });
      
      // STEP 3: If needed, temporarily disable super-read-only on the source node
      let needsReadOnlyDisable = false;
      if (replica.source !== firstPrimaryPort) {
        needsReadOnlyDisable = true;
        try {
          printInfo("- Desabilitando temporariamente super-read-only no nó " + replica.source + "...");
          const sourceSession = mysql.getSession("root:" + CONFIG.password + "@localhost:" + replica.source);
          sourceSession.runSql("SET GLOBAL super_read_only = 0");
          
          // Create/verify the replication user on the secondary node
          sourceSession.runSql("CREATE USER IF NOT EXISTS '" + CONFIG.replicationUser.username + "'@'localhost' IDENTIFIED BY '" + CONFIG.replicationUser.password + "'");
          sourceSession.runSql("GRANT REPLICATION SLAVE ON *.* TO '" + CONFIG.replicationUser.username + "'@'localhost'");
          sourceSession.runSql("FLUSH PRIVILEGES");
          
          sourceSession.close();
          printSuccess("Super-read-only desabilitado temporariamente no nó " + replica.source);
        } catch (e) {
          printWarning("Não foi possível desabilitar super-read-only no nó " + replica.source + ": " + e.message);
          needsReadOnlyDisable = false;
        }
      }
      
      sleep(3);
      
      // STEP 4: Add the replica to the cluster
      printInfo("- Adicionando " + replica.port + " como réplica de leitura anexada ao nó " + replica.source + "...");
      
      try {
        // Attach the replica specifically to its source node
        cluster.addReplicaInstance("root:" + CONFIG.password + "@localhost:" + replica.port, {
          label: replica.label,
          recoveryMethod: 'clone',
          replicationSources: ["127.0.0.1:" + replica.source]
        });
        
        printSuccess("Réplica " + replica.port + " configurada e anexada ao nó " + replica.source + " (" + (index + 1) + "/" + REPLICA_MAPPING.length + ")\n");
        sleep(CONFIG.timeouts.recovery);
        
      } catch (replicaErr) {
        printError("Erro ao adicionar réplica " + replica.port + ": " + replicaErr.message);
        
        // Try a fallback method if the first attempt fails
        try {
          printInfo("Tentando método alternativo para adicionar réplica...");
          cluster.addReplicaInstance("root:" + CONFIG.password + "@localhost:" + replica.port, {
            label: replica.label,
            recoveryMethod: 'clone'
          });
          printSuccess("Réplica " + replica.port + " adicionada com método alternativo\n");
        } catch (altErr) {
          printError("Método alternativo também falhou: " + altErr.message + "\n");
        }
      }
      
      // STEP 5: Re-enable super-read-only if it was disabled
      if (needsReadOnlyDisable) {
        try {
          printInfo("- Reabilitando super-read-only no nó " + replica.source + "...");
          const sourceSession = mysql.getSession("root:" + CONFIG.password + "@localhost:" + replica.source);
          sourceSession.runSql("SET GLOBAL super_read_only = 1");
          sourceSession.close();
          printSuccess("Super-read-only reabilitado no nó " + replica.source + "\n");
        } catch (e) {
          printWarning("Não foi possível reabilitar super-read-only no nó " + replica.source + ": " + e.message + "\n");
        }
      }
      
    } catch (e) {
      printError("Erro na configuração da réplica " + replica.port + ": " + e.message + "\n");
      
      try {
        safeKillSandbox(replica.port);
        safeDeleteSandbox(replica.port);
        printInfo("Limpeza da réplica " + replica.port + " concluída\n");
      } catch (cleanupErr) {
        printWarning("Erro na limpeza da réplica " + replica.port + ": " + cleanupErr.message + "\n");
      }
    }
  }
  
  // FINAL STEP: Ensure super-read-only is enabled on all secondary nodes
  printInfo("Verificando configuração final de super-read-only...\n");
  const secondaryPortsFinal = CONFIG.primaryPorts.slice(1);
  secondaryPortsFinal.forEach(port => {
    try {
      const session = mysql.getSession("root:" + CONFIG.password + "@localhost:" + port);
      const result = session.runSql("SELECT @@super_read_only");
      const row = result.fetchOne();
      if (row[0] === 0) {
        session.runSql("SET GLOBAL super_read_only = 1");
        printSuccess("Super-read-only reabilitado no nó " + port);
      } else {
        printInfo("Super-read-only já está habilitado no nó " + port);
      }
      session.close();
    } catch (e) {
      printWarning("Não foi possível verificar super-read-only no nó " + port + ": " + e.message);
    }
  });
  print(""); // blank line
  
  // ==============================================
  // PHASE 7: FINAL VERIFICATION AND STATUS
  // ==============================================
  printPhase(7, "VERIFICAÇÃO FINAL E STATUS");
  
  try {
    printInfo("Aguardando estabilização final...\n");
    sleep(CONFIG.timeouts.stabilization);
    
    print("\n📊 STATUS COMPLETO DO CLUSTER:");
    print("=" + "=".repeat(70) + "\n");
    
    try {
      const clusterStatus = cluster.status({extended: true});
      print(JSON.stringify(clusterStatus, null, 2));
      print("\n"); // blank line
      
      const defaultReplicaSet = clusterStatus.defaultReplicaSet;
      print("🎯 ANÁLISE DO STATUS:");
      print("• Status Geral: " + defaultReplicaSet.status);
      print("• Modo: " + (defaultReplicaSet.mode || 'Single-Primary'));
      print("• SSL Mode: " + (defaultReplicaSet.ssl || 'N/A'));
      print("\n"); // blank line
      
      const topology = defaultReplicaSet.topology;
      const statusCount = {};
      let onlineNodes = 0;
      let totalReplicas = 0;
      const replicaDetails = [];
      
      Object.entries(topology).forEach(([key, instance]) => {
        const status = instance.status;
        statusCount[status] = (statusCount[status] || 0) + 1;
        
        if (status === 'ONLINE') {
          onlineNodes++;
        }
        
        if (instance.readReplicas) {
          const replicaCount = Object.keys(instance.readReplicas).length;
          totalReplicas += replicaCount;
          if (replicaCount > 0) {
            Object.entries(instance.readReplicas).forEach(([replicaKey, replicaInfo]) => {
              replicaDetails.push("  • " + key + " → " + replicaKey + " (" + replicaInfo.status + ")");
            });
          }
        }
      });
      
      print("📊 RESUMO POR STATUS:");
      Object.entries(statusCount).forEach(([status, count]) => {
        print("• " + status + ": " + count + " instância(s)");
      });
      print("\n"); // blank line
      
      print("📈 ESTATÍSTICAS DO CLUSTER:");
      print("• Nós ONLINE no cluster: " + onlineNodes);
      print("• Total de réplicas de leitura: " + totalReplicas);
      print("• Tolerância a falhas: " + (onlineNodes >= 3 ? "SIM" : "NÃO"));
      print("\n"); // blank line
      
      if (replicaDetails.length > 0) {
        print("📚 RÉPLICAS DE LEITURA ANEXADAS:");
        replicaDetails.forEach(detail => print(detail));
        print("\n"); // blank line
      }
      
    } catch (e) {
      printError("Erro ao obter status do cluster: " + e.message + "\n");
    }
    
    print("🔗 TESTE DE CONECTIVIDADE:");
    print("=" + "=".repeat(70) + "\n");
    CONFIG.primaryPorts.forEach(port => {
      try {
        const testSession = mysql.getSession("root:" + CONFIG.password + "@localhost:" + port);
        const result = testSession.runSql("SELECT @@hostname, @@port, @@server_id");
        const row = result.fetchOne();
        printSuccess("Porta " + port + ": Conectividade OK - Server ID: " + row[2]);
        testSession.close();
      } catch (e) {
        printError("Porta " + port + ": Erro de conectividade - " + e.message);
      }
    });
    print("\n"); // blank line
    
    print("🔗 TESTE DE CONECTIVIDADE DAS RÉPLICAS:");
    print("=" + "=".repeat(70) + "\n");
    CONFIG.replicaPorts.forEach(port => {
      try {
        const testSession = mysql.getSession("root:" + CONFIG.password + "@localhost:" + port);
        const result = testSession.runSql("SELECT @@hostname, @@port, @@server_id");
        const row = result.fetchOne();
        printSuccess("Réplica " + port + ": Conectividade OK - Server ID: " + row[2]);
        testSession.close();
      } catch (e) {
        printWarning("Réplica " + port + ": Não disponível");
      }
    });
    print("\n"); // blank line
    
  } catch (e) {
    printWarning("Erro na verificação final: " + e.message + "\n");
  }
  
  // ==============================================
  // FINAL SUMMARY
  // ==============================================
  print("\n" + "=".repeat(80));
  print("🎉 CONFIGURAÇÃO CONCLUÍDA COM SUCESSO! 🎉");
  print("=".repeat(80) + "\n");
  
  print("📋 RESUMO DA CONFIGURAÇÃO:");
  print("-".repeat(70));
  print("• Cluster Name: " + CONFIG.clusterName);
  print("• Instâncias Primárias: " + CONFIG.primaryPorts.length + " (" + CONFIG.primaryPorts.join(', ') + ")");
  print("• Réplicas de Leitura: " + REPLICA_MAPPING.length + " (" + REPLICA_MAPPING.map(r => r.port).join(', ') + ")");
  print("• Total de Instâncias: " + CONFIG.ports.length);
  print("• Arquitetura: 4-Node Cluster + 4 Read Replicas (1:1)");
  print("\n"); // blank line
  
  print("🔗 MAPEAMENTO DE RÉPLICAS:");
  print("-".repeat(70));
  REPLICA_MAPPING.forEach(replica => {
    print("• Nó " + replica.source + " → Réplica " + replica.port + " (" + replica.label + ")");
  });
  print("\n"); // blank line
  
  print("⚖️  PESOS CONFIGURADOS:");
  print("-".repeat(70));
  Object.entries(CONFIG.weights).forEach(([port, weight]) => {
    print("• Porta " + port + ": Peso " + weight);
  });
  print("\n"); // blank line
  
  print("🚀 PRÓXIMOS PASSOS:");
  print("-".repeat(70));
  print("• Configurar MySQL Router para balanceamento de carga");
  print("• Implementar monitoramento e alertas");
  print("• Configurar backups automatizados");
  print("• Testar failover e recuperação");
  print("• Ajustar configurações de performance conforme necessário");
  print("\n"); // blank line
  
  print("💡 COMANDOS ÚTEIS:");
  print("-".repeat(70));
  print("• Status do cluster: cluster.status({extended: true})");
  print("• Conectar ao cluster: shell.connect('root@localhost:3307')");
  print("• Obter cluster: dba.getCluster('" + CONFIG.clusterName + "')");
  print("• Rescan do cluster: cluster.rescan()");
  print("• Listar routers registrados: cluster.listRouters()");
  print("\n"); // blank line
  
  print("📋 COMANDOS PARA MONITORAMENTO (macOS/Linux):");
  print("-".repeat(70));
  print("# Monitorar log em tempo real:");
  print("tail -f /tmp/cluster_setup.log");
  print("");
  print("# Verificar portas em uso:");
  print("lsof -i -P | grep LISTEN | grep :33");
  print("");
  print("# Verificar processos MySQL:");
  print("ps aux | grep mysql");
  print("\n"); // blank line
  
  print("=".repeat(80));
  printSuccess("✨ Script executado com sucesso! ✨");
  print("=".repeat(80) + "\n");

} catch (mainErr) {
  // ==============================================
  // EMERGENCY ERROR HANDLING
  // ==============================================
  print("\n" + "=".repeat(80));
  print("🚨 ERRO CRÍTICO DETECTADO - INICIANDO LIMPEZA DE EMERGÊNCIA 🚨");
  print("=".repeat(80) + "\n");
  
  printError("ERRO PRINCIPAL: " + mainErr.message);
  printError("STACK TRACE: " + (mainErr.stack || 'N/A') + "\n");
  
  printInfo("Executando limpeza de emergência...\n");
  
  try {
    try {
      const emergencyCluster = dba.getCluster();
      if (emergencyCluster) {
        emergencyCluster.dissolve({ force: true });
        printInfo("Cluster dissolvido durante limpeza de emergência\n");
      }
    } catch (e) {
      printWarning("Erro ao dissolver cluster: " + e.message + "\n");
    }
    
    printInfo("Removendo todas as instâncias sandbox...");
    CONFIG.ports.forEach(port => {
      safeKillSandbox(port);
      safeDeleteSandbox(port);
    });
    print("\n"); // blank line
    
    safeCleanDirectories();
    
    printSuccess("Limpeza de emergência concluída\n");
    
  } catch (emergencyErr) {
    printError("Erro durante limpeza de emergência: " + emergencyErr.message + "\n");
  }
  
  print("💡 SUGESTÕES PARA RESOLUÇÃO:");
  print("-".repeat(70));
  print("• Verifique se as portas estão disponíveis: lsof -i -P | grep :33");
  print("• Confirme se o MySQL Shell tem permissões adequadas");
  print("• Verifique a conectividade de rede");
  print("• Analise os logs do MySQL para erros específicos");
  print("• Execute o script novamente após corrigir os problemas");
  print("• Verifique se há processos MySQL em execução: ps aux | grep mysql");
  print("• Limpe manualmente o diretório: rm -rf " + CONFIG.sandboxPath);
  print("\n"); // blank line
  
  print("🔧 COMANDOS DE LIMPEZA MANUAL:");
  print("-".repeat(70));
  print("# Parar e remover todas as instâncias:");
  print("for port in 3307 3310 3320 3330 3340 3350 3360 3370; do");
  print("  mysqlsh --js -e \"try{dba.killSandboxInstance($port)}catch(e){}\"");
  print("  mysqlsh --js -e \"try{dba.deleteSandboxInstance($port)}catch(e){}\"");
  print("done");
  print("");
  print("# Limpar diretório de sandboxes:");
  print("rm -rf ~/mysql-sandboxes\n");
  
  throw mainErr;
}
┌[acaciolr☮MacBook-Pro-de-Acacio.local]-(~/Library/Mobile Documents/com~apple~CloudDocs/DBA/DBA Scripts/MySQL)
└> mysqlsh --file mysql_innodb_cluster_macOS_mb.js --log-level=8 --log-file=/tmp/cluster.log

================================================================================
PHASE 0: LIMPEZA COMPLETA DO AMBIENTE
================================================================================
ℹ️  Verificando cluster existente...
⚠️  Nenhum cluster ativo encontrado: Iniciando nova configuração
ℹ️  Removendo todas as instâncias sandbox...
Killing MySQL instance...

Instance localhost:3307 successfully killed.

ℹ️  Instância 3307 encerrada
Killing MySQL instance...

Instance localhost:3310 successfully killed.

ℹ️  Instância 3310 encerrada
Killing MySQL instance...

Instance localhost:3320 successfully killed.

ℹ️  Instância 3320 encerrada
Killing MySQL instance...

Instance localhost:3330 successfully killed.

ℹ️  Instância 3330 encerrada
Killing MySQL instance...

Instance localhost:3340 successfully killed.

ℹ️  Instância 3340 encerrada
Killing MySQL instance...

Killing MySQL instance...

Killing MySQL instance...

Deleting MySQL instance...

Instance localhost:3307 successfully deleted.

ℹ️  Instância 3307 removida
Deleting MySQL instance...

Instance localhost:3310 successfully deleted.

ℹ️  Instância 3310 removida
Deleting MySQL instance...

Instance localhost:3320 successfully deleted.

ℹ️  Instância 3320 removida
Deleting MySQL instance...

Instance localhost:3330 successfully deleted.

ℹ️  Instância 3330 removida
Deleting MySQL instance...

Instance localhost:3340 successfully deleted.

ℹ️  Instância 3340 removida
Deleting MySQL instance...

Deleting MySQL instance...

Deleting MySQL instance...
ℹ️  Comando de limpeza preparado: rm -rf /Users/acaciolr/mysql-sandboxes
✅ Preparação de limpeza de diretórios concluída
⏳ Aguardando 10 segundos...
✅ LIMPEZA CONCLUÍDA

================================================================================
PHASE 1: CRIAÇÃO DAS INSTÂNCIAS PRIMÁRIAS
================================================================================
ℹ️  Criando instância primária 3307...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3307

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3307 successfully deployed and started.
Use shell.connect('root@localhost:3307') to connect to the instance.

✅ Instância primária 3307 criada e pronta (1/4)
⏳ Aguardando 2 segundos...
ℹ️  Criando instância primária 3310...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3310

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3310 successfully deployed and started.
Use shell.connect('root@localhost:3310') to connect to the instance.

✅ Instância primária 3310 criada e pronta (2/4)
⏳ Aguardando 2 segundos...
ℹ️  Criando instância primária 3320...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3320

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3320 successfully deployed and started.
Use shell.connect('root@localhost:3320') to connect to the instance.

✅ Instância primária 3320 criada e pronta (3/4)
⏳ Aguardando 2 segundos...
ℹ️  Criando instância primária 3330...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3330

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3330 successfully deployed and started.
Use shell.connect('root@localhost:3330') to connect to the instance.

✅ Instância primária 3330 criada e pronta (4/4)
⏳ Aguardando 2 segundos...

================================================================================
PHASE 2: CONFIGURAÇÃO DAS INSTÂNCIAS PRIMÁRIAS
================================================================================
ℹ️  Configurando instância 3307 para clustering...
Configuring local MySQL instance listening at port 3307 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3307
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3307' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
✅ Instância 3307 configurada (1/4)
⏳ Aguardando 1 segundos...
ℹ️  Configurando instância 3310 para clustering...
Configuring local MySQL instance listening at port 3310 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3310
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3310' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
✅ Instância 3310 configurada (2/4)
⏳ Aguardando 1 segundos...
ℹ️  Configurando instância 3320 para clustering...
Configuring local MySQL instance listening at port 3320 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3320
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3320' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
✅ Instância 3320 configurada (3/4)
⏳ Aguardando 1 segundos...
ℹ️  Configurando instância 3330 para clustering...
Configuring local MySQL instance listening at port 3330 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3330
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3330' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
✅ Instância 3330 configurada (4/4)
⏳ Aguardando 1 segundos...

================================================================================
PHASE 3: CRIAÇÃO DO CLUSTER INNODB
================================================================================
ℹ️  Conectando à instância primária (3307)...
✅ Conectado à instância primária
ℹ️  Verificando se cluster 'my-cluster-db-v5' já existe...
ERROR: Command not available on an unmanaged standalone instance.
ℹ️  Criando novo cluster 'my-cluster-db-v5'...
A new InnoDB Cluster will be created on instance '127.0.0.1:3307'.

Validating instance configuration at localhost:3307...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3307

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3307'. Use the localAddress option to override.

* Checking connectivity and SSL configuration...

Creating InnoDB Cluster 'my-cluster-db-v5' on '127.0.0.1:3307'...

Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

✅ Cluster 'my-cluster-db-v5' criado com sucesso
ℹ️  Aguardando estabilização do cluster primário...
⏳ Aguardando 30 segundos...
ℹ️  Status do cluster: OK_NO_TOLERANCE
✅ Cluster primário está funcionando corretamente

================================================================================
PHASE 4: ADIÇÃO DAS INSTÂNCIAS SECUNDÁRIAS AO CLUSTER
================================================================================
ℹ️  Adicionando instância 3310 ao cluster (tentativa 1/3)...

Clone based recovery selected through the recoveryMethod option

Validating instance configuration at localhost:3310...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3310

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3310'. Use the localAddress option to override.

* Checking connectivity and SSL configuration...

A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3310 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3310 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3310 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 73.84 MB transferred in about 1 second (~73.84 MB/s)

State recovery already finished for '127.0.0.1:3310'

The instance '127.0.0.1:3310' was successfully added to the cluster.

✅ Instância 3310 adicionada ao cluster (1/3)
ℹ️  Aguardando sincronização da instância 3310...
⏳ Aguardando 20 segundos...
✅ Instância 3310 está ONLINE no cluster
ℹ️  Adicionando instância 3320 ao cluster (tentativa 1/3)...

Clone based recovery selected through the recoveryMethod option

Validating instance configuration at localhost:3320...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3320

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3320'. Use the localAddress option to override.

* Checking connectivity and SSL configuration...

A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3320 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3320 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3320 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 73.82 MB transferred in about 1 second (~73.82 MB/s)

State recovery already finished for '127.0.0.1:3320'

The instance '127.0.0.1:3320' was successfully added to the cluster.

✅ Instância 3320 adicionada ao cluster (2/3)
ℹ️  Aguardando sincronização da instância 3320...
⏳ Aguardando 20 segundos...
✅ Instância 3320 está ONLINE no cluster
ℹ️  Adicionando instância 3330 ao cluster (tentativa 1/3)...

Clone based recovery selected through the recoveryMethod option

Validating instance configuration at localhost:3330...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3330

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3330'. Use the localAddress option to override.

* Checking connectivity and SSL configuration...

A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3330 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3330 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3330 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 73.84 MB transferred in about 1 second (~73.84 MB/s)

State recovery already finished for '127.0.0.1:3330'

The instance '127.0.0.1:3330' was successfully added to the cluster.

✅ Instância 3330 adicionada ao cluster (3/3)
ℹ️  Aguardando sincronização da instância 3330...
⏳ Aguardando 20 segundos...
✅ Instância 3330 está ONLINE no cluster
ℹ️  Total de instâncias secundárias adicionadas: 3/3
ℹ️  Aguardando sincronização completa do cluster...
⏳ Aguardando 15 segundos...
ℹ️  Verificando status do cluster após adição de instâncias...
ℹ️  Total de nós no cluster: 4
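Each of the three additions above is a `cluster.addInstance()` call with `recoveryMethod: 'clone'`, wrapped in the retry loop that produces the "(tentativa 1/3)" messages. A sketch of that loop (helper structure and sleep values are illustrative; `os.sleep()` is the mysqlsh built-in):

```js
var secondaries = [3310, 3320, 3330];
for (var i = 0; i < secondaries.length; i++) {
    var port = secondaries[i];
    for (var attempt = 1; attempt <= 3; attempt++) {
        try {
            cluster.addInstance('root:Welcome1@127.0.0.1:' + port,
                {recoveryMethod: 'clone'});
            break;                        // success: stop retrying
        } catch (err) {
            if (attempt === 3) throw err; // give up after 3 tries
            os.sleep(5);                  // recovery timeout from the table
        }
    }
}
```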

================================================================================
PHASE 5: CONFIGURAÇÃO DE PESOS DAS INSTÂNCIAS
================================================================================
Setting the value of 'memberWeight' to '100' in the instance: '127.0.0.1:3307' ...

Successfully set the value of 'memberWeight' to '100' in the cluster member: '127.0.0.1:3307'.
✅ Peso 100 configurado para instância 3307
Setting the value of 'memberWeight' to '60' in the instance: '127.0.0.1:3310' ...

Successfully set the value of 'memberWeight' to '60' in the cluster member: '127.0.0.1:3310'.
✅ Peso 60 configurado para instância 3310
Setting the value of 'memberWeight' to '40' in the instance: '127.0.0.1:3320' ...

Successfully set the value of 'memberWeight' to '40' in the cluster member: '127.0.0.1:3320'.
✅ Peso 40 configurado para instância 3320
Setting the value of 'memberWeight' to '20' in the instance: '127.0.0.1:3330' ...

Successfully set the value of 'memberWeight' to '20' in the cluster member: '127.0.0.1:3330'.
✅ Peso 20 configurado para instância 3330
✅ Configuração de pesos concluída
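The weights are applied with `cluster.setInstanceOption('<instance>', 'memberWeight', <n>)`, and what they buy you is deterministic failover: Group Replication elects the ONLINE member with the highest `member_weight`, breaking ties by lowest server UUID. A plain-JS model of that rule, runnable outside mysqlsh (simplified: the real election also prefers lower-major-version members during rolling upgrades):

```javascript
// Simplified model of Group Replication's primary election:
// highest member_weight wins; ties break by lexicographically
// smallest server UUID.
function electPrimary(members) {
    return members
        .filter(m => m.state === 'ONLINE')
        .sort((a, b) => b.weight - a.weight ||
                        a.uuid.localeCompare(b.uuid))[0];
}

// Hypothetical view of this cluster with the primary (3307) down:
const members = [
    { uuid: 'aaa', port: 3310, weight: 60, state: 'ONLINE' },
    { uuid: 'bbb', port: 3320, weight: 40, state: 'ONLINE' },
    { uuid: 'ccc', port: 3330, weight: 20, state: 'ONLINE' },
];
console.log(electPrimary(members).port);
```

With the 100/60/40/20 ladder configured above, failovers walk down the list in a predictable order instead of landing on an arbitrary secondary.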

================================================================================
PHASE 5.5: CRIAÇÃO DE USUÁRIOS DE REPLICAÇÃO
================================================================================
ℹ️  Criando usuário de replicação na instância primária (3307)...
✅ Usuário de replicação criado com sucesso na instância primária
ℹ️  Aguardando propagação do usuário para os nós secundários...
⏳ Aguardando 5 segundos...
✅ Usuário de replicação confirmado no nó 3310
✅ Usuário de replicação confirmado no nó 3320
✅ Usuário de replicação confirmado no nó 3330
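Creating the `repl` user is plain SQL executed on the primary through the shell session; since DDL goes through Group Replication, the account propagates to the secondaries on its own, which is what the confirmation messages check. A hedged sketch (the script's actual grants may be broader):

```js
var s = shell.connect('root:Welcome1@127.0.0.1:3307');
// Written on the primary; Group Replication replicates the user
// to 3310/3320/3330 automatically.
s.runSql("CREATE USER IF NOT EXISTS 'repl'@'%' IDENTIFIED BY 'Welcome1'");
s.runSql("GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%'");
```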
================================================================================
PHASE 6: CONFIGURAÇÃO DAS RÉPLICAS DE LEITURA
================================================================================
ℹ️  Verificando nós disponíveis no cluster para réplicas...
ℹ️  Processando réplica 3340 para fonte 3307...
ℹ️  Nó fonte 3307 está ONLINE, criando réplica 3340...
ℹ️  - Criando instância réplica 3340...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3340

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3340 successfully deployed and started.
Use shell.connect('root@localhost:3340') to connect to the instance.

ℹ️  - Configurando instância réplica 3340...
Configuring local MySQL instance listening at port 3340 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3340
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3340' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3340 como réplica de leitura anexada ao nó 3307...
Setting up '127.0.0.1:3340' as a Read Replica of Cluster 'my-cluster-db-v5'.

Validating instance configuration at localhost:3340...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3340

Instance configuration is suitable.
* Checking transaction state of the instance...


Clone based recovery selected through the recoveryMethod option

* Checking connectivity and SSL configuration...

Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3340 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
* Clone process has finished: 73.84 MB transferred in about 1 second (~73.84 MB/s)

* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3340 to 127.0.0.1:3307

* Waiting for Read-Replica '127.0.0.1:3340' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%



'127.0.0.1:3340' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.

✅ Réplica 3340 configurada e anexada ao nó 3307 (1/4)
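One Phase 6 iteration is three AdminAPI calls: deploy the sandbox, configure it, then attach it with `Cluster.addReplicaInstance()` (available since MySQL Shell 8.1). A sketch of the 3340 → 3307 case, using the label and options from the mapping table (the surrounding helper flow is illustrative):

```js
dba.deploySandboxInstance(3340, {password: 'Welcome1'});
dba.configureInstance('root:Welcome1@127.0.0.1:3340');
cluster.addReplicaInstance('root:Welcome1@127.0.0.1:3340', {
    label: 'Replica_Primary_3307',
    // Pin this replica to one specific source instead of the
    // default "primary" routing:
    replicationSources: ['127.0.0.1:3307'],
    recoveryMethod: 'clone'
});
```

Because the channel is managed by the cluster, the replica shows up under `readReplicas` in `cluster.status()` instead of being a loose classic replica.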
⏳ Aguardando 10 segundos...
ℹ️  Processando réplica 3350 para fonte 3310...
ℹ️  Nó fonte 3310 está ONLINE, criando réplica 3350...
ℹ️  - Criando instância réplica 3350...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3350

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3350 successfully deployed and started.
Use shell.connect('root@localhost:3350') to connect to the instance.

ℹ️  - Configurando instância réplica 3350...
Configuring local MySQL instance listening at port 3350 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3350
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3350' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
ℹ️  - Desabilitando temporariamente super-read-only no nó 3310...
✅ Super-read-only desabilitado temporariamente no nó 3310
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3350 como réplica de leitura anexada ao nó 3310...
Setting up '127.0.0.1:3350' as a Read Replica of Cluster 'my-cluster-db-v5'.

Validating instance configuration at localhost:3350...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3350

Instance configuration is suitable.
* Checking transaction state of the instance...
NOTE: A GTID set check of the MySQL instance at '127.0.0.1:3350' determined that it is missing transactions that were purged from all cluster members.
NOTE: The target instance '127.0.0.1:3350' has not been pre-provisioned (GTID set is empty). The Shell is unable to determine whether the instance has pre-existing data that would be overwritten with clone based recovery.

Clone based recovery selected through the recoveryMethod option

* Checking connectivity and SSL configuration...

* Waiting for the donor to synchronize with PRIMARY...
** Transactions replicated  ############################################################  100%



Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3350 is being cloned from 127.0.0.1:3310
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
* Clone process has finished: 74.89 MB transferred in about 1 second (~74.89 MB/s)

* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3350 to 127.0.0.1:3310

* Waiting for Read-Replica '127.0.0.1:3350' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%



'127.0.0.1:3350' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.

✅ Réplica 3350 configurada e anexada ao nó 3310 (2/4)
⏳ Aguardando 10 segundos...
ℹ️  - Reabilitando super-read-only no nó 3310...
✅ Super-read-only reabilitado no nó 3310
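Note the difference from the 3340 case: when the clone donor is a SECONDARY, the script temporarily drops `super_read_only` on that source around the attach and restores it right after. A sketch of that toggle for the 3310 donor (this mirrors what the script does; whether it is strictly required depends on the server version):

```js
var src = shell.connect('root:Welcome1@127.0.0.1:3310');
src.runSql('SET GLOBAL super_read_only = OFF');
// ... cluster.addReplicaInstance() for 3350 runs here ...
src.runSql('SET GLOBAL super_read_only = ON');
src.close();
```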
ℹ️  Processando réplica 3360 para fonte 3320...
ℹ️  Nó fonte 3320 está ONLINE, criando réplica 3360...
ℹ️  - Criando instância réplica 3360...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3360

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3360 successfully deployed and started.
Use shell.connect('root@localhost:3360') to connect to the instance.

ℹ️  - Configurando instância réplica 3360...
Configuring local MySQL instance listening at port 3360 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3360
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3360' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
ℹ️  - Desabilitando temporariamente super-read-only no nó 3320...
✅ Super-read-only desabilitado temporariamente no nó 3320
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3360 como réplica de leitura anexada ao nó 3320...
Setting up '127.0.0.1:3360' as a Read Replica of Cluster 'my-cluster-db-v5'.

Validating instance configuration at localhost:3360...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3360

Instance configuration is suitable.
* Checking transaction state of the instance...
NOTE: A GTID set check of the MySQL instance at '127.0.0.1:3360' determined that it is missing transactions that were purged from all cluster members.
NOTE: The target instance '127.0.0.1:3360' has not been pre-provisioned (GTID set is empty). The Shell is unable to determine whether the instance has pre-existing data that would be overwritten with clone based recovery.

Clone based recovery selected through the recoveryMethod option

* Checking connectivity and SSL configuration...

* Waiting for the donor to synchronize with PRIMARY...
** Transactions replicated  ############################################################  100%



Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3360 is being cloned from 127.0.0.1:3320
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
* Clone process has finished: 74.87 MB transferred in about 1 second (~74.87 MB/s)

* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3360 to 127.0.0.1:3320

* Waiting for Read-Replica '127.0.0.1:3360' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%



'127.0.0.1:3360' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.

✅ Réplica 3360 configurada e anexada ao nó 3320 (3/4)
⏳ Aguardando 10 segundos...
ℹ️  - Reabilitando super-read-only no nó 3320...
✅ Super-read-only reabilitado no nó 3320
ℹ️  Processando réplica 3370 para fonte 3330...
ℹ️  Nó fonte 3330 está ONLINE, criando réplica 3370...
ℹ️  - Criando instância réplica 3370...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3370

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3370 successfully deployed and started.
Use shell.connect('root@localhost:3370') to connect to the instance.

ℹ️  - Configurando instância réplica 3370...
Configuring local MySQL instance listening at port 3370 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3370
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3370' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
ℹ️  - Desabilitando temporariamente super-read-only no nó 3330...
✅ Super-read-only desabilitado temporariamente no nó 3330
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3370 como réplica de leitura anexada ao nó 3330...
Setting up '127.0.0.1:3370' as a Read Replica of Cluster 'my-cluster-db-v5'.

Validating instance configuration at localhost:3370...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3370

Instance configuration is suitable.
* Checking transaction state of the instance...
NOTE: A GTID set check of the MySQL instance at '127.0.0.1:3370' determined that it is missing transactions that were purged from all cluster members.
NOTE: The target instance '127.0.0.1:3370' has not been pre-provisioned (GTID set is empty). The Shell is unable to determine whether the instance has pre-existing data that would be overwritten with clone based recovery.

Clone based recovery selected through the recoveryMethod option

* Checking connectivity and SSL configuration...

* Waiting for the donor to synchronize with PRIMARY...
** Transactions replicated  ############################################################  100%



Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3370 is being cloned from 127.0.0.1:3330
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
* Clone process has finished: 74.89 MB transferred in about 1 second (~74.89 MB/s)

* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3370 to 127.0.0.1:3330

* Waiting for Read-Replica '127.0.0.1:3370' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%



'127.0.0.1:3370' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.

✅ Réplica 3370 configurada e anexada ao nó 3330 (4/4)
⏳ Aguardando 10 segundos...
ℹ️  - Reabilitando super-read-only no nó 3330...
✅ Super-read-only reabilitado no nó 3330
ℹ️  Verificando configuração final de super-read-only...
ℹ️  Super-read-only já está habilitado no nó 3310
ℹ️  Super-read-only já está habilitado no nó 3320
ℹ️  Super-read-only já está habilitado no nó 3330
================================================================================
PHASE 7: VERIFICAÇÃO FINAL E STATUS
================================================================================
ℹ️  Aguardando estabilização final...
⏳ Aguardando 15 segundos...

📊 STATUS COMPLETO DO CLUSTER:
=======================================================================
{
  "clusterName": "my-cluster-db-v5",
  "defaultReplicaSet": {
    "GRProtocolVersion": "8.0.27",
    "communicationStack": "MYSQL",
    "groupName": "650a7be8-9275-11f0-8693-735b6f1b3cd9",
    "groupViewChangeUuid": "AUTOMATIC",
    "groupViewId": "17579693355065660:10",
    "name": "default",
    "paxosSingleLeader": "OFF",
    "primary": "127.0.0.1:3307",
    "ssl": "REQUIRED",
    "status": "OK",
    "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
    "topology": {
      "127.0.0.1:3307": {
        "address": "127.0.0.1:3307",
        "applierWorkerThreads": 4,
        "fenceSysVars": [],
        "memberId": "499c9602-9275-11f0-b1ea-d038aaac61de",
        "memberRole": "PRIMARY",
        "memberState": "ONLINE",
        "mode": "R/W",
        "readReplicas": {
          "Replica_Primary_3307": {
            "address": "127.0.0.1:3340",
            "applierStatus": "APPLIED_ALL",
            "applierThreadState": "Waiting for an event from Coordinator",
            "applierWorkerThreads": 4,
            "receiverStatus": "ON",
            "receiverThreadState": "Waiting for source to send event",
            "replicationLag": "applier_queue_applied",
            "replicationSources": [
              "127.0.0.1:3307"
            ],
            "replicationSsl": "TLS_AES_128_GCM_SHA256 TLSv1.3",
            "role": "READ_REPLICA",
            "status": "ONLINE",
            "version": "8.4.3"
          }
        },
        "replicationLag": "applier_queue_applied",
        "role": "HA",
        "status": "ONLINE",
        "version": "8.4.3"
      },
      "127.0.0.1:3310": {
        "address": "127.0.0.1:3310",
        "applierWorkerThreads": 4,
        "fenceSysVars": [
          "read_only",
          "super_read_only"
        ],
        "memberId": "51138f80-9275-11f0-b352-d6511470f888",
        "memberRole": "SECONDARY",
        "memberState": "ONLINE",
        "mode": "R/O",
        "readReplicas": {
          "Replica_Secondary_3310": {
            "address": "127.0.0.1:3350",
            "applierStatus": "APPLIED_ALL",
            "applierThreadState": "Waiting for an event from Coordinator",
            "applierWorkerThreads": 4,
            "receiverStatus": "ON",
            "receiverThreadState": "Waiting for source to send event",
            "replicationLag": "applier_queue_applied",
            "replicationSources": [
              "127.0.0.1:3310"
            ],
            "role": "READ_REPLICA",
            "status": "ONLINE",
            "version": "8.4.3"
          }
        },
        "replicationLag": "applier_queue_applied",
        "role": "HA",
        "status": "ONLINE",
        "version": "8.4.3"
      },
      "127.0.0.1:3320": {
        "address": "127.0.0.1:3320",
        "applierWorkerThreads": 4,
        "fenceSysVars": [
          "read_only",
          "super_read_only"
        ],
        "memberId": "5749c8e2-9275-11f0-a8ca-524150cc11ed",
        "memberRole": "SECONDARY",
        "memberState": "ONLINE",
        "mode": "R/O",
        "readReplicas": {
          "Replica_Tertiary_3320": {
            "address": "127.0.0.1:3360",
            "applierStatus": "APPLIED_ALL",
            "applierThreadState": "Waiting for an event from Coordinator",
            "applierWorkerThreads": 4,
            "receiverStatus": "ON",
            "receiverThreadState": "Waiting for source to send event",
            "replicationLag": "applier_queue_applied",
            "replicationSources": [
              "127.0.0.1:3320"
            ],
            "role": "READ_REPLICA",
            "status": "ONLINE",
            "version": "8.4.3"
          }
        },
        "replicationLag": "applier_queue_applied",
        "role": "HA",
        "status": "ONLINE",
        "version": "8.4.3"
      },
      "127.0.0.1:3330": {
        "address": "127.0.0.1:3330",
        "applierWorkerThreads": 4,
        "fenceSysVars": [
          "read_only",
          "super_read_only"
        ],
        "memberId": "5daecdfe-9275-11f0-8a3f-de7098aa8dc1",
        "memberRole": "SECONDARY",
        "memberState": "ONLINE",
        "mode": "R/O",
        "readReplicas": {
          "Replica_Quaternary_3330": {
            "address": "127.0.0.1:3370",
            "applierStatus": "APPLIED_ALL",
            "applierThreadState": "Waiting for an event from Coordinator",
            "applierWorkerThreads": 4,
            "receiverStatus": "ON",
            "receiverThreadState": "Waiting for source to send event",
            "replicationLag": "applier_queue_applied",
            "replicationSources": [
              "127.0.0.1:3330"
            ],
            "role": "READ_REPLICA",
            "status": "ONLINE",
            "version": "8.4.3"
          }
        },
        "replicationLag": "applier_queue_applied",
        "role": "HA",
        "status": "ONLINE",
        "version": "8.4.3"
      }
    },
    "topologyMode": "Single-Primary"
  },
  "groupInformationSourceMember": "127.0.0.1:3307",
  "metadataVersion": "2.3.0"
}
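The JSON above is exactly what `cluster.status({extended: true})` returns, so the summary the script prints next is just a walk over `defaultReplicaSet.topology`. A plain-JS sketch of that walk, runnable in Node against a status-shaped object (the sample document here is a trimmed-down stand-in for the real output):

```javascript
// Pair each cluster member with its attached read replicas,
// as in the "RÉPLICAS DE LEITURA ANEXADAS" listing.
function summarize(status) {
    const topo = status.defaultReplicaSet.topology;
    const rows = [];
    for (const [addr, member] of Object.entries(topo)) {
        for (const [label, rr] of Object.entries(member.readReplicas || {})) {
            rows.push({ source: addr, label: label, replica: rr.address });
        }
    }
    return rows;
}

// Minimal document shaped like the status output above:
const status = {
    defaultReplicaSet: {
        topology: {
            '127.0.0.1:3307': {
                readReplicas: {
                    Replica_Primary_3307: { address: '127.0.0.1:3340' }
                }
            }
        }
    }
};
console.log(summarize(status));
```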
🎯 ANÁLISE DO STATUS:
• Status Geral: OK
• Modo: Single-Primary
• SSL Mode: REQUIRED
📊 RESUMO POR STATUS:
• ONLINE: 4 instância(s)
📈 ESTATÍSTICAS DO CLUSTER:
• Nós ONLINE no cluster: 4
• Total de réplicas de leitura: 4
• Tolerância a falhas: SIM
📚 RÉPLICAS DE LEITURA ANEXADAS:
  • 127.0.0.1:3307 → Replica_Primary_3307 (ONLINE)
  • 127.0.0.1:3310 → Replica_Secondary_3310 (ONLINE)
  • 127.0.0.1:3320 → Replica_Tertiary_3320 (ONLINE)
  • 127.0.0.1:3330 → Replica_Quaternary_3330 (ONLINE)
🔗 TESTE DE CONECTIVIDADE:
=======================================================================
✅ Porta 3307: Conectividade OK - Server ID: 4288433930
✅ Porta 3310: Conectividade OK - Server ID: 646259890
✅ Porta 3320: Conectividade OK - Server ID: 3535963276
✅ Porta 3330: Conectividade OK - Server ID: 1379475883
🔗 TESTE DE CONECTIVIDADE DAS RÉPLICAS:
=======================================================================
✅ Réplica 3340: Conectividade OK - Server ID: 1064444721
✅ Réplica 3350: Conectividade OK - Server ID: 2777852472
✅ Réplica 3360: Conectividade OK - Server ID: 304947572
✅ Réplica 3370: Conectividade OK - Server ID: 2936623678
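The connectivity probe is nothing fancy: one session per port and a `SELECT @@server_id`, which is where the IDs in the listing come from. A hedged mysqlsh sketch of the idea:

```js
var ports = [3307, 3310, 3320, 3330, 3340, 3350, 3360, 3370];
for (var i = 0; i < ports.length; i++) {
    var s = shell.connect('root:Welcome1@127.0.0.1:' + ports[i]);
    var id = s.runSql('SELECT @@server_id').fetchOne()[0];
    print('Porta ' + ports[i] + ': Server ID ' + id + '\n');
    s.close();
}
```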

================================================================================
🎉 CONFIGURAÇÃO CONCLUÍDA COM SUCESSO! 🎉
================================================================================
📋 RESUMO DA CONFIGURAÇÃO:
----------------------------------------------------------------------
• Cluster Name: my-cluster-db-v5
• Instâncias Primárias: 4 (3307, 3310, 3320, 3330)
• Réplicas de Leitura: 4 (3340, 3350, 3360, 3370)
• Total de Instâncias: 8
• Arquitetura: 4-Node Cluster + 4 Read Replicas (1:1)
🔗 MAPEAMENTO DE RÉPLICAS:
----------------------------------------------------------------------
• Nó 3307 → Réplica 3340 (Replica_Primary_3307)
• Nó 3310 → Réplica 3350 (Replica_Secondary_3310)
• Nó 3320 → Réplica 3360 (Replica_Tertiary_3320)
• Nó 3330 → Réplica 3370 (Replica_Quaternary_3330)
⚖️  PESOS CONFIGURADOS:
----------------------------------------------------------------------
• Porta 3307: Peso 100
• Porta 3310: Peso 60
• Porta 3320: Peso 40
• Porta 3330: Peso 20
🚀 PRÓXIMOS PASSOS:
----------------------------------------------------------------------
• Configurar MySQL Router para balanceamento de carga
• Implementar monitoramento e alertas
• Configurar backups automatizados
• Testar failover e recuperação
• Ajustar configurações de performance conforme necessário
💡 COMANDOS ÚTEIS:
----------------------------------------------------------------------
• Status do cluster: cluster.status({extended: true})
• Conectar ao cluster: shell.connect('root@localhost:3307')
• Obter cluster: dba.getCluster('my-cluster-db-v5')
• Rescan do cluster: cluster.rescan()
• Verificar réplicas: cluster.listRouters()
📋 COMANDOS PARA MONITORAMENTO (macOS/Linux):
----------------------------------------------------------------------
# Monitorar log em tempo real:
tail -f /tmp/cluster_setup.log
# Verificar portas em uso:
lsof -i -P | grep LISTEN | grep :33
# Verificar processos MySQL:
ps aux | grep mysql
================================================================================
✅ ✨ Script executado com sucesso! ✨
================================================================================

I decided to share the idea so it wouldn't end up in /dev/null, but there are still improvements to be made. As I refine things I'll keep updating the script here; if you have improvements or suggestions, post them in the comments or reach out on social media and we can trade ideas hahaha.

Cheers.