<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>MySQL &#8211; Furushima</title>
	<atom:link href="https://furushima.com.br/blog/category/mysql/feed/" rel="self" type="application/rss+xml" />
	<link>https://furushima.com.br</link>
	<description>Database Consulting &#124; Furushima</description>
	<lastBuildDate>Sat, 27 Sep 2025 16:47:19 +0000</lastBuildDate>
	<language>pt-BR</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://furushima.com.br/wp-content/uploads/2024/02/cropped-favicon-32x32.png</url>
	<title>MySQL &#8211; Furushima</title>
	<link>https://furushima.com.br</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Transportable Tablespace in MySQL &#8211; The Brabo Guide</title>
		<link>https://furushima.com.br/blog/transportable-tablespace-no-mysql-o-guia-brabo/</link>
		
		<dc:creator><![CDATA[Acacio Lima Rocha]]></dc:creator>
		<pubDate>Sat, 27 Sep 2025 16:45:54 +0000</pubDate>
				<category><![CDATA[MySQL]]></category>
		<guid isPermaLink="false">https://furushima.com.br/?p=2903</guid>

					<description><![CDATA[Hey folks, all good, all chill, everything in order? I hope so. Today, another article about the pesky dolphin database. I'll talk about the convenience of TTS. Yes, TTS in MySQL (just like in Oracle, it also exists in MySQL). ⚠️ CONTAINS AI-IMPROVED TEXT – AND THAT'S OKAY [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Hey folks, all good, all chill, everything in order? I hope so.</p>



<p>Today, another article about the pesky dolphin database.<br><br>I'll talk about the convenience of TTS. Yes, TTS in MySQL (just like in Oracle, it also exists in MySQL).</p>



<p class="has-vivid-red-color has-text-color has-link-color wp-elements-97f38820eb6ad89bfb13592df938e187"><strong><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/26a0.png" alt="⚠" class="wp-smiley" style="height: 1em; max-height: 1em;" /> CONTAINS AI-IMPROVED TEXT – AND THAT'S OKAY (IF YOU KNOW HOW TO USE IT <img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f92d.png" alt="🤭" class="wp-smiley" style="height: 1em; max-height: 1em;" />)<img src="https://s.w.org/images/core/emoji/16.0.1/72x72/26a0.png" alt="⚠" class="wp-smiley" style="height: 1em; max-height: 1em;" /></strong></p>



<p>After wrestling with the mess above, let's get to it.</p>



<h2 class="wp-block-heading"><strong>1. Prerequisites</strong></h2>



<ol class="wp-block-list">
<li>Check and enable the configuration needed for Transportable Tablespace to work properly.</li>



<li>Have a MySQL instance <img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f913.png" alt="🤓" class="wp-smiley" style="height: 1em; max-height: 1em;" /></li>



<li>Have read my previous article on <a href="https://acaciolrdba.wordpress.com/2025/03/09/tablespace-no-mysql-%f0%9f%90%ac/" target="_blank" rel="noreferrer noopener">TABLESPACES IN MYSQL</a></li>
</ol>



<p>This buddy here checks whether InnoDB is configured to create a separate .ibd file for each table, which is essential for TTS.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
SHOW VARIABLES LIKE 'innodb_file_per_table';

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_file_per_table | ON    |
+-----------------------+-------+
</pre></div>


<p><strong>If you need to enable it:</strong></p>



<p>Enables file-per-table mode, so each newly created InnoDB table gets its own tablespace file.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
SET GLOBAL innodb_file_per_table=ON;

Query OK, 0 rows affected (0.01 sec)
</pre></div>


<h2 class="wp-block-heading"><strong>2. Example Schema</strong></h2>



<h3 class="wp-block-heading"><strong>On the source box:</strong></h3>



<p>Create a new database of BRABOS (lol, come on, that was a good one) for our tests:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
CREATE DATABASE db_brabo;

Query OK, 1 row affected (0.01 sec)
</pre></div>


<p>Select our freshly created little DB:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
USE db_brabo;

Database changed
</pre></div>


<p>Then we create a table to simulate having data in our super, mega, master, blaster production environment:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
CREATE TABLE clientes_brabo (
    id INT AUTO_INCREMENT PRIMARY KEY,
    nome VARCHAR(100) NOT NULL,
    email VARCHAR(100) UNIQUE,
    data_cadastro DATETIME DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB;

Query OK, 0 rows affected (0.01 sec)
</pre></div>


<p>Let's add a bit of data:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
INSERT INTO clientes_brabo (nome, email) VALUES 
('Zé Brabo', 'ze@brabo.com'), -- This one goes out to Marião
('DBA Master', 'dba@brabo.com');

Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0
</pre></div>


<h2 class="wp-block-heading"><strong>3. Export Process</strong></h2>



<h3 class="wp-block-heading"><strong>Step 1: Preparing for Export</strong></h3>



<p>This little guy is&nbsp;<strong>powerful</strong>&nbsp;and plays a crucial role when you want to&nbsp;<strong>move or migrate tables</strong>&nbsp;in MySQL using&nbsp;TTS:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
FLUSH TABLES clientes_brabo FOR EXPORT;

Query OK, 0 rows affected (0.00 sec)
</pre></div>


<p>What does it do?</p>



<ol class="wp-block-list">
<li>Locks the table against writes</li>



<li>Completes all pending transactions</li>



<li>Creates a .cfg file with the table's metadata</li>
</ol>
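

<p>A quick sketch of what that lock means in practice (the two-session setup is illustrative): while session 1 holds the export lock, writes from other sessions simply wait.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
-- Session 1: quiesce the table and generate the .cfg file
FLUSH TABLES clientes_brabo FOR EXPORT;

-- Session 2: this INSERT blocks until session 1 runs UNLOCK TABLES
INSERT INTO clientes_brabo (nome, email) VALUES ('Na Fila', 'fila@brabo.com');

-- Session 1: copy the .ibd/.cfg files, then release the lock
UNLOCK TABLES;
</pre></div>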



<h3 class="wp-block-heading"><strong>Step 2: Locating the Files</strong></h3>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
ls -l /var/lib/mysql/db_brabo/

-rw-r----- 1 mysql mysql  12345 Jun 10 15:30 clientes_brabo.ibd
-rw-r----- 1 mysql mysql   1024 Jun 10 15:30 clientes_brabo.cfg
</pre></div>


<p>Which files are we talking about?</p>



<ul class="wp-block-list">
<li>.ibd (the data file)</li>



<li>.cfg (metadata created by FLUSH TABLES)</li>
</ul>



<h3 class="wp-block-heading"><strong>Step 3: Backing Up the Files</strong> (if you're a DBA, you'll get it) </h3>



<p>Copy the files to a safe location before moving on (don't be childish, dammit):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
cp /var/lib/mysql/db_brabo/clientes_brabo.{ibd,cfg} /backup/
</pre></div>


<h3 class="wp-block-heading"><strong>Step 4: Releasing the Table</strong></h3>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
UNLOCK TABLES;

Query OK, 0 rows affected (0.00 sec)
</pre></div>


<p>Releases the table on the source server once the files have been copied.</p>



<h2 class="wp-block-heading"><strong>4. Import Process</strong></h2>



<h3 class="wp-block-heading"><strong>Step 1: Preparing the Target Environment</strong></h3>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
CREATE DATABASE db_brabo_destino;
USE db_brabo_destino;
</pre></div>


<p>Creates and selects the target database.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
CREATE TABLE clientes_brabo (
    id INT AUTO_INCREMENT PRIMARY KEY,
    nome VARCHAR(100) NOT NULL,
    email VARCHAR(100) UNIQUE,
    data_cadastro DATETIME DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB;

Query OK, 0 rows affected (0.00 sec)

</pre></div>


<p>Recreates the exact structure of the original table, with no data.</p>
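

<p>A handy way to guarantee the structures match exactly (a sketch; run this on the source box) is to grab the original DDL instead of retyping it:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
-- On the source server: copy the exact DDL and run it on the destination
SHOW CREATE TABLE db_brabo.clientes_brabo\G
</pre></div>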



<h3 class="wp-block-heading"><strong>Step 2: Discarding the Existing Tablespace</strong></h3>



<p>Removes the freshly created empty tablespace to prepare for the import.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
ALTER TABLE clientes_brabo DISCARD TABLESPACE;

Query OK, 0 rows affected (0.00 sec)
</pre></div>


<h3 class="wp-block-heading"><strong>Step 3: Transferring the Files</strong></h3>



<p>Copies the files into the MySQL data directory and fixes the permissions.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
cp /backup/clientes_brabo.{ibd,cfg} /var/lib/mysql/db_brabo_destino/
chown mysql:mysql /var/lib/mysql/db_brabo_destino/clientes_brabo.*
</pre></div>


<h3 class="wp-block-heading"><strong>Step 4: The Final Import</strong></h3>



<p>This command is the crucial one, check it out:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
ALTER TABLE clientes_brabo IMPORT TABLESPACE;

Query OK, 0 rows affected (0.00 sec)
</pre></div>


<p>Know why? I know you don't <img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f923.png" alt="🤣" class="wp-smiley" style="height: 1em; max-height: 1em;" />, here you go:</p>



<ol class="wp-block-list">
<li>Reads the .cfg file to validate the structure</li>



<li>Imports the data from the .ibd file</li>



<li>Rebuilds the indexes</li>
</ol>
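

<p>If you want to double-check that the destination instance really adopted the tablespace, here's a small sketch (the name format assumes the file-per-table example above):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
-- A file-per-table tablespace shows up as schema/table
SELECT NAME, SPACE_TYPE
  FROM INFORMATION_SCHEMA.INNODB_TABLESPACES
 WHERE NAME = 'db_brabo_destino/clientes_brabo';
</pre></div>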



<h3 class="wp-block-heading"><strong>Step 5: Verification</strong></h3>



<p>Now let's check the data:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
SELECT * FROM clientes_brabo;

+----+------------+---------------+---------------------+
| id | nome       | email         | data_cadastro       |
+----+------------+---------------+---------------------+
|  1 | Zé Brabo   | ze@brabo.com  | 2023-06-10 15:30:00 |
|  2 | DBA Master | dba@brabo.com | 2023-06-10 15:30:00 |
+----+------------+---------------+---------------------+
</pre></div>


<h2 class="wp-block-heading"><strong>5. Final validation</strong> of the schema</h2>



<p>Let's validate the table's integrity:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
CHECK TABLE clientes_brabo;

+---------------------------------+-------+----------+----------+
| Table                           | Op    | Msg_type | Msg_text |
+---------------------------------+-------+----------+----------+
| db_brabo_destino.clientes_brabo | check | status   | OK       |
+---------------------------------+-------+----------+----------+
</pre></div>


<p><strong>Best practices:</strong></p>



<ol class="wp-block-list">
<li>Always check version compatibility between the boxes/servers/machines, etc.</li>



<li>For large tables, consider compressing the files during the transfer</li>



<li>Keep backups of the .ibd and .cfg files until you have confirmed everything went smoothly.</li>
</ol>
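

<p>Point 2 of the list above can be as simple as gzipping the exported files before shipping them. A minimal sketch (the /tmp/tts_demo path and the fake file contents are stand-ins for your real datadir files):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
# Stand-in for /var/lib/mysql/db_brabo/ with fake exported files
mkdir -p /tmp/tts_demo
printf 'fake .ibd payload' > /tmp/tts_demo/clientes_brabo.ibd
printf 'fake .cfg payload' > /tmp/tts_demo/clientes_brabo.cfg

# -k keeps the originals; ship the resulting .gz files with scp/rsync
gzip -kf /tmp/tts_demo/clientes_brabo.ibd /tmp/tts_demo/clientes_brabo.cfg
ls -l /tmp/tts_demo/
</pre></div>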



<h2 class="wp-block-heading">My <strong>closing thoughts</strong>: </h2>



<p>This process is extremely useful for:</p>



<ul class="wp-block-list">
<li>Quick migrations between environments</li>



<li>Recovering specific tables</li>



<li>Building test environments with real data</li>
</ul>



<p>Got the gist? Now fire up your lab and hammer away AT IT.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Did it come out bad? Worse than a backup without binlogs? (FFS) <img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f602.png" alt="😂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
</blockquote>



<p><em>(Just kidding, we're in this together! Anything you need, just call the BRABOS!)</em></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Tablespace in MySQL 🐬</title>
		<link>https://furushima.com.br/blog/tablespace-no-mysql-%f0%9f%90%ac/</link>
		
		<dc:creator><![CDATA[Acacio Lima Rocha]]></dc:creator>
		<pubDate>Sat, 27 Sep 2025 16:41:52 +0000</pubDate>
				<category><![CDATA[MySQL]]></category>
		<guid isPermaLink="false">https://furushima.com.br/?p=2900</guid>

					<description><![CDATA[Hey folks, all at peace? I hope so. Today I'll cover a simple, practical topic that, just like in Oracle, also exists in MySQL: TABLESPACES. ⚠️ CONTAINS AI-IMPROVED TEXT – AND THAT'S OKAY (IF YOU KNOW HOW TO USE IT 🤭)⚠️ Is this news to you? Or did you already know that MySQL also has TABLESPACES? [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Hey folks, all at peace? I hope so.</p>



<p>Today I'll cover a simple, practical topic that, just like in Oracle, also exists in MySQL: <strong>TABLESPACES</strong>.</p>



<p class="has-vivid-red-color has-text-color has-link-color wp-elements-97f38820eb6ad89bfb13592df938e187"><strong>⚠️ CONTAINS AI-IMPROVED TEXT – AND THAT'S OKAY (IF YOU KNOW HOW TO USE IT 🤭)⚠️</strong></p>



<p>Is this news to you? Or did you already know that MySQL also has <strong>TABLESPACES</strong>? </p>



<p>Well, new to you or not, it's worth a read to learn or brush up on the details.</p>



<p>In MySQL, there are three main types of tablespaces:</p>



<ol start="1" class="wp-block-list">
<li><strong>System Tablespace</strong>: Stores system metadata and data.</li>



<li><strong>File-per-Table Tablespace</strong>: Each table is stored in its own&nbsp;<code>.ibd</code> file (which I find odd, as an Oracle DBA lol).</li>



<li><strong>General Tablespace</strong>: Lets multiple tables share the same&nbsp;<code>.ibd</code> data file.</li>
</ol>



<h3 class="wp-block-heading"><strong>Why use Tablespaces in MySQL?</strong></h3>



<p>Tablespaces offer several advantages for the disorganized DBA 🤭:</p>



<ul class="wp-block-list">
<li><strong>Flexibility</strong>: You can choose where to store the data (in a specific directory, for example).</li>



<li><strong>Performance</strong>: Tablespaces let you tune storage for specific scenarios, such as data compression.</li>



<li><strong>Simplified Management</strong>: Makes administering large data volumes easier, especially in environments with many tables.</li>



<li><strong>Compression</strong>: Tablespaces support compressed tables, reducing disk usage.</li>
</ul>



<h3 class="wp-block-heading"><strong>Types of Tablespaces in MySQL</strong></h3>



<h4 class="wp-block-heading">1.<strong> System Tablespace</strong></h4>



<p>The system tablespace is the heart of MySQL. It stores system metadata such as the data dictionary and the undo logs, and by default it lives in the&nbsp;<code>ibdata1</code> file (note that as of MySQL 8.0 the data dictionary moved to its own tablespace and undo logs default to separate undo tablespaces). Although essential to MySQL's operation, it is not recommended for user data, since it can grow indefinitely (yes, there's a nut for everything).</p>



<h4 class="wp-block-heading">2.&nbsp;<strong>File-per-Table Tablespace</strong></h4>



<p>In <strong>file-per-table</strong> mode, each table is stored in its own&nbsp;<code>.ibd</code> file. This gives you more flexibility, since you can move, copy, or drop tables individually. It also makes recovering data after a failure easier.</p>



<p>Example of creating a table in file-per-table mode:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
CREATE TABLE tabela_braba (
    cfp INT PRIMARY KEY,
    dba VARCHAR(100)
) ENGINE=InnoDB;
</pre></div>


<h4 class="wp-block-heading">3.&nbsp;<strong>General Tablespace</strong></h4>



<p>General tablespaces let multiple tables share the same&nbsp;<code>.ibd</code> file. They are ideal when you want to centralize the storage of several tables or use data compression.</p>



<p>They support the following row formats:</p>



<ul class="wp-block-list">
<li><code>REDUNDANT</code></li>



<li><code>COMPACT</code></li>



<li><code>DYNAMIC</code></li>



<li><code>COMPRESSED</code></li>
</ul>
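

<p>To actually put a COMPRESSED table inside a general tablespace, the tablespace's FILE_BLOCK_SIZE and the table's KEY_BLOCK_SIZE must form a valid pair for your page size (see the combinations table further down). A sketch assuming the default 16KB innodb_page_size; the names here are made up:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
-- General tablespace prepared for compressed tables (8K blocks)
CREATE TABLESPACE ts_comprimida
ADD DATAFILE 'ts_comprimida.ibd'
FILE_BLOCK_SIZE = 8192
ENGINE = InnoDB;

-- KEY_BLOCK_SIZE=8 pairs with FILE_BLOCK_SIZE=8192 on 16KB pages
CREATE TABLE tabela_comprimida (
    id INT PRIMARY KEY,
    payload VARCHAR(255)
) TABLESPACE ts_comprimida ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
</pre></div>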



<p>Example of creating a general tablespace:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
CREATE TABLESPACE tablespace_braba
ADD DATAFILE '/u01/para/df_brabo_01.ibd'
ENGINE=InnoDB;
</pre></div>


<p>Or you can simply run:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
CREATE TABLESPACE tablespace_braba Engine=InnoDB;
</pre></div>


<p>If the <strong>ADD DATAFILE</strong> clause is not specified when creating a tablespace, a datafile with a pretty wild name is created instead (along these lines: <em><code><strong>aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee</strong></code></em>).</p>
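

<p>If you want to discover which file name was actually generated, INFORMATION_SCHEMA.FILES will tell you (a sketch; assumes the tablespace created above):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
SELECT TABLESPACE_NAME, FILE_NAME
  FROM INFORMATION_SCHEMA.FILES
 WHERE TABLESPACE_NAME = 'tablespace_braba';
</pre></div>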



<p>And how do you add a table to the new tablespace?</p>



<p>You can simply create the table, assigning it to its tablespace:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
CREATE TABLE tabela_braba (
    cfp INT PRIMARY KEY,
    dba VARCHAR(100)
) TABLESPACE tablespace_braba;
</pre></div>


<p>But you can also move an existing table into a tablespace:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
ALTER TABLE tabela_braba TABLESPACE tablespace_braba;
</pre></div>


<p>We can validate, inspect, and understand which tables live in which tablespaces with this query:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
SELECT a.NAME AS space_name, b.NAME AS table_name
  FROM INFORMATION_SCHEMA.INNODB_TABLESPACES a
  JOIN INFORMATION_SCHEMA.INNODB_TABLES b ON a.SPACE = b.SPACE
 WHERE a.NAME LIKE 'tablespace_braba';
</pre></div>

<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
+-----------------+--------------------+
| space_name      | table_name         |
+-----------------+--------------------+
| tablespace_braba| teste/tabela_braba |
+-----------------+--------------------+
</pre></div>


<p>Just like in Oracle, you can also add datafiles to a tablespace. Heads up, though: in MySQL this only works for <strong>NDB Cluster</strong> tablespaces; InnoDB general tablespaces do not accept <strong>ALTER TABLESPACE ... ADD DATAFILE</strong> (more on that in the reference guide at the end). For an NDB tablespace it looks like this:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
ALTER TABLESPACE tablespace_braba ADD DATAFILE '/u01/oradata/df_brabo_02.dat' INITIAL_SIZE 48M ENGINE NDB;
</pre></div>


<p>In the case above we are pre-allocating 48M for the datafile, but we can also add a datafile without pre-allocating space, like this:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
ALTER TABLESPACE tablespace_braba ADD DATAFILE '/u01/oradata/df_brabo_03.dat' ENGINE NDB;
</pre></div>


<p>It is also possible to MOVE tables between the different types of tablespaces:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
ALTER TABLE tabela_braba TABLESPACE &#x5B;=] tablespace_name;
ALTER TABLE tabela_braba TABLESPACE &#x5B;=] innodb_system;
ALTER TABLE tabela_braba TABLESPACE &#x5B;=] innodb_file_per_table;
</pre></div>


<p>Can you rename it? You can!</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
ALTER TABLESPACE tablespace_braba RENAME TO tablespace_braba_01;
</pre></div>


<p class="has-red-color has-text-color has-link-color wp-elements-88ae3e7d99025178983d77c5bf3715e6">Joking aside: during a tablespace rename, every table that belongs to it takes a metadata lock 🚨.</p>



<p class="has-red-color has-text-color has-link-color wp-elements-bcfaedfd953d27d6e5c3cd5674542aaa">You also cannot RENAME while tables are being held under LOCK TABLES or FLUSH TABLES WITH READ LOCK.</p>



<p>And finally, dropping a tablespace in MySQL (it must no longer contain any tables):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
DROP TABLESPACE tablespace_braba;
</pre></div>


<h3 class="wp-block-heading"><strong>Advantages of General Tablespaces</strong></h3>



<ul class="wp-block-list">
<li><strong>Centralized Storage</strong>: Multiple tables can share the same file, simplifying management.</li>



<li><strong>Data Compression</strong>: Supports tables with the&nbsp;<code>COMPRESSED</code> row format, reducing disk usage.</li>



<li><strong>Efficiency</strong>: Ideal for scenarios where several tables have similar access patterns.</li>
</ul>



<h2 class="wp-block-heading">Allowed page size combinations for compressed tables</h2>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th>InnoDB Page Size</th><th>FILE_BLOCK_SIZE Value</th><th>KEY_BLOCK_SIZE Value</th></tr></thead><tbody><tr><th>64KB</th><td>64K (65536)</td><td>Compression is not supported</td></tr><tr><th>32KB</th><td>32K (32768)</td><td>Compression is not supported</td></tr><tr><th>16KB</th><td>16K (16384)</td><td>None. If&nbsp;<a href="https://dev.mysql.com/doc/refman/8.4/en/innodb-parameters.html#sysvar_innodb_page_size"><code>innodb_page_size</code></a>&nbsp;is equal to&nbsp;<code>FILE_BLOCK_SIZE</code>, the tablespace cannot contain a compressed table.</td></tr><tr><th>16KB</th><td>8K (8192)</td><td>8</td></tr><tr><th>16KB</th><td>4K (4096)</td><td>4</td></tr><tr><th>16KB</th><td>2K (2048)</td><td>2</td></tr><tr><th>16KB</th><td>1K (1024)</td><td>1</td></tr><tr><th>8KB</th><td>8K (8192)</td><td>None. If&nbsp;<a href="https://dev.mysql.com/doc/refman/8.4/en/innodb-parameters.html#sysvar_innodb_page_size"><code>innodb_page_size</code></a>&nbsp;is equal to&nbsp;<code>FILE_BLOCK_SIZE</code>, the tablespace cannot contain a compressed table.</td></tr><tr><th>8KB</th><td>4K (4096)</td><td>4</td></tr><tr><th>8KB</th><td>2K (2048)</td><td>2</td></tr><tr><th>8KB</th><td>1K (1024)</td><td>1</td></tr><tr><th>4KB</th><td>4K (4096)</td><td>None. If&nbsp;<a href="https://dev.mysql.com/doc/refman/8.4/en/innodb-parameters.html#sysvar_innodb_page_size"><code>innodb_page_size</code></a>&nbsp;is equal to&nbsp;<code>FILE_BLOCK_SIZE</code>, the tablespace cannot contain a compressed table.</td></tr><tr><th>4KB</th><td>2K (2048)</td><td>2</td></tr><tr><th>4KB</th><td>1K (1024)</td><td>1</td></tr></tbody></table><figcaption class="wp-element-caption"><strong>MySQL 8.4 Reference Manual &#8211; 17.6.3.3&nbsp;General Tablespaces</strong></figcaption></figure>



<h3 class="wp-block-heading"><strong>When should you use Tablespaces in the dolphin?</strong></h3>



<ul class="wp-block-list">
<li><strong>Compression scenarios</strong>: If you need to save disk space, compressed tablespaces are a great option.</li>



<li><strong>Centralized storage</strong>: For environments with many related tables, general tablespaces can simplify management.</li>



<li><strong>Placement control</strong>: If you need to store data in a specific directory (for example, on a high-performance disk [SSD, NVMe, Optane and so on]), tablespaces give you that control.</li>
</ul>



<p class="has-medium-font-size"><strong>REFERENCE GUIDE:</strong></p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
-- ============================================================================
-- MOST COMPLETE CREATE TABLESPACE COMMANDS PER ENGINE
-- ============================================================================

-- ============================================================================
-- 1. InnoDB ENGINE - REGULAR TABLESPACE
-- ============================================================================
CREATE TABLESPACE ts_innodb_complete
ADD DATAFILE '/var/lib/mysql/ts_innodb_complete.ibd'
AUTOEXTEND_SIZE = 64M
FILE_BLOCK_SIZE = 16384
ENCRYPTION = 'Y'
ENGINE = InnoDB;

-- ============================================================================
-- 2. InnoDB ENGINE - UNDO TABLESPACE
-- ============================================================================
CREATE UNDO TABLESPACE undo_ts_innodb_complete
ADD DATAFILE '/var/lib/mysql/undo_ts_innodb_complete.ibu'
AUTOEXTEND_SIZE = 128M
FILE_BLOCK_SIZE = 16384
ENCRYPTION = 'N'
ENGINE = InnoDB;

-- ============================================================================
-- 3. NDB ENGINE - COMPLETE TABLESPACE
-- ============================================================================
-- First, create the LOGFILE GROUP (prerequisite for an NDB tablespace)
CREATE LOGFILE GROUP lg_ndb_complete
ADD UNDOFILE 'undo_lg_ndb_complete.dat'
INITIAL_SIZE = 128M
UNDO_BUFFER_SIZE = 64M
ENGINE = NDB;

-- Now create the NDB TABLESPACE
CREATE TABLESPACE ts_ndb_complete
ADD DATAFILE 'ts_ndb_complete.dat'
USE LOGFILE GROUP lg_ndb_complete
AUTOEXTEND_SIZE = 32M
EXTENT_SIZE = 1M
INITIAL_SIZE = 256M
MAX_SIZE = 2G
NODEGROUP = 0
WAIT
COMMENT = 'Complete NDB tablespace for the cluster'
ENGINE = NDB;

-- ============================================================================
-- 4. NDB ENGINE - UNDO TABLESPACE
-- ============================================================================
CREATE UNDO TABLESPACE undo_ts_ndb_complete
ADD DATAFILE 'undo_ts_ndb_complete.dat'
USE LOGFILE GROUP lg_ndb_complete
AUTOEXTEND_SIZE = 16M
EXTENT_SIZE = 512K
INITIAL_SIZE = 128M
MAX_SIZE = 1G
NODEGROUP = 1
WAIT
COMMENT = 'NDB undo tablespace for transactions'
ENGINE = NDB;

-- ============================================================================
-- PARAMETER EXPLANATIONS
-- ============================================================================

/*
COMMON PARAMETERS (InnoDB and NDB):
- ADD DATAFILE: Specifies the data file
- AUTOEXTEND_SIZE: Auto-extension increment size
- ENGINE: Storage engine

InnoDB-ONLY PARAMETERS:
- FILE_BLOCK_SIZE: Block size (512, 1024, 2048, 4096, 8192, 16384, 32768, 65536)
- ENCRYPTION: Encryption ('Y' or 'N')

NDB-ONLY PARAMETERS:
- USE LOGFILE GROUP: Log file group (mandatory)
- EXTENT_SIZE: Extent size (32K-2G)
- INITIAL_SIZE: Initial size
- MAX_SIZE: Maximum size
- NODEGROUP: Node group ID
- WAIT: Waits for the operation to finish
- COMMENT: Descriptive comment

RECOMMENDED VALUES:
- FILE_BLOCK_SIZE InnoDB: 16384 (default)
- EXTENT_SIZE NDB: 1M (default)
- AUTOEXTEND_SIZE: 64M for InnoDB, 32M for NDB
*/

-- ============================================================================
-- MYSQL TABLESPACES WITH MULTIPLE DATAFILES
-- ============================================================================

/*
- InnoDB: does NOT support multiple datafiles in CREATE TABLESPACE
- NDB: DOES support multiple datafiles natively
*/

-- ============================================================================
-- 1. InnoDB - LIMITATION: ONLY 1 DATAFILE AT CREATE TIME
-- ============================================================================

-- ❌ THIS DOES NOT WORK IN InnoDB:
-- CREATE TABLESPACE ts_innodb
-- ADD DATAFILE 'file1.ibd', 'file2.ibd', 'file3.ibd'  -- ERROR!
-- ENGINE = InnoDB;

-- ✅ InnoDB: only 1 datafile at CREATE time
CREATE TABLESPACE ts_innodb_inicial
ADD DATAFILE '/var/lib/mysql/ts_innodb_file1.ibd'
AUTOEXTEND_SIZE = 64M
FILE_BLOCK_SIZE = 16384
ENCRYPTION = 'Y'
ENGINE = InnoDB;

-- To add more datafiles in InnoDB you would use ALTER afterwards:
-- NOTE: ALTER TABLESPACE ADD DATAFILE is not supported in InnoDB!
-- InnoDB uses auto-extend instead of multiple files

-- ============================================================================
-- 2. NDB - FULL SUPPORT FOR MULTIPLE DATAFILES
-- ============================================================================

-- First create the LOGFILE GROUP
CREATE LOGFILE GROUP lg_multiple
ADD UNDOFILE 'undo_multiple.dat'
INITIAL_SIZE = 128M
UNDO_BUFFER_SIZE = 64M
ENGINE = NDB;

-- ✅ NDB: multiple datafiles are supported via ALTER
CREATE TABLESPACE ts_ndb_multiple
ADD DATAFILE 'ndb_file1.dat'
USE LOGFILE GROUP lg_multiple
INITIAL_SIZE = 256M
EXTENT_SIZE = 1M
MAX_SIZE = 2G
ENGINE = NDB;

-- Add more datafiles to the existing NDB tablespace
ALTER TABLESPACE ts_ndb_multiple
ADD DATAFILE 'ndb_file2.dat'
INITIAL_SIZE = 256M
ENGINE = NDB;

ALTER TABLESPACE ts_ndb_multiple
ADD DATAFILE 'ndb_file3.dat'
INITIAL_SIZE = 256M
ENGINE = NDB;

ALTER TABLESPACE ts_ndb_multiple
ADD DATAFILE 'ndb_file4.dat'
INITIAL_SIZE = 256M
ENGINE = NDB;

-- ============================================================================
-- 3. INSPECT A TABLESPACE'S DATAFILES
-- ============================================================================

-- View tablespace information and their files
SELECT 
    TABLESPACE_NAME,
    FILE_NAME,
    FILE_TYPE,
    TOTAL_EXTENTS,
    EXTENT_SIZE,
    INITIAL_SIZE,
    MAXIMUM_SIZE,
    ENGINE
FROM INFORMATION_SCHEMA.FILES 
WHERE TABLESPACE_NAME = 'ts_ndb_multiple'
ORDER BY FILE_NAME;

-- For InnoDB, view general tablespaces
SELECT 
    SPACE,
    NAME,
    FLAG,
    ROW_FORMAT,
    PAGE_SIZE,
    SPACE_TYPE
FROM INFORMATION_SCHEMA.INNODB_TABLESPACES
WHERE NAME = 'ts_innodb_inicial';

-- ============================================================================
-- 4. PRACTICAL EXAMPLE: LOAD DISTRIBUTION IN NDB
-- ============================================================================

-- Scenario: tablespace with 4 datafiles on different disks
CREATE LOGFILE GROUP lg_distributed
ADD UNDOFILE '/disk1/mysql/undo_distributed.dat'
INITIAL_SIZE = 256M
UNDO_BUFFER_SIZE = 128M
ENGINE = NDB;

CREATE TABLESPACE ts_distributed
ADD DATAFILE '/disk2/mysql/data_file1.dat'
USE LOGFILE GROUP lg_distributed
INITIAL_SIZE = 1G
EXTENT_SIZE = 1M
MAX_SIZE = 10G
NODEGROUP = 0
ENGINE = NDB;

-- Add datafiles on different disks for performance
ALTER TABLESPACE ts_distributed
ADD DATAFILE '/disk3/mysql/data_file2.dat'
INITIAL_SIZE = 1G
ENGINE = NDB;

ALTER TABLESPACE ts_distributed
ADD DATAFILE '/disk4/mysql/data_file3.dat'
INITIAL_SIZE = 1G
ENGINE = NDB;

ALTER TABLESPACE ts_distributed
ADD DATAFILE '/disk5/mysql/data_file4.dat'
INITIAL_SIZE = 1G
ENGINE = NDB;

-- ============================================================================
-- 5. ALTERNATIVES FOR InnoDB
-- ============================================================================

/*
Since InnoDB does not support multiple datafiles per tablespace,
the alternatives are:

1. USE AUTO-EXTEND (recommended)
   - The file grows automatically as needed
   - Simpler to manage

2. MULTIPLE TABLESPACES
   - Create several tablespaces with 1 datafile each
   - Distribute tables across them

3. PARTITIONING
   - Partition large tables
   - Each partition can live in a different tablespace
*/

-- Example: multiple InnoDB tablespaces
CREATE TABLESPACE ts_innodb_part1
ADD DATAFILE '/disk1/mysql/ts_part1.ibd'
AUTOEXTEND_SIZE = 64M
ENGINE = InnoDB;

CREATE TABLESPACE ts_innodb_part2
ADD DATAFILE '/disk2/mysql/ts_part2.ibd'
AUTOEXTEND_SIZE = 64M
ENGINE = InnoDB;

CREATE TABLESPACE ts_innodb_part3
ADD DATAFILE '/disk3/mysql/ts_part3.ibd'
AUTOEXTEND_SIZE = 64M
ENGINE = InnoDB;

-- Partitioned table using multiple tablespaces
CREATE TABLE vendas_particionada (
    id INT AUTO_INCREMENT,
    data_venda DATE,
    valor DECIMAL(10,2),
    PRIMARY KEY (id, data_venda)
)
PARTITION BY RANGE (YEAR(data_venda)) (
    PARTITION p2023 VALUES LESS THAN (2024) TABLESPACE ts_innodb_part1,
    PARTITION p2024 VALUES LESS THAN (2025) TABLESPACE ts_innodb_part2,
    PARTITION p2025 VALUES LESS THAN (2026) TABLESPACE ts_innodb_part3
);

-- ============================================================================
-- FINAL SUMMARY
-- ============================================================================

/*
CAPACIDADES POR ENGINE:

InnoDB:
✗ Não suporta múltiplos datafiles no CREATE TABLESPACE
✗ Não suporta ALTER TABLESPACE ADD DATAFILE
✓ Suporta auto-extend (recomendado)
✓ Alternativa: múltiplos tablespaces + particionamento

NDB:
✓ Suporta múltiplos datafiles via ALTER TABLESPACE
✓ Ideal para distribuição de carga em cluster
✓ Permite adicionar datafiles dinamicamente
✓ Melhor controle sobre localização dos arquivos

RECOMENDAÇÃO:
- InnoDB: Use auto-extend ou particionamento
- NDB: Use múltiplos datafiles conforme necessário
*/

-- ============================================================================
-- COMANDOS ALTER TABLESPACE MAIS COMPLETOS POR ENGINE
-- ============================================================================

-- ============================================================================
-- 1. NDB ENGINE - OPERAÇÕES COM DATAFILES
-- ============================================================================

-- Adicionar datafile ao tablespace NDB
ALTER TABLESPACE ts_ndb_complete
ADD DATAFILE 'ts_ndb_additional_01.dat'
INITIAL_SIZE = 512M
WAIT
ENGINE = NDB;

-- Adicionar múltiplos datafiles sequencialmente
ALTER TABLESPACE ts_ndb_complete
ADD DATAFILE 'ts_ndb_additional_02.dat'
INITIAL_SIZE = 1G
WAIT
ENGINE = NDB;

ALTER TABLESPACE ts_ndb_complete
ADD DATAFILE 'ts_ndb_additional_03.dat'
INITIAL_SIZE = 1G
WAIT
ENGINE = NDB;

-- Remover datafile do tablespace NDB
ALTER TABLESPACE ts_ndb_complete
DROP DATAFILE 'ts_ndb_additional_01.dat'
WAIT
ENGINE = NDB;

-- Renomear tablespace NDB
ALTER TABLESPACE ts_ndb_complete
RENAME TO ts_ndb_renamed
ENGINE = NDB;

-- ============================================================================
-- 2. NDB ENGINE - UNDO (LOGFILE GROUP) OPERATIONS
-- ============================================================================

-- No NDB, o undo fica em LOGFILE GROUPs, não em UNDO TABLESPACEs
-- (ALTER UNDO TABLESPACE é recurso do InnoDB)

-- Adicionar undofile ao logfile group NDB
ALTER LOGFILE GROUP lg_ndb_complete
ADD UNDOFILE 'undo_ndb_additional.dat'
INITIAL_SIZE = 256M
WAIT
ENGINE = NDB;

-- Observação: não existe DROP UNDOFILE nem RENAME para logfile groups;
-- para reduzir ou renomear é preciso recriar o logfile group

-- ============================================================================
-- 3. InnoDB ENGINE - CONFIGURAÇÕES (ADD DATAFILE NÃO É SUPORTADO)
-- ============================================================================

-- ❌ NÃO FUNCIONA: o MySQL 8.0 não suporta ADD DATAFILE para InnoDB;
-- um tablespace geral InnoDB tem um único datafile, definido no CREATE.
-- O comando abaixo fica comentado apenas como referência do que NÃO fazer:

-- ALTER TABLESPACE ts_innodb_complete
-- ADD DATAFILE '/u01/oradata/df_innodb_02.ibd'
-- INITIAL_SIZE = 48M
-- ENGINE = InnoDB;

-- Alterar AUTOEXTEND_SIZE do tablespace InnoDB
ALTER TABLESPACE ts_innodb_complete
AUTOEXTEND_SIZE = 128M
ENGINE = InnoDB;

-- Ativar criptografia no tablespace InnoDB
ALTER TABLESPACE ts_innodb_complete
ENCRYPTION = 'Y'
AUTOEXTEND_SIZE = 64M
ENGINE = InnoDB;

-- Desativar criptografia no tablespace InnoDB
ALTER TABLESPACE ts_innodb_complete
ENCRYPTION = 'N'
ENGINE = InnoDB;

-- Renomear tablespace InnoDB
-- (melhor manter o RENAME TO em um comando próprio)
ALTER TABLESPACE ts_innodb_complete
RENAME TO ts_innodb_renamed;

-- ============================================================================
-- 4. InnoDB ENGINE - UNDO TABLESPACE STATUS
-- ============================================================================

-- Definir UNDO tablespace como ATIVO
ALTER UNDO TABLESPACE undo_ts_innodb_complete
SET ACTIVE
ENGINE = InnoDB;

-- Definir UNDO tablespace como INATIVO
-- (pré-requisito para um eventual DROP UNDO TABLESPACE)
ALTER UNDO TABLESPACE undo_ts_innodb_complete
SET INACTIVE
ENGINE = InnoDB;

-- Observação: UNDO tablespaces InnoDB não podem ser renomeados e a
-- criptografia de undo é controlada pela variável innodb_undo_log_encrypt;
-- para "trocar o nome", crie um novo undo tablespace, marque o antigo
-- como INACTIVE e remova-o com DROP UNDO TABLESPACE

-- ============================================================================
-- 5. SEQUÊNCIAS DE OPERAÇÕES (UM COMANDO POR OPERAÇÃO)
-- ============================================================================

-- NDB: adicionar datafile e depois renomear o tablespace
ALTER TABLESPACE ts_ndb_production
ADD DATAFILE '/data/mysql/ndb/ts_prod_extra_01.dat'
INITIAL_SIZE = 2G
WAIT
ENGINE = NDB;

ALTER TABLESPACE ts_ndb_production
RENAME TO ts_ndb_production_v2
ENGINE = NDB;

-- InnoDB: ajustar configurações e depois renomear
ALTER TABLESPACE ts_innodb_production
AUTOEXTEND_SIZE = 256M;

ALTER TABLESPACE ts_innodb_production
ENCRYPTION = 'Y';

ALTER TABLESPACE ts_innodb_production
RENAME TO ts_innodb_production_v2;

-- InnoDB UNDO: ativar o tablespace de undo de produção
ALTER UNDO TABLESPACE undo_production
SET ACTIVE
ENGINE = InnoDB;

-- ============================================================================
-- 6. CENÁRIOS PRÁTICOS DE MANUTENÇÃO
-- ============================================================================

-- Cenário 1: Expansão de capacidade NDB
-- Adicionar múltiplos datafiles para aumentar capacidade
ALTER TABLESPACE ts_app_data
ADD DATAFILE '/storage/fast/mysql/app_data_ssd_01.dat'
INITIAL_SIZE = 5G
WAIT
ENGINE = NDB;

ALTER TABLESPACE ts_app_data
ADD DATAFILE '/storage/bulk/mysql/app_data_bulk_01.dat'
INITIAL_SIZE = 10G
WAIT
ENGINE = NDB;

-- Cenário 2: Migração e otimização InnoDB
-- Ajustar configurações e depois renomear
ALTER TABLESPACE ts_legacy_data
AUTOEXTEND_SIZE = 512M;

ALTER TABLESPACE ts_legacy_data
ENCRYPTION = 'Y';

ALTER TABLESPACE ts_legacy_data
RENAME TO ts_optimized_data;

-- Cenário 3: Gerenciamento de UNDO tablespaces
-- Alternar entre UNDO tablespaces para manutenção
ALTER UNDO TABLESPACE undo_primary
SET INACTIVE
ENGINE = InnoDB;

ALTER UNDO TABLESPACE undo_secondary
SET ACTIVE
ENGINE = InnoDB;

-- Cenário 4: Rebalanceamento de storage NDB
-- Remover datafiles de storage lento
ALTER TABLESPACE ts_high_performance
DROP DATAFILE 'slow_storage_file.dat'
WAIT
ENGINE = NDB;

-- Adicionar datafiles em storage rápido
ALTER TABLESPACE ts_high_performance
ADD DATAFILE '/nvme/mysql/high_perf_01.dat'
INITIAL_SIZE = 8G
WAIT
ENGINE = NDB;

-- ============================================================================
-- 7. VERIFICAÇÃO E MONITORAMENTO
-- ============================================================================

-- Verificar status dos tablespaces após alterações
SELECT 
    TABLESPACE_NAME,
    FILE_NAME,
    FILE_TYPE,
    TOTAL_EXTENTS,
    FREE_EXTENTS,
    INITIAL_SIZE,
    MAXIMUM_SIZE,
    AUTOEXTEND_SIZE,
    ENGINE
FROM INFORMATION_SCHEMA.FILES 
WHERE ENGINE IN ('NDB', 'InnoDB')
ORDER BY TABLESPACE_NAME, FILE_NAME;

-- Verificar UNDO tablespaces InnoDB
SELECT 
    SPACE,
    NAME,
    STATE,
    SPACE_TYPE
FROM INFORMATION_SCHEMA.INNODB_TABLESPACES
WHERE SPACE_TYPE = 'Undo'
ORDER BY NAME;
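
-- Esboço adicional: conferir o estado de criptografia dos tablespaces
-- gerais (coluna ENCRYPTION do INFORMATION_SCHEMA, MySQL 8.0+)
SELECT NAME, ENCRYPTION
FROM INFORMATION_SCHEMA.INNODB_TABLESPACES
WHERE SPACE_TYPE = 'General'
ORDER BY NAME;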

-- ============================================================================
-- EXPLICAÇÃO DETALHADA DOS PARÂMETROS
-- ============================================================================

/*
PARÂMETROS POR ENGINE:

NDB EXCLUSIVO:
- ADD DATAFILE: Adiciona novo arquivo de dados
- DROP DATAFILE: Remove arquivo de dados (deve estar vazio)
- INITIAL_SIZE: Tamanho inicial do novo datafile
- WAIT: Aguarda a conclusão da operação

InnoDB:
- ADD DATAFILE: ❌ NÃO suportado (um único datafile, definido no CREATE)
- AUTOEXTEND_SIZE: Tamanho do incremento automático (MySQL 8.0.23+)
- SET ACTIVE/INACTIVE: Estado do UNDO tablespace
- ENCRYPTION: Ativação/desativação da criptografia

PARÂMETROS COMUNS:
- RENAME TO: Renomeia o tablespace
- ENGINE: Especifica a engine (pode ser omitido)

IMPORTANTE:
- O suporte exato pode variar entre versões do MySQL/MariaDB
- Sempre teste em ambiente de desenvolvimento primeiro
- Verifique a documentação da sua versão específica

VALORES RECOMENDADOS:
- AUTOEXTEND_SIZE InnoDB: 64M-1G (dependendo do uso)
- INITIAL_SIZE: 48M-1G (dependendo do datafile)
- ENCRYPTION: 'Y' para dados sensíveis
- Sempre usar WAIT em operações NDB críticas
*/
</pre></div>


<p>Bueno, é isso ae cambada, mais uma do golfinho pra vocês, espero que aproveitem e se organizem como bons DBA Oracle, quero dizer, DBA MySQL👀</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>MySQL InnoDB Cluster &#8211; Descomplicado</title>
		<link>https://furushima.com.br/blog/mysql-innodb-cluster-descomplicado/</link>
		
		<dc:creator><![CDATA[Acacio Lima Rocha]]></dc:creator>
		<pubDate>Sat, 27 Sep 2025 16:37:12 +0000</pubDate>
				<category><![CDATA[MySQL]]></category>
		<guid isPermaLink="false">https://furushima.com.br/?p=2896</guid>

					<description><![CDATA[Iaeeeeeee cambada, tudo na paz? bora falar mais um cadim do database do golfinho? ⚠️ CONTÉM TEXTO MELHORADO POR AI &#8211; E TA TUDO BEM (SE SOUBER USAR 🤭)⚠️ Neste artigo aqui eu vou falar do setup do InnoDB Cluster (mostrar como faz também) e vou dar alguns detalhes de como administrar, monitorar etc, nada [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Iaeeeeeee cambada, tudo na paz? bora falar mais um cadim do database do golfinho?</p>



<p class="has-vivid-red-color has-text-color has-link-color wp-elements-6d5b5a8f143d711856e2bac48efce38c"><strong>⚠️ CONTÉM TEXTO MELHORADO POR AI  &#8211; E TA TUDO BEM (SE SOUBER USAR 🤭)⚠️ </strong></p>



<p>Neste artigo aqui eu vou falar do setup do InnoDB Cluster (mostrar como faz também) e vou dar alguns detalhes de como administrar, monitorar etc, nada de outro mundo. E não, não é parecido com o Oracle RAC, mas é quase tão legal quanto.</p>



<h3 class="wp-block-heading">O que é o MySQL InnoDB Cluster?</h3>



<p>O InnoDB Cluster é uma arquitetura de alta disponibilidade nativa do MySQL que combina replicação automágica, failover integrado e escalabilidade da parada toda. Ele utiliza o Group Replication para sincronizar as instâncias de banco de dados, garantindo consistência entre os nós e permitindo a distribuição de cargas de trabalho. De novo, não existe nada como o Oracle RAC mesmo 😆 (e ponto final).</p>



<h2 class="wp-block-heading">Bora para a <a href="https://dev.mysql.com/doc/refman/8.4/en/mysql-innodb-cluster-introduction.html" target="_blank" rel="noreferrer noopener">arquitetura</a>:</h2>



<p>A arquitetura de um cluster de alta disponibilidade (High Availability Cluster) do MySQL com Group Replication:</p>


<div class="wp-block-image">
<figure class="aligncenter size-large is-resized"><img alt="" decoding="async" src="https://acaciolrdba.wordpress.com/wp-content/uploads/2024/12/image.png?w=394" class="wp-image-1063" style="width:534px;height:auto"/><figcaption class="wp-element-caption">Imagem tirada da doc oficial</figcaption></figure></div>


<h3 class="wp-block-heading">Componentes:</h3>



<ol class="wp-block-list">
<li><strong>Client App</strong>:
<ul class="wp-block-list">
<li>Representa as aplicações dos usuários que se conectam ao banco de dados.</li>



<li>Utiliza o <strong>MySQL Connector</strong> para comunicação com o cluster via MySQL Router.</li>
</ul>
</li>



<li><strong>MySQL Router</strong>:
<ul class="wp-block-list">
<li>Um middleware que atua como intermediário entre os aplicativos e os servidores MySQL.</li>



<li>Direciona as solicitações de leitura/escrita para o nó primário (Primary Instance R/W) e distribui as leituras entre os nós secundários (Secondary Instances R/O), dependendo da configuração.</li>
</ul>
</li>



<li><strong>MySQL Shell (Cluster Admin)</strong>:
<ul class="wp-block-list">
<li>Uma interface de administração utilizada para gerenciar o cluster.</li>



<li>Faz uso da <strong>MySQL Admin API</strong> para configurar e monitorar o cluster, incluindo a inicialização do Group Replication.</li>
</ul>
</li>



<li><strong>MySQL Servers</strong>:
<ul class="wp-block-list">
<li><strong>Primary Instance R/W</strong>:
<ul class="wp-block-list">
<li>É o nó principal que processa as operações de leitura e escrita.</li>



<li>Participa do <strong>Group Replication</strong>, garantindo que as alterações sejam propagadas para os nós secundários.</li>
</ul>
</li>



<li><strong>Secondary Instances R/O</strong>:
<ul class="wp-block-list">
<li>São nós secundários configurados para replicação em tempo real.</li>



<li>São usados principalmente para operações de leitura, otimizando o desempenho do cluster.</li>
</ul>
</li>
</ul>
</li>
</ol>



<h3 class="wp-block-heading">Funcionamento:</h3>



<ul class="wp-block-list">
<li><strong>Group Replication</strong>:
<ul class="wp-block-list">
<li>Um protocolo de replicação nativa do MySQL usado para sincronizar os dados entre o nó primário e os nós secundários.</li>



<li>Assegura que todas as instâncias estejam atualizadas com as alterações feitas no nó primário.</li>
</ul>
</li>



<li><strong>Alta Disponibilidade</strong>:
<ul class="wp-block-list">
<li>Se o nó primário falhar, um dos nós secundários pode ser promovido automaticamente a nó primário para garantir a continuidade do serviço.</li>



<li>O <strong>MySQL Router</strong> ajusta automaticamente os encaminhamentos para refletir essa mudança.</li>
</ul>
</li>
</ul>



<h2 class="wp-block-heading">Setup do InnoDB Cluster</h2>



<p>Agora, vou mostrar o passo a passo para realizar o setup do cluster.</p>



<p>Tópicos do rolê:</p>



<ul class="wp-block-list">
<li>Topologia</li>



<li>Pré-Requisitos</li>



<li>Instalação do MySQL e seus componentes (MySQL Shell e o escambau)</li>



<li>Configuração do my.cnf</li>



<li>Setup do Cluster via MySQL Shell</li>



<li>Setup do MySQL Router</li>



<li>Teste de disponibilidade</li>
</ul>



<h2 class="wp-block-heading">Topologia da parada</h2>



<ul class="wp-block-list">
<li>myorcl1
<ul class="wp-block-list">
<li>192.168.10.101</li>
</ul>
</li>



<li>myorcl2
<ul class="wp-block-list">
<li>192.168.10.102</li>
</ul>
</li>



<li>myorcl3
<ul class="wp-block-list">
<li>192.168.10.103</li>
</ul>
</li>



<li>myrouter
<ul class="wp-block-list">
<li>192.168.10.100</li>
</ul>
</li>
</ul>



<h2 class="wp-block-heading">Pré-Requisitos (Do meu ambiente kkk)</h2>



<ul class="wp-block-list">
<li>Ubuntu 20
<ul class="wp-block-list">
<li>RAM 8GB</li>



<li>HD 40GB</li>
</ul>
</li>



<li>MySQL Server 8</li>



<li>MySQL Shell 8</li>



<li>MySQL Router 8</li>



<li>&#8220;DNS&#8221;
<ul class="wp-block-list">
<li>Config do /etc/hosts</li>
</ul>
</li>
</ul>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
127.0.0.1       localhost

# Configuração dos nós do cluster
192.168.10.101  myorcl1
192.168.10.102  myorcl2
192.168.10.103  myorcl3

# Configuração do MySQL Router
192.168.10.100  myrouter
</pre></div>


<h2 class="wp-block-heading">Instalação do MySQL e seus componentes (MySQL Shell e o escambau)</h2>



<h3 class="wp-block-heading">Passo 1: Instalação do MySQL e MySQL Shell no Ubuntu</h3>



<h4 class="wp-block-heading">Instalar o MySQL Server</h4>



<p>Execute os comandos abaixo em <strong>cada servidor do cluster</strong> (<code>myorcl1</code>, <code>myorcl2</code>, <code>myorcl3</code>):</p>



<h5 class="wp-block-heading">Atualize os pacotes do sistema</h5>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
sudo apt update &amp;&amp; sudo apt upgrade -y
</pre></div>


<h5 class="wp-block-heading">Adicione o repositório do MySQL</h5>



<p>Baixe e adicione o repositório oficial do MySQL:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
wget https://dev.mysql.com/get/mysql-apt-config_0.8.26-1_all.deb
sudo dpkg -i mysql-apt-config_0.8.26-1_all.deb
sudo apt update
</pre></div>


<h5 class="wp-block-heading">Instale o MySQL Server</h5>



<p>Instale a versão desejada do MySQL Server (por exemplo, 8.0):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
sudo apt install -y mysql-server
</pre></div>


<h5 class="wp-block-heading">Verifique se o serviço está ativo</h5>



<p>Inicie e habilite o MySQL para iniciar no boot:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
sudo systemctl start mysql

sudo systemctl enable mysql

Synchronizing state of mysql.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable mysql

sudo systemctl status mysql

● mysql.service - MySQL Community Server
   Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2024-12-29 10:15:32 UTC; 5min ago
 Main PID: 12345 (mysqld)
    Tasks: 37 (limit: 4915)
   Memory: 148.4M
   CGroup: /system.slice/mysql.service
           └─12345 /usr/sbin/mysqld

Dec 29 10:15:32 ubuntu-server systemd&#x5B;1]: Started MySQL Community Server.
</pre></div>


<h5 class="wp-block-heading">Realize a configuração inicial do MySQL</h5>



<p>Utilize o utilitário de configuração inicial do MySQL para definir senha e outras configs, é só seguir o que aparece na tela e ser feliz:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
sudo mysql_secure_installation
</pre></div>


<h4 class="wp-block-heading">Instalar o MySQL Shell</h4>



<h5 class="wp-block-heading">Instale o MySQL Shell</h5>



<p>O MySQL Shell pode ser instalado com o seguinte comando:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
sudo apt install -y mysql-shell
</pre></div>


<h5 class="wp-block-heading">Verifique a instalação do MySQL Shell</h5>



<p>Confirme que o MySQL Shell está instalado:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
mysqlsh --version
</pre></div>


<h5 class="wp-block-heading">Configurar o MySQL para aceitar conexões externas</h5>



<p>Pelo root:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
UPDATE mysql.user SET host = '%' WHERE user = 'root' AND host = 'localhost';
FLUSH PRIVILEGES;
</pre></div>


<p>Pelo bind-address:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
sudo vi /etc/mysql/mysql.conf.d/mysqld.cnf
bind-address = 0.0.0.0
</pre></div>


<h3 class="wp-block-heading"><strong>Portas do firewall utilizadas no MySQL InnoDB Cluster</strong></h3>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th>Porta</th><th>Protocolo</th><th>Finalidade</th></tr></thead><tbody><tr><td>3306</td><td>TCP</td><td>Conexões ao banco de dados MySQL.</td></tr><tr><td>33061</td><td>TCP</td><td>Comunicação interna do Group Replication.</td></tr><tr><td>6446</td><td>TCP</td><td>MySQL Router (opcional, ajuste conforme necessário).</td></tr><tr><td>22</td><td>TCP</td><td>SSH (opcional, para administração remota).</td></tr></tbody></table></figure>



<h2 class="wp-block-heading">Configuração do my.cnf</h2>



<p>vi no danado do arquivo de configuração, em cada nó (no Ubuntu: /etc/mysql/mysql.conf.d/mysqld.cnf; em outras distros, /etc/my.cnf):</p>



<h5 class="wp-block-heading">myorcl1</h5>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
&#x5B;mysqld]
server-id=1
log_bin=mysql-bin
binlog_checksum=NONE
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name=&quot;aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa&quot;
loose-group_replication_start_on_boot=OFF
loose-group_replication_local_address=&quot;192.168.10.101:33061&quot;
loose-group_replication_group_seeds=&quot;192.168.10.101:33061,192.168.10.102:33061,192.168.10.103:33061&quot;
loose-group_replication_bootstrap_group=OFF
bind-address=192.168.10.101
report_host=192.168.10.101
port=3306
</pre></div>


<h5 class="wp-block-heading">myorcl2</h5>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
&#x5B;mysqld]
server-id=2
log_bin=mysql-bin
binlog_checksum=NONE
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name=&quot;aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa&quot;
loose-group_replication_start_on_boot=OFF
loose-group_replication_local_address=&quot;192.168.10.102:33061&quot;
loose-group_replication_group_seeds=&quot;192.168.10.101:33061,192.168.10.102:33061,192.168.10.103:33061&quot;
loose-group_replication_bootstrap_group=OFF
bind-address=192.168.10.102
report_host=192.168.10.102
port=3307
</pre></div>


<h5 class="wp-block-heading">myorcl3</h5>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
&#x5B;mysqld]
server-id=3
log_bin=mysql-bin
binlog_checksum=NONE
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name=&quot;aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa&quot;
loose-group_replication_start_on_boot=OFF
loose-group_replication_local_address=&quot;192.168.10.103:33061&quot;
loose-group_replication_group_seeds=&quot;192.168.10.101:33061,192.168.10.102:33061,192.168.10.103:33061&quot;
loose-group_replication_bootstrap_group=OFF
bind-address=192.168.10.103
report_host=192.168.10.103
port=3308
</pre></div>


<h5 class="wp-block-heading">myrouter</h5>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
# O host myrouter não roda mysqld, então não precisa de bloco &#x5B;mysqld];
# o MySQL Router é configurado pelo arquivo /etc/mysqlrouter/mysqlrouter.conf,
# gerado pelo --bootstrap (mostrado logo adiante)
</pre></div>


<h2 class="wp-block-heading">Setup do Cluster via MySQL Shell</h2>



<p>Exemplos de como logar no MySQL Shell:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
mysqlsh --user=root --password --host=192.168.10.101 --port=3306 --js
mysqlsh --user=root --password --host=192.168.10.102 --port=3307 --js
mysqlsh --user=root --password --host=192.168.10.103 --port=3308 --js
</pre></div>


<p>No node myorcl1, vamos configurar a instância:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: jscript; title: ; notranslate">
mysql-js &gt; dba.configureInstance();

Creating configuration for instance 'myorcl1'
Instance 'myorcl1' configured successfully.
</pre></div>


<p>Ainda no myorcl1, criamos o cluster:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: jscript; title: ; notranslate">
mysql-js &gt; var cluster = dba.createCluster(&quot;ClusterBrabo&quot;);

Creating cluster 'ClusterBrabo'...
Cluster 'ClusterBrabo' created successfully.
The cluster will be available once it is fully initialized.
Initializing cluster 'ClusterBrabo'...
Configuring instance 'myorcl1' for inclusion in the cluster...
Cluster 'ClusterBrabo' initialized and instance 'myorcl1' added successfully.
</pre></div>


<p>Ainda no myorcl1, adicione os nodes myorcl2 e myorcl3 ao nosso cluster:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: jscript; title: ; notranslate">
mysql-js &gt; cluster.addInstance('root@192.168.10.102:3307');

Adding instance 'root@192.168.10.102' to cluster 'ClusterBrabo'...
Verifying the instance is running and accessible...
Verifying MySQL Group Replication status...
Configuring instance 'root@192.168.10.102' for Group Replication...
Adding instance 'root@192.168.10.102' to the cluster...
Instance 'root@192.168.10.102' added successfully to cluster 'ClusterBrabo'.

mysql-js &gt; cluster.addInstance('root@192.168.10.103:3308');

Adding instance 'root@192.168.10.103' to cluster 'ClusterBrabo'...
Verifying the instance is running and accessible...
Verifying MySQL Group Replication status...
Configuring instance 'root@192.168.10.103' for Group Replication...
Adding instance 'root@192.168.10.103' to the cluster...
Instance 'root@192.168.10.103' added successfully to cluster 'ClusterBrabo'.

</pre></div>


<p>Vamos dar um check na bagaça:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: jscript; title: ; notranslate">
mysql-js &gt; cluster.status()
{
  &quot;clusterName&quot;: &quot;ClusterBrabo&quot;,
  &quot;defaultReplicaSet&quot;: {
    &quot;name&quot;: &quot;default&quot;,
    &quot;primary&quot;: &quot;myorcl1:3306&quot;,
    &quot;ssl&quot;: &quot;REQUIRED&quot;,
    &quot;status&quot;: &quot;OK&quot;,
    &quot;statusText&quot;: &quot;Cluster is ONLINE and can tolerate up to ONE failure.&quot;,
    &quot;topology&quot;: {
      &quot;myorcl1:3306&quot;: {
        &quot;address&quot;: &quot;myorcl1:3306&quot;,
        &quot;memberRole&quot;: &quot;PRIMARY&quot;,
        &quot;mode&quot;: &quot;R/W&quot;,
        &quot;readReplicas&quot;: {},
        &quot;replicationLag&quot;: null,
        &quot;role&quot;: &quot;HA&quot;,
        &quot;status&quot;: &quot;ONLINE&quot;,
        &quot;version&quot;: &quot;8.0.23&quot;
      },
      &quot;myorcl2:3307&quot;: {
        &quot;address&quot;: &quot;myorcl2:3307&quot;,
        &quot;memberRole&quot;: &quot;SECONDARY&quot;,
        &quot;mode&quot;: &quot;R/O&quot;,
        &quot;readReplicas&quot;: {},
        &quot;replicationLag&quot;: null,
        &quot;role&quot;: &quot;HA&quot;,
        &quot;status&quot;: &quot;ONLINE&quot;,
        &quot;version&quot;: &quot;8.0.23&quot;
      },
      &quot;myorcl3:3308&quot;: {
        &quot;address&quot;: &quot;myorcl3:3308&quot;,
        &quot;memberRole&quot;: &quot;SECONDARY&quot;,
        &quot;mode&quot;: &quot;R/O&quot;,
        &quot;readReplicas&quot;: {},
        &quot;replicationLag&quot;: null,
        &quot;role&quot;: &quot;HA&quot;,
        &quot;status&quot;: &quot;ONLINE&quot;,
        &quot;version&quot;: &quot;8.0.23&quot;
      }
    },
    &quot;topologyMode&quot;: &quot;Single-Primary&quot;
  },
  &quot;groupInformationSourceMember&quot;: &quot;myorcl1:3306&quot;
}
</pre></div>
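<p>De quebra: esse JSON do <code>cluster.status()</code> dá pra virar insumo de um script de monitoramento. Um esboço simples em JavaScript (a função <code>resumoMembros</code> é hipotética, só pra ilustrar; a estrutura do objeto segue o exemplo acima):</p>

```javascript
// Resume o papel e o estado de cada membro a partir do objeto
// retornado por cluster.status()
function resumoMembros(status) {
  const topologia = status.defaultReplicaSet.topology;
  return Object.entries(topologia).map(
    ([nome, m]) => `${nome}: ${m.memberRole} (${m.status})`
  );
}

// Exemplo reduzido com a mesma estrutura do status acima
const status = {
  defaultReplicaSet: {
    topology: {
      "myorcl1:3306": { memberRole: "PRIMARY", status: "ONLINE" },
      "myorcl2:3307": { memberRole: "SECONDARY", status: "ONLINE" },
    },
  },
};

console.log(resumoMembros(status).join("\n"));
// myorcl1:3306: PRIMARY (ONLINE)
// myorcl2:3307: SECONDARY (ONLINE)
```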


<h2 class="wp-block-heading">Setup do MySQL Router</h2>



<p>O <strong>MySQL Router</strong> é uma ferramenta que atua como um intermediário entre os clientes (aplicações ou usuários) e o <strong>InnoDB Cluster</strong>. Ele facilita o roteamento de conexões para os nós corretos do cluster, dependendo do tipo de operação que você deseja realizar.</p>



<h5 class="wp-block-heading">no myrouter:</h5>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
wget https://dev.mysql.com/get/Downloads/Router/mysql-router_8.0.40-1_amd64.deb

sudo dpkg -i mysql-router_8.0.40-1_amd64.deb

sudo apt-get install -f
</pre></div>

<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
mysqlrouter --bootstrap root@192.168.10.101:3306 --directory /path/to/mysqlrouter/data
</pre></div>


<p>Agora, vamos configurar o MySQL Router:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
sudo vi /etc/mysqlrouter/mysqlrouter.conf
</pre></div>

<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
&#x5B;DEFAULT]
logging_folder = /var/log/mysqlrouter

&#x5B;logger]
level = INFO

# Rotas geradas pelo --bootstrap (os nomes das seções podem variar
# conforme a versão; o arquivo gerado também traz uma seção
# &#x5B;metadata_cache] com as credenciais do router)
&#x5B;routing:bootstrap_rw]
bind_address = 0.0.0.0
bind_port = 6446
destinations = metadata-cache://ClusterBrabo/?role=PRIMARY
routing_strategy = first-available

&#x5B;routing:bootstrap_ro]
bind_address = 0.0.0.0
bind_port = 6447
destinations = metadata-cache://ClusterBrabo/?role=SECONDARY
routing_strategy = round-robin
</pre></div>


<h5 class="wp-block-heading">onde:</h5>



<ul class="wp-block-list">
<li><strong>bind_address</strong>: O IP no qual o MySQL Router vai escutar.</li>



<li><strong>bind_port</strong>: A porta na qual o Router vai escutar (ex.: 6446 para R/W, 6447 para R/O).</li>



<li><strong>destinations</strong>: De onde o Router descobre os nós do cluster (via metadata cache).</li>



<li><strong>routing_strategy</strong>: Como as conexões são distribuídas entre os nós de destino.</li>
</ul>



<p>Se liga nos exemplos de como conectar usando o router:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
# Conexão de Leitura e Escrita (R/W)
# O MySQL Router redirecionará a conexão para o nó PRIMARY.
mysql -u root -p -h 192.168.10.100 -P 6446

# Conexão Somente Leitura (R/O)
# O MySQL Router redirecionará a conexão para um dos nós SECONDARY.
mysql -u root -p -h 192.168.10.100 -P 6447

# Verifica o nó ao qual você está conectado
SELECT @@hostname, @@port, @@server_id, @@read_only;
</pre></div>


<h2 class="wp-block-heading">Teste de disponibilidade</h2>



<p>Vamos agora parar o myorcl1 e ver o que acontece:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
sudo systemctl stop mysql
</pre></div>


<p>Vamos dar um check agora:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: jscript; title: ; notranslate">
mysql-js &gt; cluster.status()
{
  &quot;clusterName&quot;: &quot;ClusterBrabo&quot;,
  &quot;defaultReplicaSet&quot;: {
    &quot;name&quot;: &quot;default&quot;,
    &quot;primary&quot;: &quot;myorcl2:3307&quot;,
    &quot;ssl&quot;: &quot;REQUIRED&quot;,
    &quot;status&quot;: &quot;OK&quot;,
    &quot;statusText&quot;: &quot;Cluster is ONLINE but cannot tolerate further failures.&quot;,
    &quot;topology&quot;: {
      &quot;myorcl1:3306&quot;: {
        &quot;address&quot;: &quot;myorcl1:3306&quot;,
        &quot;memberRole&quot;: &quot;UNREACHABLE&quot;,
        &quot;mode&quot;: &quot;R/W&quot;,
        &quot;readReplicas&quot;: {},
        &quot;replicationLag&quot;: null,
        &quot;role&quot;: &quot;HA&quot;,
        &quot;status&quot;: &quot;OFFLINE&quot;,
        &quot;version&quot;: &quot;8.0.23&quot;
      },
      &quot;myorcl2:3307&quot;: {
        &quot;address&quot;: &quot;myorcl2:3307&quot;,
        &quot;memberRole&quot;: &quot;PRIMARY&quot;,
        &quot;mode&quot;: &quot;R/W&quot;,
        &quot;readReplicas&quot;: {},
        &quot;replicationLag&quot;: null,
        &quot;role&quot;: &quot;HA&quot;,
        &quot;status&quot;: &quot;ONLINE&quot;,
        &quot;version&quot;: &quot;8.0.23&quot;
      },
      &quot;myorcl3:3308&quot;: {
        &quot;address&quot;: &quot;myorcl3:3308&quot;,
        &quot;memberRole&quot;: &quot;SECONDARY&quot;,
        &quot;mode&quot;: &quot;R/O&quot;,
        &quot;readReplicas&quot;: {},
        &quot;replicationLag&quot;: null,
        &quot;role&quot;: &quot;HA&quot;,
        &quot;status&quot;: &quot;ONLINE&quot;,
        &quot;version&quot;: &quot;8.0.23&quot;
      }
    },
    &quot;topologyMode&quot;: &quot;Single-Primary&quot;
  },
  &quot;groupInformationSourceMember&quot;: &quot;myorcl2:3307&quot;
}
</pre></div>


<p>Legal, agora damos um start novamente:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
sudo systemctl start mysql
</pre></div>


<p>Damos então um rejoin (assim que vi por aí kkkkk):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
mysql-js &gt; cluster.rejoinInstance('root@myorcl1:3306')
</pre></div>


<p>eeeeeee&#8230; tá aí:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: jscript; title: ; notranslate">
mysql-js &gt; cluster.status()
{
  &quot;clusterName&quot;: &quot;ClusterBrabo&quot;,
  &quot;defaultReplicaSet&quot;: {
    &quot;name&quot;: &quot;default&quot;,
    &quot;primary&quot;: &quot;myorcl2:3307&quot;,
    &quot;ssl&quot;: &quot;REQUIRED&quot;,
    &quot;status&quot;: &quot;OK&quot;,
    &quot;statusText&quot;: &quot;Cluster is ONLINE and can tolerate up to ONE failure.&quot;,
    &quot;topology&quot;: {
      &quot;myorcl1:3306&quot;: {
        &quot;address&quot;: &quot;myorcl1:3306&quot;,
        &quot;memberRole&quot;: &quot;SECONDARY&quot;,
        &quot;mode&quot;: &quot;R/O&quot;,
        &quot;readReplicas&quot;: {},
        &quot;replicationLag&quot;: null,
        &quot;role&quot;: &quot;HA&quot;,
        &quot;status&quot;: &quot;ONLINE&quot;,
        &quot;version&quot;: &quot;8.0.23&quot;
      },
      &quot;myorcl2:3306&quot;: {
        &quot;address&quot;: &quot;myorcl2:3306&quot;,
        &quot;memberRole&quot;: &quot;PRIMARY&quot;,
        &quot;mode&quot;: &quot;R/W&quot;,
        &quot;readReplicas&quot;: {},
        &quot;replicationLag&quot;: null,
        &quot;role&quot;: &quot;HA&quot;,
        &quot;status&quot;: &quot;ONLINE&quot;,
        &quot;version&quot;: &quot;8.0.23&quot;
      },
      &quot;myorcl3:3306&quot;: {
        &quot;address&quot;: &quot;myorcl3:3306&quot;,
        &quot;memberRole&quot;: &quot;SECONDARY&quot;,
        &quot;mode&quot;: &quot;R/O&quot;,
        &quot;readReplicas&quot;: {},
        &quot;replicationLag&quot;: null,
        &quot;role&quot;: &quot;HA&quot;,
        &quot;status&quot;: &quot;ONLINE&quot;,
        &quot;version&quot;: &quot;8.0.23&quot;
      }
    },
    &quot;topologyMode&quot;: &quot;Single-Primary&quot;
  },
  &quot;groupInformationSourceMember&quot;: &quot;myorcl2:3306&quot;
}
</pre></div>


<p>But how do I know how many failures my cluster can tolerate? Come on, it's in the literature, pal. Check it out:</p>



<p>There is a formula to determine how many failures your cluster can tolerate, where S = number of servers and f = number of failures the cluster can survive:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
S = 2f + 1
</pre></div>


<p>For example (taken from the literature, again):</p>



<p>If you have a cluster with 7 nodes, it tolerates up to 3 failures 🙃</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
7 = 2f + 1
2f = 7 - 1
2f = 6
f = 6 / 2
f = 3
</pre></div>


<p>Now something closer to our world (the broke one, lol), 3 servers:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
f = (3 - 1) / 2
f = 2 / 2
f = 1
</pre></div>
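<p>The algebra above can be double-checked with a quick sketch (plain Python, nothing MySQL-specific; the function names are mine):</p>

```python
# Majority-based quorum: a group of S servers stays available while
# floor(S/2) + 1 members are alive, so it tolerates f = (S - 1) // 2 failures.

def tolerated_failures(servers):
    """f such that S = 2f + 1."""
    return (servers - 1) // 2

def servers_needed(failures):
    """Minimum S that survives the given number of failures."""
    return 2 * failures + 1

for s in (3, 5, 7, 9):
    print(s, "servers tolerate", tolerated_failures(s), "failure(s)")
# 3 servers tolerate 1 failure(s), ..., 7 servers tolerate 3 failure(s)
```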


<p>So that's it. Also, before you kill yourself in lab setups or go to production with doubts, you can use the <a href="https://dev.mysql.com/doc/mysql-shell/8.0/en/deploy-sandbox-instances.html" target="_blank" rel="noreferrer noopener">MySQL InnoDB Cluster Sandbox</a>, which is a fun way to play around and understand a bit of InnoDB Cluster.</p>



<p>Here is a handy little script to create a sandbox nice and easy:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: python; title: ; notranslate">
# Introducing InnoDB Cluster
#
# This Python script is designed to set up an InnoDB Cluster in a sandbox.
#
# Note: Change the sandbox directory to match your preferred directory setup.
#
# The steps include:
# 1) Create the sandbox directory
# 2) Deploy instances
# 3) Create the cluster
# 4) Add instances to the cluster
# 5) Show the cluster status
#
# Updated for modern MySQL Shell versions
# Dr. Charles Bell, 2024 (Updated by Assistant)

import os
import time
from mysqlsh import dba, shell  # Import MySQL Shell modules

# Method to deploy a sandbox instance
def deploy_instance(port):
    try:
        dba.deploy_sandbox_instance(
            port,
            {
                'sandboxDir': '/home/user/idc_sandbox',  # Adjusted for Linux/macOS
                'password': 'root'
            }
        )
        print(f&quot;Instance deployed on port {port}&quot;)
    except Exception as e:
        print(f&quot;ERROR: Cannot set up the instance in the sandbox on port {port}. Error: {e}&quot;)
    time.sleep(1)

# Method to add an instance to the cluster
def add_instance(cluster, port):
    try:
        cluster.add_instance(
            f'root:root@localhost:{port}',  # password goes in the URI; 'password' is not an add_instance option
            {
                'recoveryMethod': 'clone'  # Use 'clone' for modern MySQL versions
            }
        )
        print(f&quot;Instance on port {port} added to the cluster.&quot;)
    except Exception as e:
        print(f&quot;ERROR: Cannot add instance on port {port} to the cluster. Error: {e}&quot;)
    time.sleep(1)

# Main script
if __name__ == &quot;__main__&quot;:
    print(&quot;##### STEP 1 of 5 : CREATE SANDBOX DIRECTORY #####&quot;)
    sandbox_dir = '/home/user/idc_sandbox'  # Adjusted for Linux/macOS
    if not os.path.exists(sandbox_dir):
        os.mkdir(sandbox_dir)
        print(f&quot;Sandbox directory created at {sandbox_dir}&quot;)
    else:
        print(f&quot;Sandbox directory already exists at {sandbox_dir}&quot;)

    print(&quot;##### STEP 2 of 5 : DEPLOY INSTANCES #####&quot;)
    deploy_instance(3311)
    deploy_instance(3312)
    deploy_instance(3313)
    deploy_instance(3314)

    print(&quot;##### STEP 3 of 5 : CREATE CLUSTER #####&quot;)
    try:
        shell.connect('root@localhost:3311', 'root')  # second argument is the password string
        my_cluster = dba.create_cluster(
            'MyCluster',
            {
                'multiPrimary': False  # Updated parameter name for single-primary mode
            }
        )
        print(&quot;Cluster 'MyCluster' created successfully.&quot;)
    except Exception as e:
        print(f&quot;ERROR: Cannot create the cluster. Error: {e}&quot;)
    time.sleep(1)

    print(&quot;##### STEP 4 of 5 : ADD INSTANCES TO CLUSTER #####&quot;)
    add_instance(my_cluster, 3312)
    add_instance(my_cluster, 3313)
    add_instance(my_cluster, 3314)

    print(&quot;##### STEP 5 of 5 : SHOW CLUSTER STATUS #####&quot;)
    try:
        shell.connect('root@localhost:3311', 'root')  # second argument is the password string
        my_cluster = dba.get_cluster('MyCluster')
        status = my_cluster.status()
        print(&quot;Cluster Status:&quot;)
        print(status)
    except Exception as e:
        print(f&quot;ERROR: Cannot retrieve cluster status. Error: {e}&quot;)
</pre></div>


<p>Ah, I almost forgot the handy commands for administering and monitoring the cluster:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: jscript; title: ; notranslate">
// Check the cluster status
cluster.status()

// Check the detailed status of each node
cluster.status({extended: true})

// Add a new node to the cluster
// Replace 'senha' with the actual root password
cluster.addInstance('root@myorcl4:3306', {password: 'senha'})

// Remove a node from the cluster
// Example: remove node myorcl3:3306
cluster.removeInstance('root@myorcl3:3306', {password: 'senha'})

// Promote a secondary node to primary (manual failover)
// Example: promote node myorcl3:3306 to primary
cluster.setPrimaryInstance('myorcl3:3306')

// Switch the topology to Multi-Primary mode
cluster.switchToMultiPrimaryMode()

// Switch the topology to Single-Primary mode
cluster.switchToSinglePrimaryMode()

// Rejoin a node to the cluster
// Example: rejoin node myorcl3:3306
cluster.rejoinInstance('myorcl3:3306')

// Dissolve the cluster completely
// (there is no dba.dropCluster(); dissolve() is the AdminAPI way)
cluster.dissolve({force: true})

// Back up an instance with the shell dump utility
// Note: util.dumpInstance() dumps the instance the shell is currently
// connected to, so connect to the primary (myorcl2:3306) first
util.dumpInstance('/backups/cluster_backup')

// Inspect the cluster configuration
cluster.describe()

// Upgrade the cluster metadata after upgrading MySQL / MySQL Shell
dba.upgradeMetadata()

// Check whether an instance is ready to be added to the cluster
dba.checkInstanceConfiguration('root@myorcl4:3306', {password: 'senha'})

// Configure a MySQL instance for use in the cluster
// Run this before adding the instance to the cluster
dba.configureInstance('root@myorcl4:3306', {password: 'senha'})

// Check whether an instance's GTID state is consistent with the cluster
// (useful to see if a node can rejoin without data divergence)
cluster.checkInstanceState('myorcl3:3306')

// Change a node's automatic rejoin behavior
// Example: let myorcl3:3306 retry joining the cluster 3 times
cluster.setInstanceOption('myorcl3:3306', 'autoRejoinTries', 3)

// Change how long (in seconds) an unreachable member may stay
// in the group before being expelled
cluster.setOption('expelTimeout', 60)

// Check the options configured on the cluster (global and per instance)
cluster.options()

// Restore quorum manually after losing a majority of members
// Example: trust the partition that contains myorcl2:3306
cluster.forceQuorumUsingPartitionOf('myorcl2:3306')

// Check the current fault tolerance of the cluster
cluster.status().defaultReplicaSet.statusText

// Adjust a node's weight in the primary election
// (a higher weight makes the node more likely to become primary;
// this does NOT change how many failures the cluster tolerates)
cluster.setInstanceOption('myorcl3:3306', 'memberWeight', 80)

// Update the cluster metadata after network or topology changes
// (detects instances added or removed out of band)
cluster.rescan()

// Check the replication lag of a specific node
cluster.status({extended: true}).defaultReplicaSet.topology&#x5B;'myorcl3:3306'].replicationLag

// Check each node's role in the cluster (PRIMARY or SECONDARY)
cluster.status().defaultReplicaSet.topology&#x5B;'myorcl3:3306'].memberRole

// Check each node's operating mode: read/write (R/W) or read-only (R/O)
cluster.status().defaultReplicaSet.topology&#x5B;'myorcl3:3306'].mode
</pre></div>
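<p>Since cluster.status() is plain JSON, you can also do the failure-tolerance math outside the shell. A minimal Python sketch (the topology dict below just mimics the status output shown earlier; it is not a live query):</p>

```python
# Count ONLINE members in a cluster.status()-style document and compute
# how many member failures the group can still absorb (majority quorum).

status = {
    "defaultReplicaSet": {
        "topology": {
            "myorcl1:3306": {"status": "ONLINE"},
            "myorcl2:3306": {"status": "ONLINE"},
            "myorcl3:3306": {"status": "ONLINE"},
        }
    }
}

def failures_tolerated(status_doc):
    members = status_doc["defaultReplicaSet"]["topology"].values()
    online = sum(1 for m in members if m["status"] == "ONLINE")
    return (online - 1) // 2  # f = (S - 1) // 2

print(failures_tolerated(status))  # 3 ONLINE members -> tolerates 1 failure
```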


<p>And we still have our best friend for monitoring MySQL, <a href="https://github.com/charles-001/dolphie" target="_blank" rel="noreferrer noopener">Dolphie</a>:</p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img decoding="async" src="https://acaciolrdba.wordpress.com/wp-content/uploads/2024/12/image-1.png?w=1024" alt="" class="wp-image-1112"/><figcaption class="wp-element-caption">This one was on a sandbox, but it's a good example 😆</figcaption></figure></div>


<h2 class="wp-block-heading">References for study</h2>


<div class="wp-block-image">
<figure class="aligncenter size-large is-resized"><img decoding="async" src="https://acaciolrdba.wordpress.com/wp-content/uploads/2024/12/image-2.png?w=604" alt="" class="wp-image-1116" style="width:770px;height:auto"/></figure></div>


<p>Hope you enjoyed it. If you have any suggestions, fire away in the comments, and we'll see each other or talk somewhere out there. 🤘🏾🤘🏾</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>InnoDB Cluster &#8211; One command</title>
		<link>https://furushima.com.br/blog/innodb-cluster-one-command/</link>
		
		<dc:creator><![CDATA[Acacio Lima Rocha]]></dc:creator>
		<pubDate>Thu, 25 Sep 2025 20:04:09 +0000</pubDate>
				<category><![CDATA[MySQL]]></category>
		<guid isPermaLink="false">https://furushima.com.br/?p=2888</guid>

					<description><![CDATA[Neste post, eu vou compartilhar uma parada que achei bem legal, criar um processo automagico para implementar um InnoDB cluster no MySQL (este ambiente é apenas um sandbox, mas adaptei o script para um ambiente real [brinquem com o sandbox primeiro]). ⚠️ CONTÉM TEXTO MELHORADO POR AI – E TA TUDO BEM (SE SOUBER USAR [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p class="has-black-color has-text-color has-link-color wp-elements-bdcee45699c5ec3a3806611a13e9ca1d">In this post I'm going to share something I found pretty cool: creating an automagic process to deploy an InnoDB cluster in MySQL (this environment is just a sandbox, but I adapted the script for a real environment [play with the sandbox first]).</p>



<p class="has-vivid-red-color has-text-color has-link-color wp-elements-97f38820eb6ad89bfb13592df938e187"><strong>⚠️ CONTAINS TEXT IMPROVED BY AI &#8211; AND THAT'S OK (IF YOU KNOW HOW TO USE IT 🤭)⚠️</strong></p>



<p>Right off the bat, let me sum up the &#8220;One command&#8221;: it's this little guy here:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
mysqlsh --uri root:Welcome1@localhost:3306 --file=&quot;C:\Users\dbabrabo-666\Documents\ACACIOLR-DBA\mysql_full_innodb_cluster_w_replica_setup_mb_v4.js&quot; --log-level=8 --log-file=&quot;C:\temp\cluster_setup.log&quot;
</pre></div>


<p>Except&#8230; there's a truckload of <em>javascript</em> commands to take into account when writing the code so that you can run just 1 command line, sorry 😆</p>



<p class="has-large-font-size"><strong>HERE WE GO:</strong></p>



<h1 class="wp-block-heading">MySQL InnoDB Cluster Settings</h1>



<h2 class="wp-block-heading">General Settings</h2>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th>Parameter</th><th>Value</th></tr></thead><tbody><tr><td><strong>Cluster Name</strong></td><td>my-cluster-db-v5</td></tr><tr><td><strong>Root Password</strong></td><td>Welcome1</td></tr><tr><td><strong>Sandbox Directory</strong></td><td>C:\Users\dbabrabo-666\MySQL\mysql-sandboxes</td></tr><tr><td><strong>Replication User</strong></td><td>repl</td></tr><tr><td><strong>Replication Password</strong></td><td>Welcome1</td></tr><tr><td><strong>Cluster Mode</strong></td><td>Single-Primary</td></tr></tbody></table></figure>



<h2 class="wp-block-heading">Primary Instances</h2>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th>Port</th><th>Type</th><th>Weight</th><th>Priority</th><th>Status</th></tr></thead><tbody><tr><td>3307</td><td>Primary</td><td>100</td><td>High</td><td>Master</td></tr><tr><td>3310</td><td>Secondary</td><td>60</td><td>Medium-High</td><td>Slave</td></tr><tr><td>3320</td><td>Secondary</td><td>40</td><td>Medium</td><td>Slave</td></tr><tr><td>3330</td><td>Secondary</td><td>20</td><td>Low</td><td>Slave</td></tr></tbody></table></figure>



<h2 class="wp-block-heading">Read Replicas (1:1 Mapping)</h2>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th>Replica Port</th><th>Source Port</th><th>Label</th><th>Role</th></tr></thead><tbody><tr><td>3340</td><td>3307</td><td>Replica_Primary_3307</td><td>Read replica of the Master</td></tr><tr><td>3350</td><td>3310</td><td>Replica_Secondary_3310</td><td>Read replica of the Secondary</td></tr><tr><td>3360</td><td>3320</td><td>Replica_Tertiary_3320</td><td>Read replica of the Tertiary</td></tr><tr><td>3370</td><td>3330</td><td>Replica_Quaternary_3330</td><td>Read replica of the Quaternary</td></tr></tbody></table></figure>



<h2 class="wp-block-heading">Configuration Timeouts</h2>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th>Operation</th><th>Time (seconds)</th><th>Description</th></tr></thead><tbody><tr><td><strong>Cluster Creation</strong></td><td>30</td><td>Time for initial stabilization</td></tr><tr><td><strong>Instance Addition</strong></td><td>15</td><td>Timeout for adding an instance</td></tr><tr><td><strong>Stabilization</strong></td><td>10</td><td>Wait between operations</td></tr><tr><td><strong>Recovery</strong></td><td>5</td><td>Time for recovery operations</td></tr></tbody></table></figure>



<h2 class="wp-block-heading">Ports Used</h2>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th>Range</th><th>Ports</th><th>Count</th><th>Use</th></tr></thead><tbody><tr><td><strong>3307-3330</strong></td><td>3307, 3310, 3320, 3330</td><td>4</td><td>Primary instances</td></tr><tr><td><strong>3340-3370</strong></td><td>3340, 3350, 3360, 3370</td><td>4</td><td>Read replicas</td></tr><tr><td><strong>Total</strong></td><td>8 ports</td><td>8</td><td>All instances</td></tr></tbody></table></figure>



<h2 class="wp-block-heading">Security Settings</h2>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th>Parameter</th><th>Value</th><th>Description</th></tr></thead><tbody><tr><td><strong>Recovery Method</strong></td><td>clone</td><td>Synchronization method</td></tr><tr><td><strong>Force on Creation</strong></td><td>true</td><td>Forces creation even with conflicts</td></tr><tr><td><strong>Force on Dissolve</strong></td><td>true</td><td>Forces removal on error</td></tr><tr><td><strong>Automatic Restart</strong></td><td>false</td><td>Does not restart automatically</td></tr></tbody></table></figure>



<h2 class="wp-block-heading">Monitoring Features</h2>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th>Feature</th><th>Description</th></tr></thead><tbody><tr><td><strong>Log File</strong></td><td>C:\temp\cluster_setup.log</td></tr><tr><td><strong>Log Level</strong></td><td>8 (full debug)</td></tr><tr><td><strong>Extended Status</strong></td><td>Detailed cluster information</td></tr><tr><td><strong>Health Check</strong></td><td>Automatic health verification</td></tr><tr><td><strong>Connectivity Test</strong></td><td>Per-port connectivity test</td></tr></tbody></table></figure>



<p><strong>Execution Process:</strong></p>



<ol class="wp-block-list">
<li><strong>Full cleanup</strong> &#8211; Removes existing clusters and instances</li>



<li><strong>Primary creation</strong> &#8211; Deploys 4 MySQL sandbox instances</li>



<li><strong>Configuration</strong> &#8211; Prepares instances for clustering</li>



<li><strong>Cluster creation</strong> &#8211; Establishes the InnoDB cluster in single-primary mode</li>



<li><strong>Secondary addition</strong> &#8211; Adds the 3 remaining instances to the cluster</li>



<li><strong>Weight configuration</strong> &#8211; Sets priorities (3307=100, 3310=60, 3320=40, 3330=20)</li>



<li><strong>Replica configuration</strong> &#8211; Creates read replicas with asynchronous replication</li>



<li><strong>Final verification</strong> &#8211; Tests connectivity and shows the full status</li>
</ol>



<p><strong>Safety Features:</strong></p>



<ul class="wp-block-list">
<li>Robust error handling with automatic cleanup</li>



<li>Instance health checks</li>



<li>Retry system for connections</li>



<li>Emergency cleanup on critical failure</li>



<li>Detailed logs with visual status codes</li>
</ul>



<p>The script runs via MySQL Shell and builds a complete high-availability MySQL infrastructure on Windows using the sandbox; don't go running this in PRD.</p>



<p class="has-large-font-size">Windows version of the script:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: jscript; title: ; notranslate">
// ============================================================================================
// MYSQL INNODB CLUSTER - PRODUCTION READY SETUP
// 4-NODE CLUSTER WITH 1:1 READ REPLICAS
// COMPLETE CLEANUP + VERIFICATION + ERROR HANDLING
// mysqlsh --uri root:Welcome1@localhost:3306 --file=&quot;C:\Users\dbabrabo-666\Documents\ACACIOLR-DBA\mysql_full_innodb_cluster_w_replica_setup_mb_v5.js&quot; --log-level=8 --log-file=&quot;C:\temp\cluster_setup.log&quot;
// \source C:\Users\dbabrabo-666\Documents\ACACIOLR-DBA\mysql_full_innodb_cluster_w_replica_setup_mb_v5.js
// mysqlsh --uri root@localhost:3307 --execute=&quot;$(cat C:\Users\dbabrabo-666\Documents\ACACIOLR-DBA\mysql_full_innodb_cluster_w_replica_setup_mb_v5.js)&quot;
// =================================================
// LOG MONITORING COMMANDS (POWERSHELL)
// =================================================
/*
// 1. Continuous monitoring with error highlighting
Get-Content -Path &quot;C:\temp\cluster_setup.log&quot; -Wait | 
    ForEach-Object {
        if ($_ -match &quot;ERROR|FAIL|❌&quot;) { Write-Host $_ -ForegroundColor Red }
        elseif ($_ -match &quot;WARN|⚠️&quot;) { Write-Host $_ -ForegroundColor Yellow }
        else { Write-Host $_ }
    }
*/
/*
// 2. Efficient monitoring for large logs
$file = &quot;C:\temp\cluster_setup.log&quot;
$reader = &#x5B;System.IO.File]::OpenText($file)
$reader.BaseStream.Seek(0, &#x5B;System.IO.SeekOrigin]::End) | Out-Null
while ($true) {
    if ($reader.BaseStream.Length -gt $reader.BaseStream.Position) {
        $line = $reader.ReadLine()
        Write-Output $line
    }
    Start-Sleep -Milliseconds 200
}
*/
/*
// 3. View the last 10 lines of the log
Get-Content -Path &quot;C:\temp\cluster_setup.log&quot; -Tail 10
*/
// Configuration Constants
const CONFIG = {
  ports: &#x5B;3307, 3310, 3320, 3330, 3340, 3350, 3360, 3370],
  primaryPorts: &#x5B;3307, 3310, 3320, 3330],
  replicaPorts: &#x5B;3340, 3350, 3360, 3370],
  password: 'Welcome1',
  clusterName: 'my-cluster-db-v5',
  sandboxPath: 'C:\\Users\\dbabrabo-666\\MySQL\\mysql-sandboxes',
  replicationUser: {
    username: 'repl',
    password: 'Welcome1'
  },
  weights: {
    3307: 100,
    3310: 60,
    3320: 40,
    3330: 20
  },
  timeouts: {
    clusterCreation: 30,
    instanceAdd: 15,
    stabilization: 10,
    recovery: 5
  }
};
const firstPrimaryPort = CONFIG.primaryPorts&#x5B;0];
// Replica mapping (1:1 relationship)
const REPLICA_MAPPING = &#x5B;
  { port: 3340, source: 3307, label: 'Replica_Primary_3307' },
  { port: 3350, source: 3310, label: 'Replica_Secondary_3310' },
  { port: 3360, source: 3320, label: 'Replica_Tertiary_3320' },
  { port: 3370, source: 3330, label: 'Replica_Quaternary_3330' }
];
// Utility Functions
function printPhase(phase, description) {
  const separator = '='.repeat(80);
  print(`\n${separator}`);
  print(`PHASE ${phase}: ${description.toUpperCase()}`);
  print(`${separator}`);
}
function printSuccess(message) {
  print(`✅ ${message}`);
}
function printWarning(message) {
  print(`⚠️  ${message}`);
}
function printError(message) {
  print(`❌ ${message}`);
}
function printInfo(message) {
  print(`ℹ️  ${message}`);
}
function sleep(seconds) {
  print(`⏳ Waiting ${seconds} seconds...`);
  os.sleep(seconds);
}
function waitForInstanceReady(port, maxRetries = 10) {
  let retries = 0;
  while (retries &lt; maxRetries) {
    try {
      const testSession = mysql.getSession(`root:${CONFIG.password}@localhost:${port}`);
      testSession.runSql(&quot;SELECT 1&quot;);
      testSession.close();
      return true;
    } catch (e) {
      retries++;
      print(`   Attempt ${retries}/${maxRetries} - waiting for instance ${port}...`);
      sleep(2);
    }
  }
  return false;
}
function checkClusterHealth(cluster) {
  try {
    const status = cluster.status();
    const healthy = status.defaultReplicaSet.status === 'OK';
    printInfo(`Cluster status: ${status.defaultReplicaSet.status}`);
    return healthy;
  } catch (e) {
    printWarning(`Error while checking cluster health: ${e.message}`);
    return false;
  }
}
function safeKillSandbox(port) {
  try {
    dba.killSandboxInstance(port);
    printInfo(`Instance ${port} stopped`);
  } catch (e) {
    // Ignore the usual errors for a sandbox that does not exist
    if (e.message.includes(&quot;Unable to find pid file&quot;) || 
        e.message.includes(&quot;does not exist&quot;) ||
        e.message.includes(&quot;not found&quot;)) {
      printWarning(`Instance ${port} was not running or does not exist`);
    } else {
      printWarning(`Error stopping ${port}: ${e.message}`);
    }
  }
}
function safeDeleteSandbox(port) {
  try {
    dba.deleteSandboxInstance(port);
    printInfo(`Instance ${port} removed`);
  } catch (e) {
    // Ignore the usual errors for a sandbox that does not exist
    if (e.message.includes(&quot;does not exist&quot;) || 
        e.message.includes(&quot;not found&quot;)) {
      printWarning(`Instance ${port} does not exist, nothing to remove`);
    } else {
      printWarning(`Error removing ${port}: ${e.message}`);
    }
  }
}
function safeCleanDirectories() {
  try {
    // Use a simple, safe command for Windows
    const command = `if exist &quot;${CONFIG.sandboxPath}&quot; rmdir /s /q &quot;${CONFIG.sandboxPath}&quot;`;
    // Try to run the command safely
    print(`Executing: ${command}`);
    // shell.runCmd can be unreliable here, so we just print the command
    printInfo(&quot;Cleanup command prepared - run it manually if needed&quot;);
    printSuccess(&quot;Directory cleanup preparation complete&quot;);
  } catch (e) {
    printWarning(`Could not clean directories automatically: ${e.message}`);
    printInfo(`Run manually: rmdir /s /q &quot;${CONFIG.sandboxPath}&quot;`);
  }
}
// Main execution wrapped in try-catch
try {
  
  // ==============================================
  // PHASE 0: COMPREHENSIVE CLEANUP
  // ==============================================
  printPhase(0, &quot;FULL ENVIRONMENT CLEANUP&quot;);
  
  try {
    // Dissolve existing cluster
    try {
      printInfo(&quot;Checking for an existing cluster...&quot;);
      const existingCluster = dba.getCluster();
      if (existingCluster) {
        printInfo(&quot;Dissolving the existing cluster...&quot;);
        existingCluster.dissolve({ force: true });
        printSuccess(&quot;Existing cluster dissolved successfully&quot;);
        sleep(3);
      }
    } catch (e) {
      printWarning(`No active cluster found: ${e.message}`);
    }
    
    // Kill and delete all sandbox instances with safe methods
    printInfo(&quot;Removing all sandbox instances...&quot;);
    CONFIG.ports.forEach(port =&gt; {
      safeKillSandbox(port);
      safeDeleteSandbox(port);
    });
    
    // Clean sandbox directories safely
    safeCleanDirectories();
    
    sleep(CONFIG.timeouts.recovery);
    printSuccess(&quot;CLEANUP COMPLETE&quot;);
    
  } catch (cleanupErr) {
    printError(`Error during cleanup: ${cleanupErr.message}`);
    // Do not abort here; continue with creation
  }
  
  // ==============================================
  // PHASE 1: DEPLOY PRIMARY INSTANCES
  // ==============================================
  printPhase(1, &quot;CREATING THE PRIMARY INSTANCES&quot;);
  
  CONFIG.primaryPorts.forEach((port, index) =&gt; {
    try {
      printInfo(`Creating primary instance ${port}...`);
      
      // Simplified configuration without problematic parameters
      dba.deploySandboxInstance(port, { 
        password: CONFIG.password,
        sandboxDir: CONFIG.sandboxPath
      });
      
      // Wait for instance to be ready
      if (waitForInstanceReady(port)) {
        printSuccess(`Primary instance ${port} created and ready (${index + 1}/${CONFIG.primaryPorts.length})`);
      } else {
        throw new Error(`Instance ${port} did not become ready in time`);
      }
      
      sleep(2);
    } catch (e) {
      if (e.message.includes(&quot;already exists&quot;)) {
        printWarning(`Instance ${port} already exists`);
      } else {
        printError(`Error creating instance ${port}: ${e.message}`);
        throw e;
      }
    }
  });
  
  // ==============================================
  // PHASE 2: CONFIGURE PRIMARY INSTANCES
  // ==============================================
  printPhase(2, &quot;CONFIGURING THE PRIMARY INSTANCES&quot;);
  
  CONFIG.primaryPorts.forEach((port, index) =&gt; {
    try {
      printInfo(`Configuring instance ${port} for clustering...`);
      
      // Simplified configuration
      dba.configureInstance(`root:${CONFIG.password}@localhost:${port}`, { 
        clusterAdmin: 'root',
        restart: false
      });
      
      printSuccess(`Instance ${port} configured (${index + 1}/${CONFIG.primaryPorts.length})`);
      sleep(1);
    } catch (e) {
      printError(`Error configuring instance ${port}: ${e.message}`);
      throw e;
    }
  });
  
  // ==============================================
  // PHASE 3: CLUSTER CREATION (SIMPLIFICADO)
  // ==============================================
  printPhase(3, &quot;CREATING THE INNODB CLUSTER&quot;);
  
  let cluster;
  try {
    printInfo(`Connecting to the primary instance (${firstPrimaryPort})...`);
    shell.connect(`root:${CONFIG.password}@localhost:${firstPrimaryPort}`);
    printSuccess(&quot;Connected to the primary instance&quot;);
    
    try {
      printInfo(`Checking whether cluster '${CONFIG.clusterName}' already exists...`);
      cluster = dba.getCluster(CONFIG.clusterName);
      printSuccess(`Existing cluster '${CONFIG.clusterName}' loaded`);
    } catch {
      printInfo(`Creating new cluster '${CONFIG.clusterName}'...`);
      
      // Basic, reliable configuration for cluster creation
      cluster = dba.createCluster(CONFIG.clusterName, {
        multiPrimary: false,
        force: true
      });
      
      printSuccess(`Cluster '${CONFIG.clusterName}' created successfully`);
      printInfo(`Waiting for the primary cluster to stabilize...`);
      sleep(CONFIG.timeouts.clusterCreation);
      
      // Check whether the cluster is healthy
      if (checkClusterHealth(cluster)) {
        printSuccess(&quot;Primary cluster is working correctly&quot;);
      } else {
        printWarning(&quot;Primary cluster may not be fully stable yet&quot;);
      }
    }
    
  } catch (e) {
    printError(`Error creating/loading the cluster: ${e.message}`);
    throw e;
  }
  
  // ==============================================
  // PHASE 4: ADD SECONDARY INSTANCES TO CLUSTER
  // ==============================================
  printPhase(4, &quot;ADDING THE SECONDARY INSTANCES&quot;);
  
  const secondaryPorts = CONFIG.primaryPorts.slice(1); // Remove first primary port
  
  secondaryPorts.forEach((port, index) =&gt; {
    try {
      printInfo(`Adding instance ${port} to the cluster...`);
      
      // Simplified configuration
      cluster.addInstance(`root:${CONFIG.password}@localhost:${port}`, {
        recoveryMethod: 'clone',
        waitRecovery: 2
      });
      
      printSuccess(`Instance ${port} added to the cluster (${index + 1}/${secondaryPorts.length})`);
      
      // Check that the instance was added correctly
      sleep(3);
      try {
        const status = cluster.status();
        const instanceStatus = status.defaultReplicaSet.topology&#x5B;`127.0.0.1:${port}`];
        if (instanceStatus &amp;&amp; instanceStatus.status === 'ONLINE') {
          printSuccess(`Instance ${port} is ONLINE in the cluster`);
        } else {
          printWarning(`Instance ${port} may not be fully synchronized yet`);
        }
      } catch (statusErr) {
        printWarning(`Error checking status of instance ${port}: ${statusErr.message}`);
      }
      
    } catch (e) {
      printError(`Error adding instance ${port}: ${e.message}`);
      // Do not throw, so the remaining instances can still be processed
      printWarning(`Moving on to the next instances...`);
    }
  });
  
  printInfo(&quot;Waiting for the cluster to fully synchronize...&quot;);
  sleep(CONFIG.timeouts.stabilization);
  
  // ==============================================
  // PHASE 5: CONFIGURE INSTANCE WEIGHTS
  // ==============================================
  printPhase(5, &quot;CONFIGURING INSTANCE WEIGHTS&quot;);
  
  try {
    Object.entries(CONFIG.weights).forEach((&#x5B;port, weight]) =&gt; {
      try {
        cluster.setInstanceOption(`127.0.0.1:${port}`, 'memberWeight', weight);
        printSuccess(`Weight ${weight} set for instance ${port}`);
      } catch (e) {
        printWarning(`Error setting weight for ${port}: ${e.message}`);
      }
    });
    printSuccess(&quot;Weight configuration complete&quot;);
  } catch (e) {
    printWarning(`General error while configuring weights: ${e.message}`);
  }
  
  // ==============================================
  // PHASE 6: DEPLOY AND CONFIGURE READ REPLICAS
  // ==============================================
  printPhase(6, &quot;CONFIGURAÇÃO DAS RÉPLICAS DE LEITURA&quot;);
  
  REPLICA_MAPPING.forEach((replica, index) =&gt; {
    try {
      printInfo(`Processando réplica ${replica.port} para fonte ${replica.source}...`);
      
      // Deploy replica instance with simplified config
      printInfo(`- Criando instância réplica ${replica.port}...`);
      dba.deploySandboxInstance(replica.port, { 
        password: CONFIG.password,
        sandboxDir: CONFIG.sandboxPath
      });
      
      // Wait for replica to be ready
      if (!waitForInstanceReady(replica.port)) {
        throw new Error(`Réplica ${replica.port} não ficou pronta`);
      }
      
      // Configure replica instance
      printInfo(`- Configurando instância réplica ${replica.port}...`);
      dba.configureInstance(`root:${CONFIG.password}@localhost:${replica.port}`, { 
        clusterAdmin: 'root',
        restart: false
      });
      
      // Create replication user on source
      printInfo(`- Criando usuário de replicação na fonte ${replica.source}...`);
      const sourceSession = mysql.getSession(`root:${CONFIG.password}@localhost:${replica.source}`);
      sourceSession.runSql(`CREATE USER IF NOT EXISTS '${CONFIG.replicationUser.username}'@'%' IDENTIFIED BY '${CONFIG.replicationUser.password}'`);
      sourceSession.runSql(`GRANT REPLICATION SLAVE ON *.* TO '${CONFIG.replicationUser.username}'@'%'`);
      sourceSession.runSql(`GRANT BACKUP_ADMIN ON *.* TO '${CONFIG.replicationUser.username}'@'%'`);
      sourceSession.runSql(&quot;FLUSH PRIVILEGES&quot;);
      sourceSession.close();
      
      sleep(3);
      
      // Add as read replica to cluster with simplified config
      printInfo(`- Adicionando ${replica.port} como réplica de leitura...`);
      cluster.addReplicaInstance(`root:${CONFIG.password}@localhost:${replica.port}`, {
        label: replica.label,
        recoveryMethod: 'clone'
      });
      
      printSuccess(`Réplica ${replica.port} configurada para fonte ${replica.source} (${index + 1}/${REPLICA_MAPPING.length})`);
      sleep(CONFIG.timeouts.recovery);
      
    } catch (e) {
      printError(`Erro na configuração da réplica ${replica.port}: ${e.message}`);
      
      // Cleanup failed replica
      try {
        cluster.removeInstance(`root@localhost:${replica.port}`, { force: true });
        safeKillSandbox(replica.port);
        safeDeleteSandbox(replica.port);
        printInfo(`Limpeza da réplica ${replica.port} concluída`);
      } catch (cleanupErr) {
        printWarning(`Erro na limpeza da réplica ${replica.port}: ${cleanupErr.message}`);
      }
    }
  });
  
  // ==============================================
  // PHASE 7: FINAL VERIFICATION AND STATUS
  // ==============================================
  printPhase(7, &quot;VERIFICAÇÃO FINAL E STATUS&quot;);
  
  try {
    printInfo(&quot;Aguardando estabilização final...&quot;);
    sleep(CONFIG.timeouts.stabilization);
    
    // Detailed cluster status
    print(&quot;\n📊 STATUS COMPLETO DO CLUSTER:&quot;);
    print(&quot;=&quot; + &quot;=&quot;.repeat(70));
    
    try {
      const clusterStatus = cluster.status({extended: true});
      print(JSON.stringify(clusterStatus, null, 2));
      
      // Analyze the reported status
      const defaultReplicaSet = clusterStatus.defaultReplicaSet;
      print(`\n🎯 ANÁLISE DO STATUS:`);
      print(`• Status Geral: ${defaultReplicaSet.status}`);
      print(`• Modo: ${defaultReplicaSet.mode || 'Single-Primary'}`);
      print(`• SSL Mode: ${defaultReplicaSet.ssl || 'N/A'}`);
      
      // Count instances per status
      const topology = defaultReplicaSet.topology;
      const statusCount = {};
      Object.values(topology).forEach(instance =&gt; {
        const status = instance.status;
        statusCount&#x5B;status] = (statusCount&#x5B;status] || 0) + 1;
      });
      
      print(`\n📊 RESUMO POR STATUS:`);
      Object.entries(statusCount).forEach((&#x5B;status, count]) =&gt; {
        print(`• ${status}: ${count} instância(s)`);
      });
      
    } catch (e) {
      printError(`Erro ao obter status do cluster: ${e.message}`);
    }
    
    // TESTE DE CONECTIVIDADE
    print(&quot;\n🔗 TESTE DE CONECTIVIDADE:&quot;);
    print(&quot;=&quot; + &quot;=&quot;.repeat(70));
    CONFIG.primaryPorts.forEach(port =&gt; {
      try {
        const testSession = mysql.getSession(`root:${CONFIG.password}@localhost:${port}`);
        const result = testSession.runSql(&quot;SELECT @@hostname, @@port, @@server_id&quot;);
        const row = result.fetchOne();
        printSuccess(`Porta ${port}: Conectividade OK - Server ID: ${row&#x5B;2]}`);
        testSession.close();
      } catch (e) {
        printError(`Porta ${port}: Erro de conectividade - ${e.message}`);
      }
    });
    
  } catch (e) {
    printWarning(`Erro na verificação final: ${e.message}`);
  }
  
  // ==============================================
  // FINAL SUMMARY
  // ==============================================
  print(&quot;\n&quot; + &quot;🎉&quot;.repeat(80));
  print(&quot;CONFIGURAÇÃO CONCLUÍDA COM SUCESSO!&quot;);
  print(&quot;🎉&quot;.repeat(80));
  
  print(&quot;\n📋 RESUMO DA CONFIGURAÇÃO:&quot;);
  print(&quot;-&quot;.repeat(70));
  print(`• Cluster Name: ${CONFIG.clusterName}`);
  print(`• Instâncias Primárias: ${CONFIG.primaryPorts.length} (${CONFIG.primaryPorts.join(', ')})`);
  print(`• Réplicas de Leitura: ${REPLICA_MAPPING.length} (${REPLICA_MAPPING.map(r =&gt; r.port).join(', ')})`);
  print(`• Total de Instâncias: ${CONFIG.ports.length}`);
  print(`• Arquitetura: 4-Node Primary + 4 Read Replicas (1:1)`);
  
  print(&quot;\n🔗 MAPEAMENTO DE RÉPLICAS:&quot;);
  print(&quot;-&quot;.repeat(70));
  REPLICA_MAPPING.forEach(replica =&gt; {
    print(`• ${replica.source} → ${replica.port} (${replica.label})`);
  });
  
  print(&quot;\n⚖️  PESOS CONFIGURADOS:&quot;);
  print(&quot;-&quot;.repeat(70));
  Object.entries(CONFIG.weights).forEach((&#x5B;port, weight]) =&gt; {
    print(`• Porta ${port}: Peso ${weight}`);
  });
  
  print(&quot;\n🚀 PRÓXIMOS PASSOS:&quot;);
  print(&quot;-&quot;.repeat(70));
  print(&quot;• Configurar MySQL Router para balanceamento de carga&quot;);
  print(&quot;• Implementar monitoramento e alertas&quot;);
  print(&quot;• Configurar backups automatizados&quot;);
  print(&quot;• Testar failover e recuperação&quot;);
  print(&quot;• Ajustar configurações de performance conforme necessário&quot;);
  
  print(&quot;\n💡 COMANDOS ÚTEIS:&quot;);
  print(&quot;-&quot;.repeat(70));
  print(&quot;• Status do cluster: cluster.status({extended: true})&quot;);
  print(&quot;• Conectar ao cluster: shell.connect('root@localhost:3307')&quot;);
  print(`• Obter cluster: dba.getCluster('${CONFIG.clusterName}')`);
  print(&quot;• Rescan do cluster: cluster.rescan()&quot;);
  
  printSuccess(&quot;Script executado com sucesso!&quot;);
} catch (mainErr) {
  // ==============================================
  // EMERGENCY ERROR HANDLING
  // ==============================================
  print(&quot;\n&quot; + &quot;🚨&quot;.repeat(80));
  print(&quot;ERRO CRÍTICO DETECTADO - INICIANDO LIMPEZA DE EMERGÊNCIA&quot;);
  print(&quot;🚨&quot;.repeat(80));
  
  printError(`ERRO PRINCIPAL: ${mainErr.message}`);
  printError(`STACK TRACE: ${mainErr.stack || 'N/A'}`);
  
  printInfo(&quot;Executando limpeza de emergência...&quot;);
  
  try {
    // Emergency cluster dissolution
    try {
      const emergencyCluster = dba.getCluster();
      if (emergencyCluster) {
        emergencyCluster.dissolve({ force: true });
        printInfo(&quot;Cluster dissolvido durante limpeza de emergência&quot;);
      }
    } catch (e) {
      printWarning(`Erro ao dissolver cluster: ${e.message}`);
    }
    
    // Kill and delete all sandbox instances safely
    printInfo(&quot;Removendo todas as instâncias sandbox...&quot;);
    CONFIG.ports.forEach(port =&gt; {
      safeKillSandbox(port);
      safeDeleteSandbox(port);
    });
    
    // Safe directory cleanup
    safeCleanDirectories();
    
    printSuccess(&quot;Limpeza de emergência concluída&quot;);
    
  } catch (emergencyErr) {
    printError(`Erro durante limpeza de emergência: ${emergencyErr.message}`);
  }
  
  print(&quot;\n💡 SUGESTÕES PARA RESOLUÇÃO:&quot;);
  print(&quot;-&quot;.repeat(70));
  print(&quot;• Verifique se as portas estão disponíveis: netstat -an | findstr :330&quot;);
  print(&quot;• Confirme se o MySQL Shell tem permissões adequadas&quot;);
  print(&quot;• Verifique a conectividade de rede&quot;);
  print(&quot;• Analise os logs do MySQL para erros específicos&quot;);
  print(&quot;• Execute o script novamente após corrigir os problemas&quot;);
  print(&quot;• Verifique se há processos MySQL em execução: tasklist | findstr mysql&quot;);
  print(`• Limpe manualmente o diretório: rmdir /s /q &quot;${CONFIG.sandboxPath}&quot;`);
  
  // Re-throw the error for debugging
  throw mainErr;
}
</pre></div>
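<p>The per-status tally that Phase 7 computes inline can be isolated into a small pure-JavaScript helper, which is handy for testing the analysis logic outside of mysqlsh. This is an illustrative sketch: <code>summarizeTopology</code> and the sample payload below are hypothetical, not part of the script above; the helper only assumes the object shape that <code>cluster.status()</code> returns.</p>

```javascript
// Hypothetical helper (not part of the original script): counts instances
// per state in a cluster.status() payload, mirroring the Phase 7 analysis.
function summarizeTopology(clusterStatus) {
  const topology = clusterStatus.defaultReplicaSet.topology;
  const counts = {};
  for (const instance of Object.values(topology)) {
    counts[instance.status] = (counts[instance.status] || 0) + 1;
  }
  return counts;
}

// Minimal, made-up status payload for demonstration:
const sample = {
  defaultReplicaSet: {
    status: "OK",
    topology: {
      "127.0.0.1:3307": { status: "ONLINE", memberRole: "PRIMARY" },
      "127.0.0.1:3310": { status: "ONLINE", memberRole: "SECONDARY" },
      "127.0.0.1:3340": { status: "ONLINE", memberRole: "READ_REPLICA" },
      "127.0.0.1:3350": { status: "RECOVERING", memberRole: "READ_REPLICA" }
    }
  }
};

console.log(summarizeTopology(sample)); // { ONLINE: 3, RECOVERING: 1 }
```

<p>Inside mysqlsh the same function can be fed <code>cluster.status({extended: true})</code> directly, since the AdminAPI returns a plain object.</p>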


<p class="has-large-font-size"><strong>LOG &#8211; Output:</strong></p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
PS C:\Users\dbabrabo-666&gt; mysqlsh --uri root:Welcome1@localhost:3306 --file=&quot;C:\Users\dbabrabo-666\Documents\ACACIOLR-DBA\mysql_full_innodb_cluster_w_replica_setup_mb_v4.js&quot; --log-level=8 --log-file=&quot;C:\temp\cluster_setup.log&quot;
WARNING: Using a password on the command line interface can be insecure.
================================================================================
PHASE 0: LIMPEZA COMPLETA DO AMBIENTE
================================================================================
ℹ️  Verificando cluster existente...
⚠️  Nenhum cluster ativo encontrado: This function is not available through a session to a standalone instance (metadata exists, instance belongs to that metadata, but GR is not active)
ℹ️  Removendo todas as instâncias sandbox...
Killing MySQL instance...
⚠️  Instância 3307 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3307 não existe para remoção
Killing MySQL instance...
⚠️  Instância 3310 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3310 não existe para remoção
Killing MySQL instance...
⚠️  Instância 3320 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3320 não existe para remoção
Killing MySQL instance...
⚠️  Instância 3330 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3330 não existe para remoção
Killing MySQL instance...
⚠️  Instância 3340 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3340 não existe para remoção
Killing MySQL instance...
⚠️  Instância 3350 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3350 não existe para remoção
Killing MySQL instance...
⚠️  Instância 3360 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3360 não existe para remoção
Killing MySQL instance...
⚠️  Instância 3370 não estava ativa ou não existe
Deleting MySQL instance...
⚠️  Instância 3370 não existe para remoção
Executando: if exist &quot;C:\Users\dbabrabo-666\MySQL\mysql-sandboxes&quot; rmdir /s /q &quot;C:\Users\dbabrabo-666\MySQL\mysql-sandboxes&quot;
ℹ️  Comando de limpeza preparado - execute manualmente se necessário
✅ Preparação de limpeza de diretórios concluída
⏳ Aguardando 5 segundos...
✅ LIMPEZA CONCLUÍDA
================================================================================
PHASE 1: CRIAÇÃO DAS INSTÂNCIAS PRIMÁRIAS
================================================================================
ℹ️  Criando instância primária 3307...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3307
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3307 successfully deployed and started.
Use shell.connect('root@localhost:3307') to connect to the instance.
✅ Instância primária 3307 criada e pronta (1/4)
⏳ Aguardando 2 segundos...
ℹ️  Criando instância primária 3310...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3310
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3310 successfully deployed and started.
Use shell.connect('root@localhost:3310') to connect to the instance.
✅ Instância primária 3310 criada e pronta (2/4)
⏳ Aguardando 2 segundos...
ℹ️  Criando instância primária 3320...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3320
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3320 successfully deployed and started.
Use shell.connect('root@localhost:3320') to connect to the instance.
✅ Instância primária 3320 criada e pronta (3/4)
⏳ Aguardando 2 segundos...
ℹ️  Criando instância primária 3330...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3330
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3330 successfully deployed and started.
Use shell.connect('root@localhost:3330') to connect to the instance.
✅ Instância primária 3330 criada e pronta (4/4)
⏳ Aguardando 2 segundos...
================================================================================
PHASE 2: CONFIGURAÇÃO DAS INSTÂNCIAS PRIMÁRIAS
================================================================================
ℹ️  Configurando instância 3307 para clustering...
Configuring local MySQL instance listening at port 3307 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3307
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3307' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
✅ Instância 3307 configurada (1/4)
⏳ Aguardando 1 segundos...
ℹ️  Configurando instância 3310 para clustering...
Configuring local MySQL instance listening at port 3310 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3310
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3310' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
✅ Instância 3310 configurada (2/4)
⏳ Aguardando 1 segundos...
ℹ️  Configurando instância 3320 para clustering...
Configuring local MySQL instance listening at port 3320 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3320
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3320' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
✅ Instância 3320 configurada (3/4)
⏳ Aguardando 1 segundos...
ℹ️  Configurando instância 3330 para clustering...
Configuring local MySQL instance listening at port 3330 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3330
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3330' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
✅ Instância 3330 configurada (4/4)
⏳ Aguardando 1 segundos...
================================================================================
PHASE 3: CRIAÇÃO DO CLUSTER INNODB
================================================================================
ℹ️  Conectando à instância primária (3307)...
✅ Conectado à instância primária
ℹ️  Verificando se cluster 'my-cluster-db-v5' já existe...
ERROR: Command not available on an unmanaged standalone instance.
ℹ️  Criando novo cluster 'my-cluster-db-v5'...
A new InnoDB Cluster will be created on instance '127.0.0.1:3307'.
Validating instance configuration at localhost:3307...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3307
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3307'. Use the localAddress option to override.
* Checking connectivity and SSL configuration...
Creating InnoDB Cluster 'my-cluster-db-v5' on '127.0.0.1:3307'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
✅ Cluster 'my-cluster-db-v5' criado com sucesso
ℹ️  Aguardando estabilização do cluster primário...
⏳ Aguardando 30 segundos...
ℹ️  Status do cluster: OK_NO_TOLERANCE
⚠️  Cluster primário pode não estar completamente estável
================================================================================
PHASE 4: ADIÇÃO DAS INSTÂNCIAS SECUNDÁRIAS
================================================================================
ℹ️  Adicionando instância 3310 ao cluster...
❌ Erro ao adicionar instância 3310: Argument #2: Invalid options: waitRecovery
⚠️  Continuando com as próximas instâncias...
ℹ️  Adicionando instância 3320 ao cluster...
❌ Erro ao adicionar instância 3320: Argument #2: Invalid options: waitRecovery
⚠️  Continuando com as próximas instâncias...
ℹ️  Adicionando instância 3330 ao cluster...
❌ Erro ao adicionar instância 3330: Argument #2: Invalid options: waitRecovery
⚠️  Continuando com as próximas instâncias...
ℹ️  Aguardando sincronização completa do cluster...
⏳ Aguardando 10 segundos...
================================================================================
PHASE 5: CONFIGURAÇÃO DE PESOS DAS INSTÂNCIAS
================================================================================
Setting the value of 'memberWeight' to '100' in the instance: '127.0.0.1:3307' ...
Successfully set the value of 'memberWeight' to '100' in the cluster member: '127.0.0.1:3307'.
✅ Peso 100 configurado para instância 3307
⚠️  Erro ao configurar peso para 3310: The instance '127.0.0.1:3310' does not belong to the cluster.
⚠️  Erro ao configurar peso para 3320: The instance '127.0.0.1:3320' does not belong to the cluster.
⚠️  Erro ao configurar peso para 3330: The instance '127.0.0.1:3330' does not belong to the cluster.
✅ Configuração de pesos concluída
================================================================================
PHASE 6: CONFIGURAÇÃO DAS RÉPLICAS DE LEITURA
================================================================================
ℹ️  Processando réplica 3340 para fonte 3307...
ℹ️  - Criando instância réplica 3340...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3340
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3340 successfully deployed and started.
Use shell.connect('root@localhost:3340') to connect to the instance.
ℹ️  - Configurando instância réplica 3340...
Configuring local MySQL instance listening at port 3340 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3340
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3340' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
ℹ️  - Criando usuário de replicação na fonte 3307...
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3340 como réplica de leitura...
Setting up '127.0.0.1:3340' as a Read Replica of Cluster 'my-cluster-db-v5'.
Validating instance configuration at localhost:3340...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3340
Instance configuration is suitable.
* Checking transaction state of the instance...
NOTE: The target instance '127.0.0.1:3340' has not been pre-provisioned (GTID set is empty).
Clone based recovery selected through the recoveryMethod option
* Checking connectivity and SSL configuration...
Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 127.0.0.1:3340 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ============================================================    0%  In Progress
    REDO COPY  ============================================================    0%  Not Started
NOTE: 127.0.0.1:3340 is shutting down...
* Waiting for server restart... ready
* 127.0.0.1:3340 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 74.95 MB transferred in about 1 second (~74.95 MB/s)
* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3340 to 127.0.0.1:3307
* Waiting for Read-Replica '127.0.0.1:3340' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%
'127.0.0.1:3340' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.
✅ Réplica 3340 configurada para fonte 3307 (1/4)
⏳ Aguardando 5 segundos...
ℹ️  Processando réplica 3350 para fonte 3310...
ℹ️  - Criando instância réplica 3350...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3350
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3350 successfully deployed and started.
Use shell.connect('root@localhost:3350') to connect to the instance.
ℹ️  - Configurando instância réplica 3350...
Configuring local MySQL instance listening at port 3350 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3350
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3350' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
ℹ️  - Criando usuário de replicação na fonte 3310...
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3350 como réplica de leitura...
Setting up '127.0.0.1:3350' as a Read Replica of Cluster 'my-cluster-db-v5'.
Validating instance configuration at localhost:3350...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3350
Instance configuration is suitable.
* Checking transaction state of the instance...
NOTE: The target instance '127.0.0.1:3350' has not been pre-provisioned (GTID set is empty).
Clone based recovery selected through the recoveryMethod option
* Checking connectivity and SSL configuration...
Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 127.0.0.1:3350 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
NOTE: 127.0.0.1:3350 is shutting down...
* Waiting for server restart... ready
* 127.0.0.1:3350 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 74.88 MB transferred in about 1 second (~74.88 MB/s)
* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3350 to 127.0.0.1:3307
* Waiting for Read-Replica '127.0.0.1:3350' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%
'127.0.0.1:3350' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.
✅ Réplica 3350 configurada para fonte 3310 (2/4)
⏳ Aguardando 5 segundos...
ℹ️  Processando réplica 3360 para fonte 3320...
ℹ️  - Criando instância réplica 3360...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3360
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3360 successfully deployed and started.
Use shell.connect('root@localhost:3360') to connect to the instance.
ℹ️  - Configurando instância réplica 3360...
Configuring local MySQL instance listening at port 3360 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3360
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3360' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
ℹ️  - Criando usuário de replicação na fonte 3320...
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3360 como réplica de leitura...
Setting up '127.0.0.1:3360' as a Read Replica of Cluster 'my-cluster-db-v5'.
Validating instance configuration at localhost:3360...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3360
Instance configuration is suitable.
* Checking transaction state of the instance...
NOTE: The target instance '127.0.0.1:3360' has not been pre-provisioned (GTID set is empty).
Clone based recovery selected through the recoveryMethod option
* Checking connectivity and SSL configuration...
Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 127.0.0.1:3360 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
NOTE: 127.0.0.1:3360 is shutting down...
* Waiting for server restart... ready
* 127.0.0.1:3360 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 74.89 MB transferred in about 1 second (~74.89 MB/s)
* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3360 to 127.0.0.1:3307
* Waiting for Read-Replica '127.0.0.1:3360' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%
'127.0.0.1:3360' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.
✅ Réplica 3360 configurada para fonte 3320 (3/4)
⏳ Aguardando 5 segundos...
ℹ️  Processando réplica 3370 para fonte 3330...
ℹ️  - Criando instância réplica 3370...
A new MySQL sandbox instance will be created on this host in
C:\Users\dbabrabo-666\MySQL\mysql-sandboxes\3370
Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.
Deploying new MySQL instance...
Instance localhost:3370 successfully deployed and started.
Use shell.connect('root@localhost:3370') to connect to the instance.
ℹ️  - Configurando instância réplica 3370...
Configuring local MySQL instance listening at port 3370 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3370
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:3370' is valid for InnoDB Cluster usage.
Successfully enabled parallel appliers.
ℹ️  - Criando usuário de replicação na fonte 3330...
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3370 como réplica de leitura...
Setting up '127.0.0.1:3370' as a Read Replica of Cluster 'my-cluster-db-v5'.
Validating instance configuration at localhost:3370...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
This instance reports its own address as 127.0.0.1:3370
Instance configuration is suitable.
* Checking transaction state of the instance...
NOTE: The target instance '127.0.0.1:3370' has not been pre-provisioned (GTID set is empty).
Clone based recovery selected through the recoveryMethod option
* Checking connectivity and SSL configuration...
Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 127.0.0.1:3370 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
* Clone process has finished: 74.89 MB transferred in about 1 second (~74.89 MB/s)
* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3370 to 127.0.0.1:3307
* Waiting for Read-Replica '127.0.0.1:3370' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%
'127.0.0.1:3370' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.
✅ Réplica 3370 configurada para fonte 3330 (4/4)
⏳ Aguardando 5 segundos...

================================================================================
PHASE 7: VERIFICAÇÃO FINAL E STATUS
================================================================================

ℹ️  Aguardando estabilização final...
⏳ Aguardando 10 segundos...

📊 STATUS COMPLETO DO CLUSTER:
=======================================================================
{
  &quot;clusterName&quot;: &quot;my-cluster-db-v5&quot;,
  &quot;defaultReplicaSet&quot;: {
    &quot;GRProtocolVersion&quot;: &quot;8.0.27&quot;,
    &quot;communicationStack&quot;: &quot;MYSQL&quot;,
    &quot;groupName&quot;: &quot;48fcdfde-4c7a-11f0-9ee3-18a59cb32d88&quot;,
    &quot;groupViewChangeUuid&quot;: &quot;AUTOMATIC&quot;,
    &quot;groupViewId&quot;: &quot;17502748550958043:1&quot;,
    &quot;name&quot;: &quot;default&quot;,
    &quot;paxosSingleLeader&quot;: &quot;OFF&quot;,
    &quot;primary&quot;: &quot;127.0.0.1:3307&quot;,
    &quot;ssl&quot;: &quot;REQUIRED&quot;,
    &quot;status&quot;: &quot;OK_NO_TOLERANCE&quot;,
    &quot;statusText&quot;: &quot;Cluster is NOT tolerant to any failures.&quot;,
    &quot;topology&quot;: {
      &quot;127.0.0.1:3307&quot;: {
        &quot;address&quot;: &quot;127.0.0.1:3307&quot;,
        &quot;applierWorkerThreads&quot;: 4,
        &quot;fenceSysVars&quot;: &#x5B;],
        &quot;memberId&quot;: &quot;215a2338-4c7a-11f0-8f41-18a59cb32d88&quot;,
        &quot;memberRole&quot;: &quot;PRIMARY&quot;,
        &quot;memberState&quot;: &quot;ONLINE&quot;,
        &quot;mode&quot;: &quot;R/W&quot;,
        &quot;readReplicas&quot;: {
          &quot;Replica_Primary_3307&quot;: {
            &quot;address&quot;: &quot;127.0.0.1:3340&quot;,
            &quot;applierStatus&quot;: &quot;APPLIED_ALL&quot;,
            &quot;applierThreadState&quot;: &quot;Waiting for an event from Coordinator&quot;,
            &quot;applierWorkerThreads&quot;: 4,
            &quot;receiverStatus&quot;: &quot;ON&quot;,
            &quot;receiverThreadState&quot;: &quot;Waiting for source to send event&quot;,
            &quot;replicationLag&quot;: &quot;applier_queue_applied&quot;,
            &quot;replicationSources&quot;: &#x5B;
              &quot;PRIMARY&quot;
            ],
            &quot;replicationSsl&quot;: &quot;TLS_AES_128_GCM_SHA256 TLSv1.3&quot;,
            &quot;role&quot;: &quot;READ_REPLICA&quot;,
            &quot;status&quot;: &quot;ONLINE&quot;,
            &quot;version&quot;: &quot;9.3.0&quot;
          },
          &quot;Replica_Quaternary_3330&quot;: {
            &quot;address&quot;: &quot;127.0.0.1:3370&quot;,
            &quot;applierStatus&quot;: &quot;APPLIED_ALL&quot;,
            &quot;applierThreadState&quot;: &quot;Waiting for an event from Coordinator&quot;,
            &quot;applierWorkerThreads&quot;: 4,
            &quot;receiverStatus&quot;: &quot;ON&quot;,
            &quot;receiverThreadState&quot;: &quot;Waiting for source to send event&quot;,
            &quot;replicationLag&quot;: &quot;applier_queue_applied&quot;,
            &quot;replicationSources&quot;: &#x5B;
              &quot;PRIMARY&quot;
            ],
            &quot;replicationSsl&quot;: &quot;TLS_AES_128_GCM_SHA256 TLSv1.3&quot;,
            &quot;role&quot;: &quot;READ_REPLICA&quot;,
            &quot;status&quot;: &quot;ONLINE&quot;,
            &quot;version&quot;: &quot;9.3.0&quot;
          },
          &quot;Replica_Secondary_3310&quot;: {
            &quot;address&quot;: &quot;127.0.0.1:3350&quot;,
            &quot;applierStatus&quot;: &quot;APPLIED_ALL&quot;,
            &quot;applierThreadState&quot;: &quot;Waiting for an event from Coordinator&quot;,
            &quot;applierWorkerThreads&quot;: 4,
            &quot;receiverStatus&quot;: &quot;ON&quot;,
            &quot;receiverThreadState&quot;: &quot;Waiting for source to send event&quot;,
            &quot;replicationLag&quot;: &quot;applier_queue_applied&quot;,
            &quot;replicationSources&quot;: &#x5B;
              &quot;PRIMARY&quot;
            ],
            &quot;replicationSsl&quot;: &quot;TLS_AES_128_GCM_SHA256 TLSv1.3&quot;,
            &quot;role&quot;: &quot;READ_REPLICA&quot;,
            &quot;status&quot;: &quot;ONLINE&quot;,
            &quot;version&quot;: &quot;9.3.0&quot;
          },
          &quot;Replica_Tertiary_3320&quot;: {
            &quot;address&quot;: &quot;127.0.0.1:3360&quot;,
            &quot;applierStatus&quot;: &quot;APPLIED_ALL&quot;,
            &quot;applierThreadState&quot;: &quot;Waiting for an event from Coordinator&quot;,
            &quot;applierWorkerThreads&quot;: 4,
            &quot;receiverStatus&quot;: &quot;ON&quot;,
            &quot;receiverThreadState&quot;: &quot;Waiting for source to send event&quot;,
            &quot;replicationLag&quot;: &quot;applier_queue_applied&quot;,
            &quot;replicationSources&quot;: &#x5B;
              &quot;PRIMARY&quot;
            ],
            &quot;replicationSsl&quot;: &quot;TLS_AES_128_GCM_SHA256 TLSv1.3&quot;,
            &quot;role&quot;: &quot;READ_REPLICA&quot;,
            &quot;status&quot;: &quot;ONLINE&quot;,
            &quot;version&quot;: &quot;9.3.0&quot;
          }
        },
        &quot;replicationLag&quot;: &quot;applier_queue_applied&quot;,
        &quot;role&quot;: &quot;HA&quot;,
        &quot;status&quot;: &quot;ONLINE&quot;,
        &quot;version&quot;: &quot;9.3.0&quot;
      }
    },
    &quot;topologyMode&quot;: &quot;Single-Primary&quot;
  },
  &quot;groupInformationSourceMember&quot;: &quot;127.0.0.1:3307&quot;,
  &quot;metadataVersion&quot;: &quot;2.3.0&quot;
}
🎯 ANÁLISE DO STATUS:
• Status Geral: OK_NO_TOLERANCE
• Modo: Single-Primary
• SSL Mode: REQUIRED

📊 RESUMO POR STATUS:
• ONLINE: 1 instância(s)

🔗 TESTE DE CONECTIVIDADE:
=======================================================================
✅ Porta 3307: Conectividade OK - Server ID: 3820020054
✅ Porta 3310: Conectividade OK - Server ID: 3359029909
✅ Porta 3320: Conectividade OK - Server ID: 1516761045
✅ Porta 3330: Conectividade OK - Server ID: 272308050

🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉
CONFIGURAÇÃO CONCLUÍDA COM SUCESSO!
🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉

📋 RESUMO DA CONFIGURAÇÃO:
----------------------------------------------------------------------
• Cluster Name: my-cluster-db-v5
• Instâncias Primárias: 4 (3307, 3310, 3320, 3330)
• Réplicas de Leitura: 4 (3340, 3350, 3360, 3370)
• Total de Instâncias: 8
• Arquitetura: 4-Node Primary + 4 Read Replicas (1:1)

🔗 MAPEAMENTO DE RÉPLICAS:
----------------------------------------------------------------------
• 3307 → 3340 (Replica_Primary_3307)
• 3310 → 3350 (Replica_Secondary_3310)
• 3320 → 3360 (Replica_Tertiary_3320)
• 3330 → 3370 (Replica_Quaternary_3330)

⚖️  PESOS CONFIGURADOS:
----------------------------------------------------------------------
• Porta 3307: Peso 100
• Porta 3310: Peso 60
• Porta 3320: Peso 40
• Porta 3330: Peso 20

🚀 PRÓXIMOS PASSOS:
----------------------------------------------------------------------
• Configurar MySQL Router para balanceamento de carga
• Implementar monitoramento e alertas
• Configurar backups automatizados
• Testar failover e recuperação
• Ajustar configurações de performance conforme necessário

💡 COMANDOS ÚTEIS:
----------------------------------------------------------------------
• Status do cluster: cluster.status({extended: true})
• Conectar ao cluster: shell.connect('root@localhost:3307')
• Obter cluster: dba.getCluster('my-cluster-db-v5')
• Rescan do cluster: cluster.rescan()

✅ Script executado com sucesso!
PS C:\Users\dbabrabo-666&gt;
</pre></div>
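
<p>Um aparte rápido: a saída JSON de cluster.status() acima pode ser reaproveitada programaticamente. Abaixo, um esboço em JavaScript puro (nomes ilustrativos; não faz parte do script) que resume as instâncias por status, incluindo as réplicas de leitura aninhadas em readReplicas:</p>

```javascript
// Esboço ilustrativo: conta instâncias por status a partir de um objeto
// no formato retornado por cluster.status(), incluindo as readReplicas.
function summarizeStatus(status) {
  const counts = {};
  const add = (s) => { counts[s] = (counts[s] || 0) + 1; };
  for (const node of Object.values(status.defaultReplicaSet.topology)) {
    add(node.status);
    for (const replica of Object.values(node.readReplicas || {})) {
      add(replica.status);
    }
  }
  return counts;
}
```

<p>Aplicado ao status acima (um primário ONLINE com quatro réplicas de leitura ONLINE), o resultado seria { ONLINE: 5 }.</p>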


<p class="has-large-font-size">Versão do script para Linux/macOS/Unix</p>
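
<p>Um detalhe de implementação que vale destacar antes do código completo: a função waitForInstanceReady do script segue o padrão clássico de polling com número limitado de tentativas. Um esboço simplificado em JavaScript puro (sem as chamadas do MySQL Shell; nomes ilustrativos):</p>

```javascript
// Esboço do padrão "tentar até ficar pronto" usado em waitForInstanceReady.
// A função check() é um parâmetro ilustrativo: lança exceção enquanto o
// recurso não responde e retorna normalmente quando está pronto.
function waitUntilReady(check, maxRetries = 15) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      check();      // ex.: abrir sessão MySQL e executar SELECT 1
      return true;  // sucesso: instância pronta
    } catch (e) {
      // no script real há os.sleep(3) entre as tentativas
    }
  }
  return false;     // esgotou as tentativas sem sucesso
}
```

<p>No script real, check() corresponde a abrir uma sessão com mysql.getSession() e executar SELECT 1, dormindo 3 segundos entre as tentativas.</p>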


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
// ============================================================================================
// MYSQL INNODB CLUSTER - ENTERPRISE PRODUCTION SETUP
// 4-NODE CLUSTER WITH 1:1 READ REPLICAS
// COMPLETE CLEANUP + VERIFICATION + ERROR HANDLING
// ============================================================================================
// 
// DESCRIÇÃO:
//   Script automatizado para criar um cluster MySQL InnoDB com 4 nós primários e 4 réplicas
//   de leitura (1:1). Inclui limpeza completa, validação e tratamento de erros.
//
// REQUISITOS:
//   - MySQL Shell 8.0+
//   - Sistema Operacional: Linux/macOS
//   - RAM: Mínimo 2GB livre
//   - Disco: Mínimo 5GB livre
//   - Portas: 3307-3370 devem estar livres
//
// AUTOR: Acacio LR - DBA
// ============================================================================================
// COMO USAR ESTE SCRIPT:
// ============================================================================================
//
// 1. SALVAR O SCRIPT:
//    Salve este arquivo como 'mysql_innodb_cluster_macOS_mb.js' em seu diretório home:
//    $ nano ~/mysql_innodb_cluster_macOS_mb.js
//    (cole o conteúdo e salve com Ctrl+X, Y, Enter)
//
// 2. EXECUTAR O SCRIPT (escolha uma opção):
//
//    OPÇÃO A - Execução Simples:
//    $ mysqlsh --file ~/mysql_innodb_cluster_macOS_mb.js
//
//    OPÇÃO B - Com Log Detalhado:
//    $ mysqlsh --file ~/mysql_innodb_cluster_macOS_mb.js --log-level=8 --log-file=/tmp/cluster.log
//
//    OPÇÃO C - Dentro do MySQL Shell:
//    $ mysqlsh
//    MySQL JS&gt; \source ~/mysql_innodb_cluster_macOS_mb.js
//
// 3. MONITORAR EXECUÇÃO (em outro terminal):
//    $ tail -f /tmp/cluster.log
// 
// ============================================================================================

// Configuration Constants
const CONFIG = {
  ports: &#x5B;3307, 3310, 3320, 3330, 3340, 3350, 3360, 3370],
  primaryPorts: &#x5B;3307, 3310, 3320, 3330],
  replicaPorts: &#x5B;3340, 3350, 3360, 3370],
  password: 'Welcome1',
  clusterName: 'my-cluster-db-v5',
  sandboxPath: '/Users/acaciolr/mysql-sandboxes',
  replicationUser: {
    username: 'repl',
    password: 'Welcome1'
  },
  weights: {
    3307: 100,
    3310: 60,
    3320: 40,
    3330: 20
  },
  timeouts: {
    clusterCreation: 30,
    instanceAdd: 20,
    stabilization: 15,
    recovery: 10
  }
};
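
// Nota ilustrativa (não usada pelo fluxo principal): CONFIG.weights alimenta a
// opção memberWeight do Group Replication; na eleição de um novo primário, vence
// o membro elegível de maior peso. A ordem esperada de failover pode ser derivada assim:
const expectedFailoverOrder = Object.entries(CONFIG.weights)
  .sort((a, b) =&gt; b&#x5B;1] - a&#x5B;1])
  .map((&#x5B;port]) =&gt; port);  // &#x5B;'3307', '3310', '3320', '3330']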

const firstPrimaryPort = CONFIG.primaryPorts&#x5B;0];

// Replica mapping (1:1 relationship)
const REPLICA_MAPPING = &#x5B;
  { port: 3340, source: 3307, label: 'Replica_Primary_3307' },
  { port: 3350, source: 3310, label: 'Replica_Secondary_3310' },
  { port: 3360, source: 3320, label: 'Replica_Tertiary_3320' },
  { port: 3370, source: 3330, label: 'Replica_Quaternary_3330' }
];
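
// Checagem de sanidade ilustrativa do mapeamento 1:1: cada fonte deve ser um nó
// primário conhecido e cada porta de réplica deve constar em CONFIG.replicaPorts.
REPLICA_MAPPING.forEach(r =&gt; {
  if (!CONFIG.primaryPorts.includes(r.source) || !CONFIG.replicaPorts.includes(r.port)) {
    throw new Error(&quot;Mapeamento inválido para réplica &quot; + r.port);
  }
});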

// Utility Functions
function printPhase(phase, description) {
  const separator = '='.repeat(80);
  print(&quot;\n&quot; + separator);
  print(&quot;PHASE &quot; + phase + &quot;: &quot; + description.toUpperCase());
  print(separator + &quot;\n&quot;);
}

function printSuccess(message) {
  print(&quot;✅ &quot; + message);
}

function printWarning(message) {
  print(&quot;⚠️  &quot; + message);
}

function printError(message) {
  print(&quot;❌ &quot; + message);
}

function printInfo(message) {
  print(&quot;ℹ️  &quot; + message);
}

function sleep(seconds) {
  print(&quot;⏳ Aguardando &quot; + seconds + &quot; segundos...\n&quot;);
  os.sleep(seconds);
}

function waitForInstanceReady(port, maxRetries = 15) {
  let retries = 0;
  while (retries &lt; maxRetries) {
    try {
      const testSession = mysql.getSession(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + port);
      testSession.runSql(&quot;SELECT 1&quot;);
      testSession.close();
      return true;
    } catch (e) {
      retries++;
      print(&quot;   Tentativa &quot; + retries + &quot;/&quot; + maxRetries + &quot; - Aguardando instância &quot; + port + &quot;...&quot;);
      sleep(3);
    }
  }
  return false;
}

function checkClusterHealth(cluster) {
  try {
    const status = cluster.status();
    const healthy = status.defaultReplicaSet.status === 'OK' || 
                   status.defaultReplicaSet.status === 'OK_NO_TOLERANCE' ||
                   status.defaultReplicaSet.status === 'OK_PARTIAL';
    printInfo(&quot;Status do cluster: &quot; + status.defaultReplicaSet.status);
    return healthy;
  } catch (e) {
    printWarning(&quot;Erro ao verificar saúde do cluster: &quot; + e.message);
    return false;
  }
}

function safeKillSandbox(port) {
  try {
    dba.killSandboxInstance(port);
    printInfo(&quot;Instância &quot; + port + &quot; encerrada&quot;);
  } catch (e) {
    if (e.message.includes(&quot;Unable to find pid file&quot;) || 
        e.message.includes(&quot;does not exist&quot;) ||
        e.message.includes(&quot;not found&quot;)) {
      // Silently ignore if instance doesn't exist
    } else {
      printWarning(&quot;Erro ao encerrar &quot; + port + &quot;: &quot; + e.message);
    }
  }
}

function safeDeleteSandbox(port) {
  try {
    dba.deleteSandboxInstance(port);
    printInfo(&quot;Instância &quot; + port + &quot; removida&quot;);
  } catch (e) {
    if (e.message.includes(&quot;does not exist&quot;) || 
        e.message.includes(&quot;not found&quot;)) {
      // Silently ignore if instance doesn't exist
    } else {
      printWarning(&quot;Erro ao remover &quot; + port + &quot;: &quot; + e.message);
    }
  }
}

function safeCleanDirectories() {
  try {
    // Por segurança, o comando é apenas exibido aqui (não é executado); rode-o manualmente se necessário.
    const command = &quot;rm -rf &quot; + CONFIG.sandboxPath;
    printInfo(&quot;Comando de limpeza preparado: &quot; + command);
    printSuccess(&quot;Preparação de limpeza de diretórios concluída&quot;);
  } catch (e) {
    printWarning(&quot;Não foi possível limpar diretórios automaticamente: &quot; + e.message);
    printInfo(&quot;Execute manualmente: rm -rf &quot; + CONFIG.sandboxPath);
  }
}

// Main execution wrapped in try-catch
try {
  
  // ==============================================
  // PHASE 0: COMPREHENSIVE CLEANUP
  // ==============================================
  printPhase(0, &quot;LIMPEZA COMPLETA DO AMBIENTE&quot;);
  
  try {
    // Dissolve existing cluster
    try {
      printInfo(&quot;Verificando cluster existente...&quot;);
      const existingCluster = dba.getCluster();
      if (existingCluster) {
        printInfo(&quot;Dissolvendo cluster existente...&quot;);
        existingCluster.dissolve({ force: true });
        printSuccess(&quot;Cluster existente dissolvido com sucesso\n&quot;);
        sleep(3);
      }
    } catch (e) {
      printWarning(&quot;Nenhum cluster ativo encontrado: Iniciando nova configuração\n&quot;);
    }
    
    // Kill and delete all sandbox instances
    printInfo(&quot;Removendo todas as instâncias sandbox...&quot;);
    CONFIG.ports.forEach(port =&gt; {
      safeKillSandbox(port);
    });
    
    print(&quot;&quot;); // Linha em branco
    
    CONFIG.ports.forEach(port =&gt; {
      safeDeleteSandbox(port);
    });
    
    print(&quot;&quot;); // Linha em branco
    
    // Clean sandbox directories safely
    safeCleanDirectories();
    
    sleep(CONFIG.timeouts.recovery);
    printSuccess(&quot;LIMPEZA CONCLUÍDA\n&quot;);
    
  } catch (cleanupErr) {
    printError(&quot;Erro durante cleanup: &quot; + cleanupErr.message);
  }
  
  // ==============================================
  // PHASE 1: DEPLOY PRIMARY INSTANCES
  // ==============================================
  printPhase(1, &quot;CRIAÇÃO DAS INSTÂNCIAS PRIMÁRIAS&quot;);
  
  CONFIG.primaryPorts.forEach((port, index) =&gt; {
    try {
      printInfo(&quot;Criando instância primária &quot; + port + &quot;...&quot;);
      
      dba.deploySandboxInstance(port, { 
        password: CONFIG.password,
        sandboxDir: CONFIG.sandboxPath
      });
      
      if (waitForInstanceReady(port)) {
        printSuccess(&quot;Instância primária &quot; + port + &quot; criada e pronta (&quot; + (index + 1) + &quot;/&quot; + CONFIG.primaryPorts.length + &quot;)\n&quot;);
      } else {
        throw new Error(&quot;Instância &quot; + port + &quot; não ficou pronta no tempo esperado&quot;);
      }
      
      sleep(2);
    } catch (e) {
      if (e.message.includes(&quot;already exists&quot;)) {
        printWarning(&quot;Instância &quot; + port + &quot; já existe\n&quot;);
      } else {
        printError(&quot;Erro ao criar instância &quot; + port + &quot;: &quot; + e.message);
        throw e;
      }
    }
  });
  
  // ==============================================
  // PHASE 2: CONFIGURE PRIMARY INSTANCES
  // ==============================================
  printPhase(2, &quot;CONFIGURAÇÃO DAS INSTÂNCIAS PRIMÁRIAS&quot;);
  
  CONFIG.primaryPorts.forEach((port, index) =&gt; {
    try {
      printInfo(&quot;Configurando instância &quot; + port + &quot; para clustering...&quot;);
      
      dba.configureInstance(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + port, { 
        clusterAdmin: 'root',
        restart: false
      });
      
      printSuccess(&quot;Instância &quot; + port + &quot; configurada (&quot; + (index + 1) + &quot;/&quot; + CONFIG.primaryPorts.length + &quot;)\n&quot;);
      sleep(1);
    } catch (e) {
      printError(&quot;Erro ao configurar instância &quot; + port + &quot;: &quot; + e.message);
      throw e;
    }
  });
  
  // ==============================================
  // PHASE 3: CLUSTER CREATION
  // ==============================================
  printPhase(3, &quot;CRIAÇÃO DO CLUSTER INNODB&quot;);
  
  let cluster;
  try {
    printInfo(&quot;Conectando à instância primária (&quot; + firstPrimaryPort + &quot;)...&quot;);
    shell.connect(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + firstPrimaryPort);
    printSuccess(&quot;Conectado à instância primária\n&quot;);
    
    try {
      printInfo(&quot;Verificando se cluster '&quot; + CONFIG.clusterName + &quot;' já existe...&quot;);
      cluster = dba.getCluster(CONFIG.clusterName);
      printSuccess(&quot;Cluster '&quot; + CONFIG.clusterName + &quot;' existente carregado\n&quot;);
    } catch {
      printInfo(&quot;Criando novo cluster '&quot; + CONFIG.clusterName + &quot;'...&quot;);
      
      cluster = dba.createCluster(CONFIG.clusterName, {
        multiPrimary: false,
        force: true,
        gtidSetIsComplete: true
      });
      
      printSuccess(&quot;Cluster '&quot; + CONFIG.clusterName + &quot;' criado com sucesso\n&quot;);
      printInfo(&quot;Aguardando estabilização do cluster primário...\n&quot;);
      sleep(CONFIG.timeouts.clusterCreation);
      
      if (checkClusterHealth(cluster)) {
        printSuccess(&quot;Cluster primário está funcionando corretamente\n&quot;);
      } else {
        printWarning(&quot;Cluster primário pode não estar completamente estável\n&quot;);
      }
    }
    
  } catch (e) {
    printError(&quot;Erro na criação/carregamento do cluster: &quot; + e.message);
    throw e;
  }
  
  // ==============================================
  // PHASE 4: ADD SECONDARY INSTANCES TO CLUSTER
  // ==============================================
  printPhase(4, &quot;ADIÇÃO DAS INSTÂNCIAS SECUNDÁRIAS AO CLUSTER&quot;);
  
  const secondaryPorts = CONFIG.primaryPorts.slice(1);
  let addedCount = 0;
  
  secondaryPorts.forEach((port, index) =&gt; {
    let retryCount = 0;
    const maxRetries = 3;
    let added = false;
    
    while (!added &amp;&amp; retryCount &lt; maxRetries) {
      try {
        retryCount++;
        printInfo(&quot;Adicionando instância &quot; + port + &quot; ao cluster (tentativa &quot; + retryCount + &quot;/&quot; + maxRetries + &quot;)...&quot;);
        
        const currentStatus = cluster.status();
        const instanceKey = &quot;127.0.0.1:&quot; + port;
        
        if (currentStatus.defaultReplicaSet.topology&#x5B;instanceKey]) {
          printWarning(&quot;Instância &quot; + port + &quot; já está no cluster\n&quot;);
          added = true;
          addedCount++;
          break;
        }
        
        // ADD INSTANCE TRADICIONAL COM CLONE
        cluster.addInstance(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + port, {
          recoveryMethod: 'clone'
        });
        
        printSuccess(&quot;Instância &quot; + port + &quot; adicionada ao cluster (&quot; + (index + 1) + &quot;/&quot; + secondaryPorts.length + &quot;)\n&quot;);
        added = true;
        addedCount++;
        
        printInfo(&quot;Aguardando sincronização da instância &quot; + port + &quot;...\n&quot;);
        sleep(CONFIG.timeouts.instanceAdd);
        
        // Verificar status após adicionar
        let instanceOnline = false;
        let checkCount = 0;
        const maxChecks = 10;
        
        while (!instanceOnline &amp;&amp; checkCount &lt; maxChecks) {
          checkCount++;
          const status = cluster.status();
          const instanceStatus = status.defaultReplicaSet.topology&#x5B;instanceKey];
          
          if (instanceStatus &amp;&amp; instanceStatus.status === 'ONLINE') {
            printSuccess(&quot;Instância &quot; + port + &quot; está ONLINE no cluster\n&quot;);
            instanceOnline = true;
          } else if (instanceStatus &amp;&amp; instanceStatus.status === 'RECOVERING') {
            printInfo(&quot;Instância &quot; + port + &quot; está em RECOVERING, aguardando... (&quot; + checkCount + &quot;/&quot; + maxChecks + &quot;)\n&quot;);
            sleep(5);
          } else {
            printWarning(&quot;Instância &quot; + port + &quot; status: &quot; + (instanceStatus ? instanceStatus.status : &quot;DESCONHECIDO&quot;) + &quot;\n&quot;);
            sleep(5);
          }
        }
        
      } catch (e) {
        printError(&quot;Erro ao adicionar instância &quot; + port + &quot; (tentativa &quot; + retryCount + &quot;): &quot; + e.message + &quot;\n&quot;);
        
        if (retryCount &lt; maxRetries) {
          printInfo(&quot;Tentando novamente em 10 segundos...\n&quot;);
          sleep(10);
          
          try {
            printInfo(&quot;Tentando rejoin da instância &quot; + port + &quot;...&quot;);
            cluster.rejoinInstance(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + port);
            printSuccess(&quot;Instância &quot; + port + &quot; rejoin bem-sucedido\n&quot;);
            added = true;
            addedCount++;
          } catch (rejoinErr) {
            printWarning(&quot;Rejoin falhou: &quot; + rejoinErr.message + &quot;\n&quot;);
          }
        }
      }
    }
    
    if (!added) {
      printError(&quot;Falha ao adicionar instância &quot; + port + &quot; após &quot; + maxRetries + &quot; tentativas&quot;);
      printWarning(&quot;Continuando com as próximas instâncias...\n&quot;);
    }
  });
  
  printInfo(&quot;Total de instâncias secundárias adicionadas: &quot; + addedCount + &quot;/&quot; + secondaryPorts.length);
  printInfo(&quot;Aguardando sincronização completa do cluster...\n&quot;);
  sleep(CONFIG.timeouts.stabilization);
  
  printInfo(&quot;Verificando status do cluster após adição de instâncias...&quot;);
  const clusterStatusAfterAdd = cluster.status();
  const topologyCount = Object.keys(clusterStatusAfterAdd.defaultReplicaSet.topology).length;
  printInfo(&quot;Total de nós no cluster: &quot; + topologyCount + &quot;\n&quot;);
  
  if (topologyCount &lt; 4) {
    printWarning(&quot;ATENÇÃO: Cluster tem apenas &quot; + topologyCount + &quot; nós, esperado 4&quot;);
    printInfo(&quot;Tentando rescan do cluster...\n&quot;);
    cluster.rescan();
  }
  
  // ==============================================
  // PHASE 5: CONFIGURE INSTANCE WEIGHTS
  // ==============================================
  printPhase(5, &quot;CONFIGURAÇÃO DE PESOS DAS INSTÂNCIAS&quot;);
  
  try {
    Object.entries(CONFIG.weights).forEach((&#x5B;port, weight]) =&gt; {
      try {
        cluster.setInstanceOption(&quot;127.0.0.1:&quot; + port, 'memberWeight', weight);
        printSuccess(&quot;Peso &quot; + weight + &quot; configurado para instância &quot; + port);
      } catch (e) {
        printWarning(&quot;Erro ao configurar peso para &quot; + port + &quot;: &quot; + e.message);
      }
    });
    print(&quot;&quot;); // Linha em branco
    printSuccess(&quot;Configuração de pesos concluída\n&quot;);
  } catch (e) {
    printWarning(&quot;Erro geral na configuração de pesos: &quot; + e.message + &quot;\n&quot;);
  }
  
  // ==============================================
  // PHASE 5.5: CREATE REPLICATION USERS ON PRIMARY
  // ==============================================
  printPhase(5.5, &quot;CRIAÇÃO DE USUÁRIOS DE REPLICAÇÃO&quot;);
  
  try {
    printInfo(&quot;Criando usuário de replicação na instância primária (3307)...&quot;);
    const primarySession = mysql.getSession(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + firstPrimaryPort);
    
    // Criar usuário de replicação com todos os privilégios necessários
    primarySession.runSql(&quot;CREATE USER IF NOT EXISTS '&quot; + CONFIG.replicationUser.username + &quot;'@'%' IDENTIFIED BY '&quot; + CONFIG.replicationUser.password + &quot;'&quot;);
    primarySession.runSql(&quot;GRANT REPLICATION SLAVE ON *.* TO '&quot; + CONFIG.replicationUser.username + &quot;'@'%'&quot;);
    primarySession.runSql(&quot;GRANT BACKUP_ADMIN ON *.* TO '&quot; + CONFIG.replicationUser.username + &quot;'@'%'&quot;);
    primarySession.runSql(&quot;GRANT CLONE_ADMIN ON *.* TO '&quot; + CONFIG.replicationUser.username + &quot;'@'%'&quot;);
    primarySession.runSql(&quot;GRANT SELECT ON *.* TO '&quot; + CONFIG.replicationUser.username + &quot;'@'%'&quot;);
    primarySession.runSql(&quot;FLUSH PRIVILEGES&quot;);
    primarySession.close();
    
    printSuccess(&quot;Usuário de replicação criado com sucesso na instância primária\n&quot;);
    
    // Aguardar propagação para os nós secundários
    printInfo(&quot;Aguardando propagação do usuário para os nós secundários...\n&quot;);
    sleep(5);
    
    // Verificar se o usuário foi propagado para os nós secundários
    const secondaryPortsCheck = CONFIG.primaryPorts.slice(1);
    secondaryPortsCheck.forEach(port =&gt; {
      try {
        const testSession = mysql.getSession(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + port);
        const result = testSession.runSql(&quot;SELECT user FROM mysql.user WHERE user = '&quot; + CONFIG.replicationUser.username + &quot;'&quot;);
        if (result.fetchOne()) {
          printSuccess(&quot;Usuário de replicação confirmado no nó &quot; + port);
        } else {
          printWarning(&quot;Usuário de replicação não encontrado no nó &quot; + port);
        }
        testSession.close();
      } catch (e) {
        printWarning(&quot;Não foi possível verificar usuário no nó &quot; + port + &quot;: &quot; + e.message);
      }
    });
    print(&quot;&quot;); // Linha em branco
    
  } catch (e) {
    printError(&quot;Erro ao criar usuário de replicação: &quot; + e.message);
    printInfo(&quot;Continuando sem usuário de replicação dedicado...\n&quot;);
  }
  
  // ==============================================
  // PHASE 6: DEPLOY AND CONFIGURE READ REPLICAS
  // ==============================================
  printPhase(6, &quot;CONFIGURAÇÃO DAS RÉPLICAS DE LEITURA&quot;);
  
  // Primeiro, vamos verificar quais nós estão realmente no cluster
  const currentClusterStatus = cluster.status();
  printInfo(&quot;Verificando nós disponíveis no cluster para réplicas...\n&quot;);
  
  for (let index = 0; index &lt; REPLICA_MAPPING.length; index++) {
    const replica = REPLICA_MAPPING&#x5B;index];
    
    try {
      printInfo(&quot;Processando réplica &quot; + replica.port + &quot; para fonte &quot; + replica.source + &quot;...\n&quot;);
      
      // Verificar se a fonte está disponível primeiro
      const sourceKey = &quot;127.0.0.1:&quot; + replica.source;
      const sourceNode = currentClusterStatus.defaultReplicaSet.topology&#x5B;sourceKey];
      
      // Se a fonte não está no cluster, pular esta réplica
      if (!sourceNode) {
        printWarning(&quot;Nó fonte &quot; + replica.source + &quot; não está no cluster, pulando réplica &quot; + replica.port + &quot;\n&quot;);
        continue;
      }
      
      // Se a fonte não está ONLINE, pular esta réplica
      if (sourceNode.status !== 'ONLINE') {
        printWarning(&quot;Nó fonte &quot; + replica.source + &quot; está &quot; + sourceNode.status + &quot;, pulando réplica &quot; + replica.port + &quot;\n&quot;);
        continue;
      }
      
      printInfo(&quot;Nó fonte &quot; + replica.source + &quot; está ONLINE, criando réplica &quot; + replica.port + &quot;...\n&quot;);
      
      // PASSO 1: Criar a instância réplica
      printInfo(&quot;- Criando instância réplica &quot; + replica.port + &quot;...&quot;);
      dba.deploySandboxInstance(replica.port, { 
        password: CONFIG.password,
        sandboxDir: CONFIG.sandboxPath
      });
      
      if (!waitForInstanceReady(replica.port)) {
        throw new Error(&quot;Réplica &quot; + replica.port + &quot; não ficou pronta&quot;);
      }
      
      // PASSO 2: Configurar a instância réplica
      printInfo(&quot;- Configurando instância réplica &quot; + replica.port + &quot;...&quot;);
      dba.configureInstance(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + replica.port, { 
        clusterAdmin: 'root',
        restart: false
      });
      
      // PASSO 3: Se necessário, desabilitar temporariamente super-read-only no nó fonte
      let needsReadOnlyDisable = false;
      if (replica.source !== firstPrimaryPort) {
        needsReadOnlyDisable = true;
        try {
          printInfo(&quot;- Desabilitando temporariamente super-read-only no nó &quot; + replica.source + &quot;...&quot;);
          const sourceSession = mysql.getSession(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + replica.source);
          sourceSession.runSql(&quot;SET GLOBAL super_read_only = 0&quot;);
          
          // Criar/verificar usuário de replicação no nó secundário
          sourceSession.runSql(&quot;CREATE USER IF NOT EXISTS '&quot; + CONFIG.replicationUser.username + &quot;'@'localhost' IDENTIFIED BY '&quot; + CONFIG.replicationUser.password + &quot;'&quot;);
          sourceSession.runSql(&quot;GRANT REPLICATION SLAVE ON *.* TO '&quot; + CONFIG.replicationUser.username + &quot;'@'localhost'&quot;);
          sourceSession.runSql(&quot;FLUSH PRIVILEGES&quot;);
          
          sourceSession.close();
          printSuccess(&quot;Super-read-only desabilitado temporariamente no nó &quot; + replica.source);
        } catch (e) {
          printWarning(&quot;Não foi possível desabilitar super-read-only no nó &quot; + replica.source + &quot;: &quot; + e.message);
          needsReadOnlyDisable = false;
        }
      }
      
      sleep(3);
      
      // PASSO 4: Adicionar a réplica ao cluster
      printInfo(&quot;- Adicionando &quot; + replica.port + &quot; como réplica de leitura anexada ao nó &quot; + replica.source + &quot;...&quot;);
      
      try {
        // Adicionar réplica especificamente ao nó fonte
        cluster.addReplicaInstance(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + replica.port, {
          label: replica.label,
          recoveryMethod: 'clone',
          replicationSources: &#x5B;&quot;127.0.0.1:&quot; + replica.source]
        });
        
        printSuccess(&quot;Réplica &quot; + replica.port + &quot; configurada e anexada ao nó &quot; + replica.source + &quot; (&quot; + (index + 1) + &quot;/&quot; + REPLICA_MAPPING.length + &quot;)\n&quot;);
        sleep(CONFIG.timeouts.recovery);
        
      } catch (replicaErr) {
        printError(&quot;Erro ao adicionar réplica &quot; + replica.port + &quot;: &quot; + replicaErr.message);
        
        // Tentar método alternativo se falhar
        try {
          printInfo(&quot;Tentando método alternativo para adicionar réplica...&quot;);
          cluster.addReplicaInstance(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + replica.port, {
            label: replica.label,
            recoveryMethod: 'clone'
          });
          printSuccess(&quot;Réplica &quot; + replica.port + &quot; adicionada com método alternativo\n&quot;);
        } catch (altErr) {
          printError(&quot;Método alternativo também falhou: &quot; + altErr.message + &quot;\n&quot;);
        }
      }
      
      // PASSO 5: Reabilitar super-read-only se foi desabilitado
      if (needsReadOnlyDisable) {
        try {
          printInfo(&quot;- Reabilitando super-read-only no nó &quot; + replica.source + &quot;...&quot;);
          const sourceSession = mysql.getSession(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + replica.source);
          sourceSession.runSql(&quot;SET GLOBAL super_read_only = 1&quot;);
          sourceSession.close();
          printSuccess(&quot;Super-read-only reabilitado no nó &quot; + replica.source + &quot;\n&quot;);
        } catch (e) {
          printWarning(&quot;Não foi possível reabilitar super-read-only no nó &quot; + replica.source + &quot;: &quot; + e.message + &quot;\n&quot;);
        }
      }
      
    } catch (e) {
      printError(&quot;Erro na configuração da réplica &quot; + replica.port + &quot;: &quot; + e.message + &quot;\n&quot;);
      
      try {
        safeKillSandbox(replica.port);
        safeDeleteSandbox(replica.port);
        printInfo(&quot;Limpeza da réplica &quot; + replica.port + &quot; concluída\n&quot;);
      } catch (cleanupErr) {
        printWarning(&quot;Erro na limpeza da réplica &quot; + replica.port + &quot;: &quot; + cleanupErr.message + &quot;\n&quot;);
      }
    }
  }
  
  // PASSO FINAL: Garantir que super-read-only está habilitado em todos os nós secundários
  printInfo(&quot;Verificando configuração final de super-read-only...\n&quot;);
  const secondaryPortsFinal = CONFIG.primaryPorts.slice(1);
  secondaryPortsFinal.forEach(port =&gt; {
    try {
      const session = mysql.getSession(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + port);
      const result = session.runSql(&quot;SELECT @@super_read_only&quot;);
      const row = result.fetchOne();
      if (row&#x5B;0] === 0) {
        session.runSql(&quot;SET GLOBAL super_read_only = 1&quot;);
        printSuccess(&quot;Super-read-only reabilitado no nó &quot; + port);
      } else {
        printInfo(&quot;Super-read-only já está habilitado no nó &quot; + port);
      }
      session.close();
    } catch (e) {
      printWarning(&quot;Não foi possível verificar super-read-only no nó &quot; + port + &quot;: &quot; + e.message);
    }
  });
  print(&quot;&quot;); // Linha em branco
  
  // ==============================================
  // PHASE 7: FINAL VERIFICATION AND STATUS
  // ==============================================
  printPhase(7, &quot;VERIFICAÇÃO FINAL E STATUS&quot;);
  
  try {
    printInfo(&quot;Aguardando estabilização final...\n&quot;);
    sleep(CONFIG.timeouts.stabilization);
    
    print(&quot;\n📊 STATUS COMPLETO DO CLUSTER:&quot;);
    print(&quot;=&quot; + &quot;=&quot;.repeat(70) + &quot;\n&quot;);
    
    try {
      const clusterStatus = cluster.status({extended: true});
      print(JSON.stringify(clusterStatus, null, 2));
      print(&quot;\n&quot;); // Linha em branco
      
      const defaultReplicaSet = clusterStatus.defaultReplicaSet;
      print(&quot;🎯 ANÁLISE DO STATUS:&quot;);
      print(&quot;• Status Geral: &quot; + defaultReplicaSet.status);
      print(&quot;• Modo: &quot; + (defaultReplicaSet.mode || 'Single-Primary'));
      print(&quot;• SSL Mode: &quot; + (defaultReplicaSet.ssl || 'N/A'));
      print(&quot;\n&quot;); // Linha em branco
      
      const topology = defaultReplicaSet.topology;
      const statusCount = {};
      let onlineNodes = 0;
      let totalReplicas = 0;
      const replicaDetails = &#x5B;];
      
      Object.entries(topology).forEach((&#x5B;key, instance]) =&gt; {
        const status = instance.status;
        statusCount&#x5B;status] = (statusCount&#x5B;status] || 0) + 1;
        
        if (status === 'ONLINE') {
          onlineNodes++;
        }
        
        if (instance.readReplicas) {
          const replicaCount = Object.keys(instance.readReplicas).length;
          totalReplicas += replicaCount;
          if (replicaCount &gt; 0) {
            Object.entries(instance.readReplicas).forEach((&#x5B;replicaKey, replicaInfo]) =&gt; {
              replicaDetails.push(&quot;  • &quot; + key + &quot; → &quot; + replicaKey + &quot; (&quot; + replicaInfo.status + &quot;)&quot;);
            });
          }
        }
      });
      
      print(&quot;📊 RESUMO POR STATUS:&quot;);
      Object.entries(statusCount).forEach((&#x5B;status, count]) =&gt; {
        print(&quot;• &quot; + status + &quot;: &quot; + count + &quot; instância(s)&quot;);
      });
      print(&quot;\n&quot;); // Linha em branco
      
      print(&quot;📈 ESTATÍSTICAS DO CLUSTER:&quot;);
      print(&quot;• Nós ONLINE no cluster: &quot; + onlineNodes);
      print(&quot;• Total de réplicas de leitura: &quot; + totalReplicas);
      print(&quot;• Tolerância a falhas: &quot; + (onlineNodes &gt;= 3 ? &quot;SIM&quot; : &quot;NÃO&quot;));
      print(&quot;\n&quot;); // Linha em branco
      
      if (replicaDetails.length &gt; 0) {
        print(&quot;📚 RÉPLICAS DE LEITURA ANEXADAS:&quot;);
        replicaDetails.forEach(detail =&gt; print(detail));
        print(&quot;\n&quot;); // Linha em branco
      }
      
    } catch (e) {
      printError(&quot;Erro ao obter status do cluster: &quot; + e.message + &quot;\n&quot;);
    }
    
    print(&quot;🔗 TESTE DE CONECTIVIDADE:&quot;);
    print(&quot;=&quot; + &quot;=&quot;.repeat(70) + &quot;\n&quot;);
    CONFIG.primaryPorts.forEach(port =&gt; {
      try {
        const testSession = mysql.getSession(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + port);
        const result = testSession.runSql(&quot;SELECT @@hostname, @@port, @@server_id&quot;);
        const row = result.fetchOne();
        printSuccess(&quot;Porta &quot; + port + &quot;: Conectividade OK - Server ID: &quot; + row&#x5B;2]);
        testSession.close();
      } catch (e) {
        printError(&quot;Porta &quot; + port + &quot;: Erro de conectividade - &quot; + e.message);
      }
    });
    print(&quot;\n&quot;); // Linha em branco
    
    print(&quot;🔗 TESTE DE CONECTIVIDADE DAS RÉPLICAS:&quot;);
    print(&quot;=&quot; + &quot;=&quot;.repeat(70) + &quot;\n&quot;);
    CONFIG.replicaPorts.forEach(port =&gt; {
      try {
        const testSession = mysql.getSession(&quot;root:&quot; + CONFIG.password + &quot;@localhost:&quot; + port);
        const result = testSession.runSql(&quot;SELECT @@hostname, @@port, @@server_id&quot;);
        const row = result.fetchOne();
        printSuccess(&quot;Réplica &quot; + port + &quot;: Conectividade OK - Server ID: &quot; + row&#x5B;2]);
        testSession.close();
      } catch (e) {
        printWarning(&quot;Réplica &quot; + port + &quot;: Não disponível&quot;);
      }
    });
    print(&quot;\n&quot;); // Linha em branco
    
  } catch (e) {
    printWarning(&quot;Erro na verificação final: &quot; + e.message + &quot;\n&quot;);
  }
  
  // ==============================================
  // FINAL SUMMARY
  // ==============================================
  print(&quot;\n&quot; + &quot;=&quot;.repeat(80));
  print(&quot;🎉 CONFIGURAÇÃO CONCLUÍDA COM SUCESSO! 🎉&quot;);
  print(&quot;=&quot;.repeat(80) + &quot;\n&quot;);
  
  print(&quot;📋 RESUMO DA CONFIGURAÇÃO:&quot;);
  print(&quot;-&quot;.repeat(70));
  print(&quot;• Cluster Name: &quot; + CONFIG.clusterName);
  print(&quot;• Instâncias Primárias: &quot; + CONFIG.primaryPorts.length + &quot; (&quot; + CONFIG.primaryPorts.join(', ') + &quot;)&quot;);
  print(&quot;• Réplicas de Leitura: &quot; + REPLICA_MAPPING.length + &quot; (&quot; + REPLICA_MAPPING.map(r =&gt; r.port).join(', ') + &quot;)&quot;);
  print(&quot;• Total de Instâncias: &quot; + CONFIG.ports.length);
  print(&quot;• Arquitetura: 4-Node Cluster + 4 Read Replicas (1:1)&quot;);
  print(&quot;\n&quot;); // Linha em branco
  
  print(&quot;🔗 MAPEAMENTO DE RÉPLICAS:&quot;);
  print(&quot;-&quot;.repeat(70));
  REPLICA_MAPPING.forEach(replica =&gt; {
    print(&quot;• Nó &quot; + replica.source + &quot; → Réplica &quot; + replica.port + &quot; (&quot; + replica.label + &quot;)&quot;);
  });
  print(&quot;\n&quot;); // Linha em branco
  
  print(&quot;⚖️  PESOS CONFIGURADOS:&quot;);
  print(&quot;-&quot;.repeat(70));
  Object.entries(CONFIG.weights).forEach((&#x5B;port, weight]) =&gt; {
    print(&quot;• Porta &quot; + port + &quot;: Peso &quot; + weight);
  });
  print(&quot;\n&quot;); // Linha em branco
  
  print(&quot;🚀 PRÓXIMOS PASSOS:&quot;);
  print(&quot;-&quot;.repeat(70));
  print(&quot;• Configurar MySQL Router para balanceamento de carga&quot;);
  print(&quot;• Implementar monitoramento e alertas&quot;);
  print(&quot;• Configurar backups automatizados&quot;);
  print(&quot;• Testar failover e recuperação&quot;);
  print(&quot;• Ajustar configurações de performance conforme necessário&quot;);
  print(&quot;\n&quot;); // Linha em branco
  
  print(&quot;💡 COMANDOS ÚTEIS:&quot;);
  print(&quot;-&quot;.repeat(70));
  print(&quot;• Status do cluster: cluster.status({extended: true})&quot;);
  print(&quot;• Conectar ao cluster: shell.connect('root@localhost:3307')&quot;);
  print(&quot;• Obter cluster: dba.getCluster('&quot; + CONFIG.clusterName + &quot;')&quot;);
  print(&quot;• Rescan do cluster: cluster.rescan()&quot;);
  print(&quot;• Listar routers registrados: cluster.listRouters()&quot;);
  print(&quot;\n&quot;); // Linha em branco
  
  print(&quot;📋 COMANDOS PARA MONITORAMENTO (macOS/Linux):&quot;);
  print(&quot;-&quot;.repeat(70));
  print(&quot;# Monitorar log em tempo real:&quot;);
  print(&quot;tail -f /tmp/cluster_setup.log&quot;);
  print(&quot;&quot;);
  print(&quot;# Verificar portas em uso:&quot;);
  print(&quot;lsof -i -P | grep LISTEN | grep :33&quot;);
  print(&quot;&quot;);
  print(&quot;# Verificar processos MySQL:&quot;);
  print(&quot;ps aux | grep mysql&quot;);
  print(&quot;\n&quot;); // Linha em branco
  
  print(&quot;=&quot;.repeat(80));
  printSuccess(&quot;✨ Script executado com sucesso! ✨&quot;);
  print(&quot;=&quot;.repeat(80) + &quot;\n&quot;);

} catch (mainErr) {
  // ==============================================
  // EMERGENCY ERROR HANDLING
  // ==============================================
  print(&quot;\n&quot; + &quot;=&quot;.repeat(80));
  print(&quot;🚨 ERRO CRÍTICO DETECTADO - INICIANDO LIMPEZA DE EMERGÊNCIA 🚨&quot;);
  print(&quot;=&quot;.repeat(80) + &quot;\n&quot;);
  
  printError(&quot;ERRO PRINCIPAL: &quot; + mainErr.message);
  printError(&quot;STACK TRACE: &quot; + (mainErr.stack || 'N/A') + &quot;\n&quot;);
  
  printInfo(&quot;Executando limpeza de emergência...\n&quot;);
  
  try {
    try {
      const emergencyCluster = dba.getCluster();
      if (emergencyCluster) {
        emergencyCluster.dissolve({ force: true });
        printInfo(&quot;Cluster dissolvido durante limpeza de emergência\n&quot;);
      }
    } catch (e) {
      printWarning(&quot;Erro ao dissolver cluster: &quot; + e.message + &quot;\n&quot;);
    }
    
    printInfo(&quot;Removendo todas as instâncias sandbox...&quot;);
    CONFIG.ports.forEach(port =&gt; {
      safeKillSandbox(port);
      safeDeleteSandbox(port);
    });
    print(&quot;\n&quot;); // Linha em branco
    
    safeCleanDirectories();
    
    printSuccess(&quot;Limpeza de emergência concluída\n&quot;);
    
  } catch (emergencyErr) {
    printError(&quot;Erro durante limpeza de emergência: &quot; + emergencyErr.message + &quot;\n&quot;);
  }
  
  print(&quot;💡 SUGESTÕES PARA RESOLUÇÃO:&quot;);
  print(&quot;-&quot;.repeat(70));
  print(&quot;• Verifique se as portas estão disponíveis: lsof -i -P | grep :33&quot;);
  print(&quot;• Confirme se o MySQL Shell tem permissões adequadas&quot;);
  print(&quot;• Verifique a conectividade de rede&quot;);
  print(&quot;• Analise os logs do MySQL para erros específicos&quot;);
  print(&quot;• Execute o script novamente após corrigir os problemas&quot;);
  print(&quot;• Verifique se há processos MySQL em execução: ps aux | grep mysql&quot;);
  print(&quot;• Limpe manualmente o diretório: rm -rf &quot; + CONFIG.sandboxPath);
  print(&quot;\n&quot;); // Linha em branco
  
  print(&quot;🔧 COMANDOS DE LIMPEZA MANUAL:&quot;);
  print(&quot;-&quot;.repeat(70));
  print(&quot;# Parar e remover todas as instâncias:&quot;);
  print(&quot;for port in 3307 3310 3320 3330 3340 3350 3360 3370; do&quot;);
  print(&quot;  mysqlsh --js -e \&quot;try{dba.killSandboxInstance($port)}catch(e){}\&quot;&quot;);
  print(&quot;  mysqlsh --js -e \&quot;try{dba.deleteSandboxInstance($port)}catch(e){}\&quot;&quot;);
  print(&quot;done&quot;);
  print(&quot;&quot;);
  print(&quot;# Limpar diretório de sandboxes:&quot;);
  print(&quot;rm -rf ~/mysql-sandboxes\n&quot;);
  
  throw mainErr;
}
</pre></div>
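<p>Antes do log de execução, vale isolar a lógica da FASE 7. Um esboço mínimo e hipotético, em JavaScript puro (executável fora do mysqlsh), da mesma contagem que o script faz sobre o objeto retornado por <code>cluster.status({extended: true})</code> — os nomes <code>resumirTopologia</code> e <code>exemplo</code> são apenas ilustrativos, e o objeto de exemplo é uma versão reduzida da estrutura real:</p>

```javascript
// Esboço hipotético: resume a topologia de um objeto no formato de cluster.status(),
// contando nós ONLINE e réplicas de leitura anexadas (mesma heurística da FASE 7).
function resumirTopologia(status) {
  const topology = status.defaultReplicaSet.topology;
  let onlineNodes = 0;
  let totalReplicas = 0;
  for (const [key, instance] of Object.entries(topology)) {
    if (instance.status === 'ONLINE') onlineNodes++;
    // Cada membro pode carregar suas réplicas de leitura em readReplicas
    if (instance.readReplicas) {
      totalReplicas += Object.keys(instance.readReplicas).length;
    }
  }
  // Com 3+ nós ONLINE o Group Replication tolera a queda de um membro
  return { onlineNodes, totalReplicas, toleraFalha: onlineNodes >= 3 };
}

// Exemplo com um status reduzido (estrutura simplificada para ilustração):
const exemplo = {
  defaultReplicaSet: {
    topology: {
      '127.0.0.1:3307': { status: 'ONLINE', readReplicas: { '127.0.0.1:3340': { status: 'ONLINE' } } },
      '127.0.0.1:3310': { status: 'ONLINE', readReplicas: {} },
      '127.0.0.1:3320': { status: 'ONLINE' }
    }
  }
};
console.log(resumirTopologia(exemplo)); // → { onlineNodes: 3, totalReplicas: 1, toleraFalha: true }
```

<p>A saída real do script, na íntegra, segue abaixo.</p>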

<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
┌&#x5B;acaciolr☮MacBook-Pro-de-Acacio.local]-(~/Library/Mobile Documents/com~apple~CloudDocs/DBA/DBA Scripts/MySQL)
└&gt; mysqlsh --file mysql_innodb_cluster_macOS_mb.js --log-level=8 --log-file=/tmp/cluster.log

================================================================================
PHASE 0: LIMPEZA COMPLETA DO AMBIENTE
================================================================================
ℹ️  Verificando cluster existente...
⚠️  Nenhum cluster ativo encontrado: Iniciando nova configuração
ℹ️  Removendo todas as instâncias sandbox...
Killing MySQL instance...

Instance localhost:3307 successfully killed.

ℹ️  Instância 3307 encerrada
Killing MySQL instance...

Instance localhost:3310 successfully killed.

ℹ️  Instância 3310 encerrada
Killing MySQL instance...

Instance localhost:3320 successfully killed.

ℹ️  Instância 3320 encerrada
Killing MySQL instance...

Instance localhost:3330 successfully killed.

ℹ️  Instância 3330 encerrada
Killing MySQL instance...

Instance localhost:3340 successfully killed.

ℹ️  Instância 3340 encerrada
Killing MySQL instance...

Killing MySQL instance...

Killing MySQL instance...

Deleting MySQL instance...

Instance localhost:3307 successfully deleted.

ℹ️  Instância 3307 removida
Deleting MySQL instance...

Instance localhost:3310 successfully deleted.

ℹ️  Instância 3310 removida
Deleting MySQL instance...

Instance localhost:3320 successfully deleted.

ℹ️  Instância 3320 removida
Deleting MySQL instance...

Instance localhost:3330 successfully deleted.

ℹ️  Instância 3330 removida
Deleting MySQL instance...

Instance localhost:3340 successfully deleted.

ℹ️  Instância 3340 removida
Deleting MySQL instance...

Deleting MySQL instance...

Deleting MySQL instance...
ℹ️  Comando de limpeza preparado: rm -rf /Users/acaciolr/mysql-sandboxes
✅ Preparação de limpeza de diretórios concluída
⏳ Aguardando 10 segundos...
✅ LIMPEZA CONCLUÍDA

================================================================================
PHASE 1: CRIAÇÃO DAS INSTÂNCIAS PRIMÁRIAS
================================================================================
ℹ️  Criando instância primária 3307...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3307

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3307 successfully deployed and started.
Use shell.connect('root@localhost:3307') to connect to the instance.

✅ Instância primária 3307 criada e pronta (1/4)
⏳ Aguardando 2 segundos...
ℹ️  Criando instância primária 3310...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3310

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3310 successfully deployed and started.
Use shell.connect('root@localhost:3310') to connect to the instance.

✅ Instância primária 3310 criada e pronta (2/4)
⏳ Aguardando 2 segundos...
ℹ️  Criando instância primária 3320...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3320

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3320 successfully deployed and started.
Use shell.connect('root@localhost:3320') to connect to the instance.

✅ Instância primária 3320 criada e pronta (3/4)
⏳ Aguardando 2 segundos...
ℹ️  Criando instância primária 3330...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3330

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3330 successfully deployed and started.
Use shell.connect('root@localhost:3330') to connect to the instance.

✅ Instância primária 3330 criada e pronta (4/4)
⏳ Aguardando 2 segundos...

================================================================================
PHASE 2: CONFIGURAÇÃO DAS INSTÂNCIAS PRIMÁRIAS
================================================================================
ℹ️  Configurando instância 3307 para clustering...
Configuring local MySQL instance listening at port 3307 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3307
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3307' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
✅ Instância 3307 configurada (1/4)
⏳ Aguardando 1 segundos...
ℹ️  Configurando instância 3310 para clustering...
Configuring local MySQL instance listening at port 3310 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3310
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3310' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
✅ Instância 3310 configurada (2/4)
⏳ Aguardando 1 segundos...
ℹ️  Configurando instância 3320 para clustering...
Configuring local MySQL instance listening at port 3320 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3320
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3320' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
✅ Instância 3320 configurada (3/4)
⏳ Aguardando 1 segundos...
ℹ️  Configurando instância 3330 para clustering...
Configuring local MySQL instance listening at port 3330 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3330
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3330' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
✅ Instância 3330 configurada (4/4)
⏳ Aguardando 1 segundos...

================================================================================
PHASE 3: CRIAÇÃO DO CLUSTER INNODB
================================================================================
ℹ️  Conectando à instância primária (3307)...
✅ Conectado à instância primária
ℹ️  Verificando se cluster 'my-cluster-db-v5' já existe...
ERROR: Command not available on an unmanaged standalone instance.
ℹ️  Criando novo cluster 'my-cluster-db-v5'...
A new InnoDB Cluster will be created on instance '127.0.0.1:3307'.

Validating instance configuration at localhost:3307...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3307

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3307'. Use the localAddress option to override.

* Checking connectivity and SSL configuration...

Creating InnoDB Cluster 'my-cluster-db-v5' on '127.0.0.1:3307'...

Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

✅ Cluster 'my-cluster-db-v5' criado com sucesso
ℹ️  Aguardando estabilização do cluster primário...
⏳ Aguardando 30 segundos...
ℹ️  Status do cluster: OK_NO_TOLERANCE
✅ Cluster primário está funcionando corretamente

================================================================================
PHASE 4: ADIÇÃO DAS INSTÂNCIAS SECUNDÁRIAS AO CLUSTER
================================================================================
ℹ️  Adicionando instância 3310 ao cluster (tentativa 1/3)...

Clone based recovery selected through the recoveryMethod option

Validating instance configuration at localhost:3310...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3310

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3310'. Use the localAddress option to override.

* Checking connectivity and SSL configuration...

A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3310 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3310 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3310 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 73.84 MB transferred in about 1 second (~73.84 MB/s)

State recovery already finished for '127.0.0.1:3310'

The instance '127.0.0.1:3310' was successfully added to the cluster.

✅ Instância 3310 adicionada ao cluster (1/3)
ℹ️  Aguardando sincronização da instância 3310...
⏳ Aguardando 20 segundos...
✅ Instância 3310 está ONLINE no cluster
ℹ️  Adicionando instância 3320 ao cluster (tentativa 1/3)...

Clone based recovery selected through the recoveryMethod option

Validating instance configuration at localhost:3320...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3320

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3320'. Use the localAddress option to override.

* Checking connectivity and SSL configuration...

A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3320 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3320 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3320 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 73.82 MB transferred in about 1 second (~73.82 MB/s)

State recovery already finished for '127.0.0.1:3320'

The instance '127.0.0.1:3320' was successfully added to the cluster.

✅ Instância 3320 adicionada ao cluster (2/3)
ℹ️  Aguardando sincronização da instância 3320...
⏳ Aguardando 20 segundos...
✅ Instância 3320 está ONLINE no cluster
ℹ️  Adicionando instância 3330 ao cluster (tentativa 1/3)...

Clone based recovery selected through the recoveryMethod option

Validating instance configuration at localhost:3330...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3330

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:3330'. Use the localAddress option to override.

* Checking connectivity and SSL configuration...

A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3330 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3330 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3330 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 73.84 MB transferred in about 1 second (~73.84 MB/s)

State recovery already finished for '127.0.0.1:3330'

The instance '127.0.0.1:3330' was successfully added to the cluster.

✅ Instância 3330 adicionada ao cluster (3/3)
ℹ️  Aguardando sincronização da instância 3330...
⏳ Aguardando 20 segundos...
✅ Instância 3330 está ONLINE no cluster
ℹ️  Total de instâncias secundárias adicionadas: 3/3
ℹ️  Aguardando sincronização completa do cluster...
⏳ Aguardando 15 segundos...
ℹ️  Verificando status do cluster após adição de instâncias...
ℹ️  Total de nós no cluster: 4

================================================================================
PHASE 5: CONFIGURAÇÃO DE PESOS DAS INSTÂNCIAS
================================================================================
Setting the value of 'memberWeight' to '100' in the instance: '127.0.0.1:3307' ...

Successfully set the value of 'memberWeight' to '100' in the cluster member: '127.0.0.1:3307'.
✅ Peso 100 configurado para instância 3307
Setting the value of 'memberWeight' to '60' in the instance: '127.0.0.1:3310' ...

Successfully set the value of 'memberWeight' to '60' in the cluster member: '127.0.0.1:3310'.
✅ Peso 60 configurado para instância 3310
Setting the value of 'memberWeight' to '40' in the instance: '127.0.0.1:3320' ...

Successfully set the value of 'memberWeight' to '40' in the cluster member: '127.0.0.1:3320'.
✅ Peso 40 configurado para instância 3320
Setting the value of 'memberWeight' to '20' in the instance: '127.0.0.1:3330' ...

Successfully set the value of 'memberWeight' to '20' in the cluster member: '127.0.0.1:3330'.
✅ Peso 20 configurado para instância 3330
✅ Configuração de pesos concluída

================================================================================
PHASE 5.5: CRIAÇÃO DE USUÁRIOS DE REPLICAÇÃO
================================================================================
ℹ️  Criando usuário de replicação na instância primária (3307)...
✅ Usuário de replicação criado com sucesso na instância primária
ℹ️  Aguardando propagação do usuário para os nós secundários...
⏳ Aguardando 5 segundos...
✅ Usuário de replicação confirmado no nó 3310
✅ Usuário de replicação confirmado no nó 3320
✅ Usuário de replicação confirmado no nó 3330

================================================================================
PHASE 6: CONFIGURAÇÃO DAS RÉPLICAS DE LEITURA
================================================================================
ℹ️  Verificando nós disponíveis no cluster para réplicas...
ℹ️  Processando réplica 3340 para fonte 3307...
ℹ️  Nó fonte 3307 está ONLINE, criando réplica 3340...
ℹ️  - Criando instância réplica 3340...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3340

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3340 successfully deployed and started.
Use shell.connect('root@localhost:3340') to connect to the instance.

ℹ️  - Configurando instância réplica 3340...
Configuring local MySQL instance listening at port 3340 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3340
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3340' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3340 como réplica de leitura anexada ao nó 3307...
Setting up '127.0.0.1:3340' as a Read Replica of Cluster 'my-cluster-db-v5'.

Validating instance configuration at localhost:3340...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3340

Instance configuration is suitable.
* Checking transaction state of the instance...


Clone based recovery selected through the recoveryMethod option

* Checking connectivity and SSL configuration...

Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3340 is being cloned from 127.0.0.1:3307
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
* Clone process has finished: 73.84 MB transferred in about 1 second (~73.84 MB/s)

* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3340 to 127.0.0.1:3307

* Waiting for Read-Replica '127.0.0.1:3340' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%



'127.0.0.1:3340' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.

✅ Réplica 3340 configurada e anexada ao nó 3307 (1/4)
⏳ Aguardando 10 segundos...
ℹ️  Processando réplica 3350 para fonte 3310...
ℹ️  Nó fonte 3310 está ONLINE, criando réplica 3350...
ℹ️  - Criando instância réplica 3350...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3350

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3350 successfully deployed and started.
Use shell.connect('root@localhost:3350') to connect to the instance.

ℹ️  - Configurando instância réplica 3350...
Configuring local MySQL instance listening at port 3350 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3350
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3350' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
ℹ️  - Desabilitando temporariamente super-read-only no nó 3310...
✅ Super-read-only desabilitado temporariamente no nó 3310
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3350 como réplica de leitura anexada ao nó 3310...
Setting up '127.0.0.1:3350' as a Read Replica of Cluster 'my-cluster-db-v5'.

Validating instance configuration at localhost:3350...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3350

Instance configuration is suitable.
* Checking transaction state of the instance...
NOTE: A GTID set check of the MySQL instance at '127.0.0.1:3350' determined that it is missing transactions that were purged from all cluster members.
NOTE: The target instance '127.0.0.1:3350' has not been pre-provisioned (GTID set is empty). The Shell is unable to determine whether the instance has pre-existing data that would be overwritten with clone based recovery.

Clone based recovery selected through the recoveryMethod option

* Checking connectivity and SSL configuration...

* Waiting for the donor to synchronize with PRIMARY...
** Transactions replicated  ############################################################  100%



Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3350 is being cloned from 127.0.0.1:3310
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
* Clone process has finished: 74.89 MB transferred in about 1 second (~74.89 MB/s)

* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3350 to 127.0.0.1:3310

* Waiting for Read-Replica '127.0.0.1:3350' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%



'127.0.0.1:3350' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.

✅ Réplica 3350 configurada e anexada ao nó 3310 (2/4)
⏳ Aguardando 10 segundos...
ℹ️  - Reabilitando super-read-only no nó 3310...
✅ Super-read-only reabilitado no nó 3310
ℹ️  Processando réplica 3360 para fonte 3320...
ℹ️  Nó fonte 3320 está ONLINE, criando réplica 3360...
ℹ️  - Criando instância réplica 3360...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3360

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3360 successfully deployed and started.
Use shell.connect('root@localhost:3360') to connect to the instance.

ℹ️  - Configurando instância réplica 3360...
Configuring local MySQL instance listening at port 3360 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3360
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3360' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
ℹ️  - Desabilitando temporariamente super-read-only no nó 3320...
✅ Super-read-only desabilitado temporariamente no nó 3320
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3360 como réplica de leitura anexada ao nó 3320...
Setting up '127.0.0.1:3360' as a Read Replica of Cluster 'my-cluster-db-v5'.

Validating instance configuration at localhost:3360...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3360

Instance configuration is suitable.
* Checking transaction state of the instance...
NOTE: A GTID set check of the MySQL instance at '127.0.0.1:3360' determined that it is missing transactions that were purged from all cluster members.
NOTE: The target instance '127.0.0.1:3360' has not been pre-provisioned (GTID set is empty). The Shell is unable to determine whether the instance has pre-existing data that would be overwritten with clone based recovery.

Clone based recovery selected through the recoveryMethod option

* Checking connectivity and SSL configuration...

* Waiting for the donor to synchronize with PRIMARY...
** Transactions replicated  ############################################################  100%



Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3360 is being cloned from 127.0.0.1:3320
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
* Clone process has finished: 74.87 MB transferred in about 1 second (~74.87 MB/s)

* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3360 to 127.0.0.1:3320

* Waiting for Read-Replica '127.0.0.1:3360' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%



'127.0.0.1:3360' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.

✅ Réplica 3360 configurada e anexada ao nó 3320 (3/4)
⏳ Aguardando 10 segundos...
ℹ️  - Reabilitando super-read-only no nó 3320...
✅ Super-read-only reabilitado no nó 3320
ℹ️  Processando réplica 3370 para fonte 3330...
ℹ️  Nó fonte 3330 está ONLINE, criando réplica 3370...
ℹ️  - Criando instância réplica 3370...
A new MySQL sandbox instance will be created on this host in
/Users/acaciolr/mysql-sandboxes/3370

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.


Deploying new MySQL instance...

Instance localhost:3370 successfully deployed and started.
Use shell.connect('root@localhost:3370') to connect to the instance.

ℹ️  - Configurando instância réplica 3370...
Configuring local MySQL instance listening at port 3370 for use in an InnoDB Cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3370
Assuming full account name 'root'@'%' for root
User 'root'@'%' already exists and will not be created.

applierWorkerThreads will be set to the default value of 4.

The instance '127.0.0.1:3370' is valid for InnoDB Cluster usage.

Successfully enabled parallel appliers.
ℹ️  - Desabilitando temporariamente super-read-only no nó 3330...
✅ Super-read-only desabilitado temporariamente no nó 3330
⏳ Aguardando 3 segundos...
ℹ️  - Adicionando 3370 como réplica de leitura anexada ao nó 3330...
Setting up '127.0.0.1:3370' as a Read Replica of Cluster 'my-cluster-db-v5'.

Validating instance configuration at localhost:3370...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3370

Instance configuration is suitable.
* Checking transaction state of the instance...
NOTE: A GTID set check of the MySQL instance at '127.0.0.1:3370' determined that it is missing transactions that were purged from all cluster members.
NOTE: The target instance '127.0.0.1:3370' has not been pre-provisioned (GTID set is empty). The Shell is unable to determine whether the instance has pre-existing data that would be overwritten with clone based recovery.

Clone based recovery selected through the recoveryMethod option

* Checking connectivity and SSL configuration...

* Waiting for the donor to synchronize with PRIMARY...
** Transactions replicated  ############################################################  100%



Monitoring Clone based state recovery of the new member. Press ^C to abort the operation.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3370 is being cloned from 127.0.0.1:3330
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
* Clone process has finished: 74.89 MB transferred in about 1 second (~74.89 MB/s)

* Configuring Read-Replica managed replication channel...
** Changing replication source of 127.0.0.1:3370 to 127.0.0.1:3330

* Waiting for Read-Replica '127.0.0.1:3370' to synchronize with Cluster...
** Transactions replicated  ############################################################  100%



'127.0.0.1:3370' successfully added as a Read-Replica of Cluster 'my-cluster-db-v5'.

✅ Réplica 3370 configurada e anexada ao nó 3330 (4/4)
⏳ Aguardando 10 segundos...
ℹ️  - Reabilitando super-read-only no nó 3330...
✅ Super-read-only reabilitado no nó 3330
ℹ️  Verificando configuração final de super-read-only...
ℹ️  Super-read-only já está habilitado no nó 3310
ℹ️  Super-read-only já está habilitado no nó 3320
ℹ️  Super-read-only já está habilitado no nó 3330
================================================================================
PHASE 7: VERIFICAÇÃO FINAL E STATUS
================================================================================
ℹ️  Aguardando estabilização final...
⏳ Aguardando 15 segundos...

📊 STATUS COMPLETO DO CLUSTER:
=======================================================================
{
  &quot;clusterName&quot;: &quot;my-cluster-db-v5&quot;,
  &quot;defaultReplicaSet&quot;: {
    &quot;GRProtocolVersion&quot;: &quot;8.0.27&quot;,
    &quot;communicationStack&quot;: &quot;MYSQL&quot;,
    &quot;groupName&quot;: &quot;650a7be8-9275-11f0-8693-735b6f1b3cd9&quot;,
    &quot;groupViewChangeUuid&quot;: &quot;AUTOMATIC&quot;,
    &quot;groupViewId&quot;: &quot;17579693355065660:10&quot;,
    &quot;name&quot;: &quot;default&quot;,
    &quot;paxosSingleLeader&quot;: &quot;OFF&quot;,
    &quot;primary&quot;: &quot;127.0.0.1:3307&quot;,
    &quot;ssl&quot;: &quot;REQUIRED&quot;,
    &quot;status&quot;: &quot;OK&quot;,
    &quot;statusText&quot;: &quot;Cluster is ONLINE and can tolerate up to ONE failure.&quot;,
    &quot;topology&quot;: {
      &quot;127.0.0.1:3307&quot;: {
        &quot;address&quot;: &quot;127.0.0.1:3307&quot;,
        &quot;applierWorkerThreads&quot;: 4,
        &quot;fenceSysVars&quot;: &#x5B;],
        &quot;memberId&quot;: &quot;499c9602-9275-11f0-b1ea-d038aaac61de&quot;,
        &quot;memberRole&quot;: &quot;PRIMARY&quot;,
        &quot;memberState&quot;: &quot;ONLINE&quot;,
        &quot;mode&quot;: &quot;R/W&quot;,
        &quot;readReplicas&quot;: {
          &quot;Replica_Primary_3307&quot;: {
            &quot;address&quot;: &quot;127.0.0.1:3340&quot;,
            &quot;applierStatus&quot;: &quot;APPLIED_ALL&quot;,
            &quot;applierThreadState&quot;: &quot;Waiting for an event from Coordinator&quot;,
            &quot;applierWorkerThreads&quot;: 4,
            &quot;receiverStatus&quot;: &quot;ON&quot;,
            &quot;receiverThreadState&quot;: &quot;Waiting for source to send event&quot;,
            &quot;replicationLag&quot;: &quot;applier_queue_applied&quot;,
            &quot;replicationSources&quot;: &#x5B;
              &quot;127.0.0.1:3307&quot;
            ],
            &quot;replicationSsl&quot;: &quot;TLS_AES_128_GCM_SHA256 TLSv1.3&quot;,
            &quot;role&quot;: &quot;READ_REPLICA&quot;,
            &quot;status&quot;: &quot;ONLINE&quot;,
            &quot;version&quot;: &quot;8.4.3&quot;
          }
        },
        &quot;replicationLag&quot;: &quot;applier_queue_applied&quot;,
        &quot;role&quot;: &quot;HA&quot;,
        &quot;status&quot;: &quot;ONLINE&quot;,
        &quot;version&quot;: &quot;8.4.3&quot;
      },
      &quot;127.0.0.1:3310&quot;: {
        &quot;address&quot;: &quot;127.0.0.1:3310&quot;,
        &quot;applierWorkerThreads&quot;: 4,
        &quot;fenceSysVars&quot;: &#x5B;
          &quot;read_only&quot;,
          &quot;super_read_only&quot;
        ],
        &quot;memberId&quot;: &quot;51138f80-9275-11f0-b352-d6511470f888&quot;,
        &quot;memberRole&quot;: &quot;SECONDARY&quot;,
        &quot;memberState&quot;: &quot;ONLINE&quot;,
        &quot;mode&quot;: &quot;R/O&quot;,
        &quot;readReplicas&quot;: {
          &quot;Replica_Secondary_3310&quot;: {
            &quot;address&quot;: &quot;127.0.0.1:3350&quot;,
            &quot;applierStatus&quot;: &quot;APPLIED_ALL&quot;,
            &quot;applierThreadState&quot;: &quot;Waiting for an event from Coordinator&quot;,
            &quot;applierWorkerThreads&quot;: 4,
            &quot;receiverStatus&quot;: &quot;ON&quot;,
            &quot;receiverThreadState&quot;: &quot;Waiting for source to send event&quot;,
            &quot;replicationLag&quot;: &quot;applier_queue_applied&quot;,
            &quot;replicationSources&quot;: &#x5B;
              &quot;127.0.0.1:3310&quot;
            ],
            &quot;role&quot;: &quot;READ_REPLICA&quot;,
            &quot;status&quot;: &quot;ONLINE&quot;,
            &quot;version&quot;: &quot;8.4.3&quot;
          }
        },
        &quot;replicationLag&quot;: &quot;applier_queue_applied&quot;,
        &quot;role&quot;: &quot;HA&quot;,
        &quot;status&quot;: &quot;ONLINE&quot;,
        &quot;version&quot;: &quot;8.4.3&quot;
      },
      &quot;127.0.0.1:3320&quot;: {
        &quot;address&quot;: &quot;127.0.0.1:3320&quot;,
        &quot;applierWorkerThreads&quot;: 4,
        &quot;fenceSysVars&quot;: &#x5B;
          &quot;read_only&quot;,
          &quot;super_read_only&quot;
        ],
        &quot;memberId&quot;: &quot;5749c8e2-9275-11f0-a8ca-524150cc11ed&quot;,
        &quot;memberRole&quot;: &quot;SECONDARY&quot;,
        &quot;memberState&quot;: &quot;ONLINE&quot;,
        &quot;mode&quot;: &quot;R/O&quot;,
        &quot;readReplicas&quot;: {
          &quot;Replica_Tertiary_3320&quot;: {
            &quot;address&quot;: &quot;127.0.0.1:3360&quot;,
            &quot;applierStatus&quot;: &quot;APPLIED_ALL&quot;,
            &quot;applierThreadState&quot;: &quot;Waiting for an event from Coordinator&quot;,
            &quot;applierWorkerThreads&quot;: 4,
            &quot;receiverStatus&quot;: &quot;ON&quot;,
            &quot;receiverThreadState&quot;: &quot;Waiting for source to send event&quot;,
            &quot;replicationLag&quot;: &quot;applier_queue_applied&quot;,
            &quot;replicationSources&quot;: &#x5B;
              &quot;127.0.0.1:3320&quot;
            ],
            &quot;role&quot;: &quot;READ_REPLICA&quot;,
            &quot;status&quot;: &quot;ONLINE&quot;,
            &quot;version&quot;: &quot;8.4.3&quot;
          }
        },
        &quot;replicationLag&quot;: &quot;applier_queue_applied&quot;,
        &quot;role&quot;: &quot;HA&quot;,
        &quot;status&quot;: &quot;ONLINE&quot;,
        &quot;version&quot;: &quot;8.4.3&quot;
      },
      &quot;127.0.0.1:3330&quot;: {
        &quot;address&quot;: &quot;127.0.0.1:3330&quot;,
        &quot;applierWorkerThreads&quot;: 4,
        &quot;fenceSysVars&quot;: &#x5B;
          &quot;read_only&quot;,
          &quot;super_read_only&quot;
        ],
        &quot;memberId&quot;: &quot;5daecdfe-9275-11f0-8a3f-de7098aa8dc1&quot;,
        &quot;memberRole&quot;: &quot;SECONDARY&quot;,
        &quot;memberState&quot;: &quot;ONLINE&quot;,
        &quot;mode&quot;: &quot;R/O&quot;,
        &quot;readReplicas&quot;: {
          &quot;Replica_Quaternary_3330&quot;: {
            &quot;address&quot;: &quot;127.0.0.1:3370&quot;,
            &quot;applierStatus&quot;: &quot;APPLIED_ALL&quot;,
            &quot;applierThreadState&quot;: &quot;Waiting for an event from Coordinator&quot;,
            &quot;applierWorkerThreads&quot;: 4,
            &quot;receiverStatus&quot;: &quot;ON&quot;,
            &quot;receiverThreadState&quot;: &quot;Waiting for source to send event&quot;,
            &quot;replicationLag&quot;: &quot;applier_queue_applied&quot;,
            &quot;replicationSources&quot;: &#x5B;
              &quot;127.0.0.1:3330&quot;
            ],
            &quot;role&quot;: &quot;READ_REPLICA&quot;,
            &quot;status&quot;: &quot;ONLINE&quot;,
            &quot;version&quot;: &quot;8.4.3&quot;
          }
        },
        &quot;replicationLag&quot;: &quot;applier_queue_applied&quot;,
        &quot;role&quot;: &quot;HA&quot;,
        &quot;status&quot;: &quot;ONLINE&quot;,
        &quot;version&quot;: &quot;8.4.3&quot;
      }
    },
    &quot;topologyMode&quot;: &quot;Single-Primary&quot;
  },
  &quot;groupInformationSourceMember&quot;: &quot;127.0.0.1:3307&quot;,
  &quot;metadataVersion&quot;: &quot;2.3.0&quot;
}
🎯 ANÁLISE DO STATUS:
• Status Geral: OK
• Modo: Single-Primary
• SSL Mode: REQUIRED
📊 RESUMO POR STATUS:
• ONLINE: 4 instância(s)
📈 ESTATÍSTICAS DO CLUSTER:
• Nós ONLINE no cluster: 4
• Total de réplicas de leitura: 4
• Tolerância a falhas: SIM
📚 RÉPLICAS DE LEITURA ANEXADAS:
  • 127.0.0.1:3307 → Replica_Primary_3307 (ONLINE)
  • 127.0.0.1:3310 → Replica_Secondary_3310 (ONLINE)
  • 127.0.0.1:3320 → Replica_Tertiary_3320 (ONLINE)
  • 127.0.0.1:3330 → Replica_Quaternary_3330 (ONLINE)
🔗 TESTE DE CONECTIVIDADE:
=======================================================================
✅ Porta 3307: Conectividade OK - Server ID: 4288433930
✅ Porta 3310: Conectividade OK - Server ID: 646259890
✅ Porta 3320: Conectividade OK - Server ID: 3535963276
✅ Porta 3330: Conectividade OK - Server ID: 1379475883
🔗 TESTE DE CONECTIVIDADE DAS RÉPLICAS:
=======================================================================
✅ Réplica 3340: Conectividade OK - Server ID: 1064444721
✅ Réplica 3350: Conectividade OK - Server ID: 2777852472
✅ Réplica 3360: Conectividade OK - Server ID: 304947572
✅ Réplica 3370: Conectividade OK - Server ID: 2936623678

================================================================================
🎉 CONFIGURAÇÃO CONCLUÍDA COM SUCESSO! 🎉
================================================================================
📋 RESUMO DA CONFIGURAÇÃO:
----------------------------------------------------------------------
• Cluster Name: my-cluster-db-v5
• Instâncias Primárias: 4 (3307, 3310, 3320, 3330)
• Réplicas de Leitura: 4 (3340, 3350, 3360, 3370)
• Total de Instâncias: 8
• Arquitetura: 4-Node Cluster + 4 Read Replicas (1:1)
🔗 MAPEAMENTO DE RÉPLICAS:
----------------------------------------------------------------------
• Nó 3307 → Réplica 3340 (Replica_Primary_3307)
• Nó 3310 → Réplica 3350 (Replica_Secondary_3310)
• Nó 3320 → Réplica 3360 (Replica_Tertiary_3320)
• Nó 3330 → Réplica 3370 (Replica_Quaternary_3330)
⚖️  PESOS CONFIGURADOS:
----------------------------------------------------------------------
• Porta 3307: Peso 100
• Porta 3310: Peso 60
• Porta 3320: Peso 40
• Porta 3330: Peso 20
🚀 PRÓXIMOS PASSOS:
----------------------------------------------------------------------
• Configurar MySQL Router para balanceamento de carga
• Implementar monitoramento e alertas
• Configurar backups automatizados
• Testar failover e recuperação
• Ajustar configurações de performance conforme necessário
💡 COMANDOS ÚTEIS:
----------------------------------------------------------------------
• Status do cluster: cluster.status({extended: true})
• Conectar ao cluster: shell.connect('root@localhost:3307')
• Obter cluster: dba.getCluster('my-cluster-db-v5')
• Rescan do cluster: cluster.rescan()
• Listar routers: cluster.listRouters()
📋 COMANDOS PARA MONITORAMENTO (macOS/Linux):
----------------------------------------------------------------------
# Monitorar log em tempo real:
tail -f /tmp/cluster_setup.log
# Verificar portas em uso:
lsof -i -P | grep LISTEN | grep :33
# Verificar processos MySQL:
ps aux | grep mysql
================================================================================
✅ ✨ Script executado com sucesso! ✨
================================================================================
</pre></div>
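<p>The two AdminAPI calls doing the heavy lifting in the log above &#8212; failover weighting (Phase 5) and attaching read replicas (Phase 6) &#8212; can be sketched like this in MySQL Shell (JavaScript mode). A minimal sketch: the ports, labels, and cluster name come from the log; it assumes an open session on the primary:</p>

<pre>// mysqlsh (JS mode): \connect root@127.0.0.1:3307
var cluster = dba.getCluster('my-cluster-db-v5');

// Phase 5: higher memberWeight = preferred candidate in automatic primary election
cluster.setInstanceOption('127.0.0.1:3307', 'memberWeight', 100);
cluster.setInstanceOption('127.0.0.1:3310', 'memberWeight', 60);

// Phase 6: attach a read replica pinned to one specific source node,
// provisioning it with a full clone from that source
cluster.addReplicaInstance('127.0.0.1:3340', {
    label: 'Replica_Primary_3307',
    replicationSources: ['127.0.0.1:3307'],
    recoveryMethod: 'clone'
});</pre>

<p>Note that <code>replicationSources</code> also accepts the keywords <code>"primary"</code> and <code>"secondary"</code> if you prefer the Shell to pick the source automatically instead of pinning it 1:1 as the script does.</p>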


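<p>The replication user created in Phase 5.5 isn't shown in the log; a typical account for this setup looks something like the SQL below. This is a sketch &#8212; the user name, host mask, and password are assumptions, not what the script actually runs:</p>

<pre>-- On the primary (3307); Group Replication propagates it to the secondaries
CREATE USER IF NOT EXISTS 'replica_user'@'%' IDENTIFIED BY '***';
GRANT REPLICATION SLAVE ON *.* TO 'replica_user'@'%';
-- BACKUP_ADMIN is required on the clone donor, since recoveryMethod 'clone' is used
GRANT BACKUP_ADMIN ON *.* TO 'replica_user'@'%';</pre>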
<p>I decided to share the idea rather than leave it in /dev/null, but there are still improvements to be made. As I refine it, I'll keep updating the script here; if you have improvements or suggestions, post them in the comments or reach out to me on social media and we can trade ideas, hahaha.</p>



<p>Cheers.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
