Wednesday, October 23, 2024

MYSQL::Setting up the Validate_Password component for MySQL Database to ensure password policy settings

Inadequate Password Settings for MySQL Database

We observed that the `validate_password%` settings on hostname `<insert hostname>` are empty. This indicates that the **validate_password component** is either **not installed, not enabled, or not properly configured**. As a result, the required **password complexity and password history settings** are not enforced. 

 ---

 ### **References** 

 - [MySQL Validate Password Installation](https://dev.mysql.com/doc/refman/5.7/en/validate-password-installation.html) 

- [Fix MySQL Error 1819: Password Policy Requirements](https://ostechnix.com/fix-mysql-error-1819-hy000-your-password-does-not-satisfy-the-current-policy-requirements/) 

- [Percona Blog on MySQL Password Security](https://www.percona.com/blog/improving-mysql-password-security-with-validation-plugin/) 

### **Action Plan for Password Policy Implementation** 

 1. **Take a backup of `my.ini`** before making changes on all relevant servers: 

   - **Servers:** `SLAVE1`, `SLAVE2`, and `.MASTER` 

 2. **Modify the `my.ini` file** on these servers with the following policy configuration: 

   ```ini
   # Password policy amended
   password_history=5
   password_reuse_interval=365
   validate_password.length=14
   validate_password.policy=STRONG
   ```

 3. **Install the `validate_password` component** **before restarting MySQL**, on the **slave servers first** and the **master last**: 

   ```sql
   mysql> SET GLOBAL super_read_only = 0;  -- on slaves only
   Query OK, 0 rows affected (0.00 sec)

   mysql> INSTALL COMPONENT 'file://component_validate_password';
   Query OK, 0 rows affected (0.04 sec)

   mysql> SET GLOBAL super_read_only = 1;  -- on slaves only
   Query OK, 0 rows affected (0.00 sec)
   ```

4. **Restart MySQL services** sequentially: 

   - **Restart Sequence**: 

     1. `SLAVE2` (first slave) 
     2. `SLAVE1` (second slave) 
     3. `.MASTER` (master, last)
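Before restarting, the `my.ini` edit from step 2 can be sanity-checked programmatically. The sketch below is illustrative only (the `[mysqld]` section header and the expected values are assumptions based on the config shown above); it is not part of MySQL.

```python
import configparser

# Required policy settings from step 2 (values as strings, as configparser reads them)
REQUIRED = {
    "password_history": "5",
    "password_reuse_interval": "365",
    "validate_password.length": "14",
    "validate_password.policy": "STRONG",
}

def missing_settings(ini_text):
    """Return the required keys that are absent or wrong in the [mysqld] section."""
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    section = parser["mysqld"] if parser.has_section("mysqld") else {}
    return [key for key, value in REQUIRED.items() if section.get(key) != value]

# Hypothetical my.ini fragment matching the policy above
sample = """
[mysqld]
# Password policy amended
password_history=5
password_reuse_interval=365
validate_password.length=14
validate_password.policy=STRONG
"""
print(missing_settings(sample))  # [] when every setting is present and correct
```

An empty list means the file already carries all four settings, so a restart will pick up the intended policy.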

 ---

 ### **Commands to Run at MySQL Prompt for Validation** 

 After restarting MySQL, run these commands to verify the configuration: 

 1. **Check the validate_password component installation**: 

   ```sql
   mysql> SHOW GLOBAL VARIABLES LIKE '%validate_password%';
   ```

   Example output: 

   ```
   +--------------------------------------+--------+
   | Variable_name                        | Value  |
   +--------------------------------------+--------+
   | validate_password.check_user_name    | ON     |
   | validate_password.dictionary_file    |        |
   | validate_password.length             | 14     |
   | validate_password.mixed_case_count   | 1      |
   | validate_password.number_count       | 1      |
   | validate_password.policy             | STRONG |
   | validate_password.special_char_count | 1      |
   +--------------------------------------+--------+
   7 rows in set (0.00 sec)
   ```

 2. **Check installed components**: 

   ```sql
   mysql> SELECT * FROM mysql.component;
   +--------------+--------------------+------------------------------------+
   | component_id | component_group_id | component_urn                      |
   +--------------+--------------------+------------------------------------+
   |            1 |                  1 | file://component_validate_password |
   +--------------+--------------------+------------------------------------+
   1 row in set (0.00 sec)
   ```

3. **Verify replication status and policy enforcement**: 

   ```sql
   mysql> SHOW GLOBAL VARIABLES LIKE '%read%';
   mysql> SHOW REPLICA STATUS\G
   mysql> SHOW GLOBAL VARIABLES LIKE '%validate_password%';
   ```
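As a mental model of what the settings above enforce, the check below approximates the `STRONG` policy with `validate_password.length=14`: minimum length plus at least one lowercase letter, uppercase letter, digit, and special character. This is a simplified sketch for illustration only; the real component also applies the `mixed_case_count`/`number_count`/`special_char_count` thresholds and the optional dictionary check.

```python
import string

def meets_strong_policy(password, min_length=14):
    """Rough approximation of validate_password.policy=STRONG with length=14:
    minimum length plus one of each character class. MySQL's dictionary-file
    check is intentionally omitted here."""
    return (
        len(password) >= min_length
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(meets_strong_policy("Summer2024"))          # False: only 10 characters
print(meets_strong_policy("C0rrect-Horse-Batt"))  # True: 18 chars, all classes
```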

---

 ### **Implementation Process & Timeline** 

Work was carried out on slave `SLAVE2` first, then slave `SLAVE1`, and finally on the master.

| Task | SLAVE1 (slave, 20 mins) | SLAVE2 (slave, 15 mins) | MASTERDB (master) |
|------|-------------------------|-------------------------|-------------------|
| Take backup of `my.ini` before the task (SLAVE1, SLAVE2 and MASTERDB) | done | done | done |
| Add the entries to `my.ini` (SLAVE1, SLAVE2, then MASTERDB): the `password_history`, `password_reuse_interval`, `validate_password.length`, and `validate_password.policy` settings shown in the action plan above | done | done | done |
| Run at the MySQL prompt on the slaves first, then on the master: `SET GLOBAL super_read_only = 0;` (slaves only), `INSTALL COMPONENT 'file://component_validate_password';`, `SET GLOBAL super_read_only = 1;` (slaves only), then `SHOW GLOBAL VARIABLES LIKE '%read%';` and `SHOW GLOBAL VARIABLES LIKE '%validate_password%';` | done | done | done |
| Restart MySQL | done | done | done |
| Verify installation: `SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE 'validate%';` and `SELECT * FROM mysql.component;` | done | done | done |
| Test a new user with a low-complexity password and confirm it is rejected | NA | NA | done |
| `SHOW REPLICA STATUS` | GREEN | GREEN | NA |

 ---

 Note:

Even without a dictionary file, password validation can still work effectively in MySQL.

The validate_password.dictionary_file option is used to specify a file containing a list of common passwords. MySQL will check new passwords against this dictionary and reject any that match. While using a dictionary file can provide an extra layer of protection, it's not strictly necessary for password validation to function.
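To see what the dictionary check adds, here is a simplified sketch of the idea: candidate passwords are compared case-insensitively against a word list (MySQL reads the list from `validate_password.dictionary_file` and checks substrings of the password against it; this illustration uses an in-memory set instead of a file, and the matching rules are approximate).

```python
def matches_dictionary(password, words):
    """Sketch of a dictionary check: flag the password if any substring of
    length 4 or more matches a dictionary word, case-insensitively."""
    lowered = password.lower()
    lowered_words = {w.lower() for w in words}
    for start in range(len(lowered)):
        for end in range(start + 4, len(lowered) + 1):
            if lowered[start:end] in lowered_words:
                return True
    return False

# Hypothetical word list standing in for the dictionary file
common = {"password", "qwerty", "letmein"}
print(matches_dictionary("MyPassword123!", common))  # True: contains "password"
print(matches_dictionary("Xk9#vQ2@pL5!", common))    # False
```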

 ### **Conclusion** 

 All necessary tasks to implement and enforce the **password complexity policy** on the MySQL database have been successfully completed. The **password history**, **reuse interval**, and **complexity requirements** are now in effect across all relevant servers, ensuring improved password security.


Tuesday, September 3, 2024

MYSQL::Setting Up Selective Table Replication from Master to Slave in MySQL with GTID or Non-GTID

Setting Up Selective Master-Slave Replication in MySQL with GTID

Introduction

This blog post will guide you through the process of setting up selective master-slave replication in MySQL, specifically when both the master and slave databases are using GTID (Global Transaction Identifier) replication. This technique allows you to replicate only specific tables from the master to the slave database, optimizing resource usage and improving performance.

Prerequisites:

  • A running MySQL master-slave replication setup with GTID enabled on both the master and the slave servers.
  • A list of tables that you want to replicate.

Steps:

  1. Backup Specific Tables:
    • Use mysqldump to export the selected tables from the production database:

mysqldump -u root -p --master-data --databases production_db --tables table1 table2 table3 ... table115 > dump.sql 

  2. Modify MySQL Configuration:
    • Open the MySQL configuration file (usually my.cnf on Linux or my.ini on Windows).
    • Under the [mysqld] section, add the following lines to specify the tables to replicate:

      replicate-do-table=production_db.table1
      replicate-do-table=production_db.table2
      # Add more tables as needed

    • Ensure that gtid_mode is set to ON and enforce_gtid_consistency is set to ON in the slave server's configuration (my.ini file).
    • Save the file and restart the new SLAVE MySQL server.
  3. Restore the Backup:
    • Import the exported dump file into the new SLAVE database:

mysql -u root -p demo_db < dump.sql

  4. Configure Replication:
    • Set up the replication configuration on the slave database:

CHANGE MASTER TO
  MASTER_HOST='production_host',
  MASTER_PORT=3306,
  MASTER_USER='repl_user',
  MASTER_PASSWORD='repl_password',
  MASTER_AUTO_POSITION=1,
  MASTER_SSL=1;

START SLAVE;

SELECT SLEEP(10);

SHOW REPLICA STATUS\G

    • Replace the placeholders with your actual production server details and replication credentials.
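To keep those placeholders in one place, the CHANGE MASTER TO statement can be assembled by a small helper. This is a convenience sketch (the function name and defaults are our own, not a MySQL API), and the host/user/password values remain placeholders to replace with real credentials.

```python
def build_change_master(host, user, password, port=3306, ssl=True):
    """Assemble the GTID-based CHANGE MASTER TO statement from the steps above."""
    return (
        "CHANGE MASTER TO "
        f"MASTER_HOST='{host}', "
        f"MASTER_PORT={port}, "
        f"MASTER_USER='{user}', "
        f"MASTER_PASSWORD='{password}', "
        "MASTER_AUTO_POSITION=1, "
        f"MASTER_SSL={1 if ssl else 0};"
    )

stmt = build_change_master("production_host", "repl_user", "repl_password")
print(stmt)
```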

Additional Considerations:

  • If you encounter the error ERROR 3546 (HY000) at line 24: @@GLOBAL.GTID_PURGED cannot be changed: the added gtid set must not overlap with @@GLOBAL.GTID_EXECUTED, you can try resetting the GTID state on the slave server with the RESET MASTER command before running the import command on the new slave database.
  • Ensure that the replication user has the necessary privileges on both the master and slave servers.
  • Consider using replication filters to further control which transactions are replicated.
  • MASTER_DELAY=7200 in MySQL replication sets a delay of 7200 seconds (2 hours) between the master and slave servers. This means that the slave will intentionally lag behind the master by 2 hours.
  • If we specify replicate-do-table only in mysql config file, it will replicate only those tables in the list, and ignore the rest. Behavior: Only the tables listed in replicate-do-table will be replicated. All other tables will be ignored.
  • If we specify replicate-ignore-table only in mysql config file, it will ignore those tables in the list, and replicate the rest. Behavior: All tables will be replicated except those listed in replicate-ignore-table.
  • For non-GTID replication, the replication command is instead: CHANGE MASTER TO MASTER_HOST='your_master_host', MASTER_PORT=3306, MASTER_USER='your_master_user', MASTER_PASSWORD='your_master_password', MASTER_LOG_FILE='<binlog_file>', MASTER_LOG_POS=<position>; The binary log file name and position can be fetched by opening the MySQL exported dump file from the source.
  • The SELECT @@GLOBAL.GTID_EXECUTED command in MySQL at new SLAVE database is used to retrieve the set of Global Transaction Identifiers (GTIDs) that have been executed on the current server. This information is particularly useful in GTID-based replication setups. 
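The replicate-do-table / replicate-ignore-table behavior described above can be summarized as a small decision function. This is a simplified model for plain db.table names only; MySQL's real filter evaluation also involves database-level options and wildcard table rules.

```python
def should_replicate(table, do_tables=None, ignore_tables=None):
    """Sketch of the filtering rules described above: if replicate-do-table
    entries exist, only those tables replicate; if only replicate-ignore-table
    entries exist, everything except those replicates."""
    if do_tables:
        return table in do_tables
    if ignore_tables:
        return table not in ignore_tables
    return True  # no filters: replicate everything

do_list = {"production_db.table1", "production_db.table2"}
print(should_replicate("production_db.table1", do_tables=do_list))  # True
print(should_replicate("production_db.table9", do_tables=do_list))  # False
```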

By following these steps and addressing the potential issues mentioned, you can successfully set up selective master-slave replication in MySQL with GTID, optimizing resource usage and improving performance.

 

 

Wednesday, May 15, 2024

MYSQL:::Keeping Your MySQL Database Secure: A Guide to Encryption Key Rotation

 Keeping Your MySQL Database Secure: A Guide to Encryption Key Rotation:

>>>>>>Minimize Disruptions, Maximize Security<<<<<<

This guide walks you through rotating the encryption key for your MySQL production database, ensuring optimal security with minimal impact on users and applications.

Why Rotate Your Encryption Key?

Think of your encryption key as a digital lock on your database. Regularly changing this key, just like changing a physical lock, enhances security. Even if an old key falls into the wrong hands, your data remains protected.

Rotating Your Key with Minimal Fuss

  1. Pick the Perfect Time: Schedule the key rotation during a period with low database activity, like off-peak hours. This keeps disruptions to a minimum.
  2. Execute the Rotation Command: On your primary database server, run this simple command:

ALTER INSTANCE ROTATE INNODB MASTER KEY;

  3. Automatic Replication: This command seamlessly replicates the key rotation to all connected servers in your MySQL Replication setup.

How Often Should You Rotate?

We recommend rotating your encryption key annually for optimal security.

Important Considerations:

  • Conflicting Operations on Hold: During rotation, MySQL puts a temporary pause on certain operations like CREATE TABLE ... ENCRYPTION or ALTER TABLE ... ENCRYPTION to prevent conflicts.
  • Brief Locking for Consistency: A short lock might occur during rotation to ensure data consistency. This shouldn't significantly impact performance.
  • Time Investment: Even for large databases, the rotation typically takes less than 2 minutes.

The Benefits of Key Rotation:

  • Enhanced Security: Regular rotation strengthens your database's defense by mitigating the risk of a compromised key.
  • Minimal Downtime: By choosing the right time window, you can rotate the key with minimal disruption to user experience.

By following these steps and considerations, you can successfully rotate your MySQL encryption key, keeping your database environment both secure and performant.

Ready to put these tips into action? Share your thoughts and experiences with encryption key rotation in the comments below!

#MySQL #DatabaseSecurity #Encryption

 

Wednesday, March 6, 2024

MYSQL:::Seamless Data Archiving: Exporting, Importing, and Pruning MySQL Tables

 

Seamless Data Archiving: Exporting, Importing, and Pruning MySQL Tables

Introduction: In the dynamic landscape of database management, effective data archiving is a crucial component for maintaining optimal performance and storage efficiency. This blog post outlines a seamless process for exporting a specific table from a production database, importing it into an archival server, and responsibly pruning records from the production database.

Export Script - Safely Extracting Data: The export process begins with a reliable MySQL export script, ensuring the safe extraction of data from the production database. The script, executed with the mysqldump command, exports a designated table (TABLENAME) from the specified production database (DATABASE_NAME). Noteworthy parameters include --single-transaction for consistent snapshots, --max-allowed-packet to accommodate large datasets, and --where to filter records based on the creation date.

mysqldump -umysqlbackup -p -P3306 DATABASE_NAME TABLENAME --single-transaction --max-allowed-packet=2G --no-create-info --set-gtid-purged=OFF --where="created_on < '2022-01-01'" > F:\FOREXPORTIMPORT_PROD_TO_ARCHIVAL\tablenamelessthan01012022.sql

  Import Script - Migrating to the Archival Server: Once the export is completed, the data is seamlessly imported into an archival server using the mysql command. The import script specifies the archival database (DATABASE_NAME) and utilizes the previously exported SQL file.

mysql -umysqlbackup -p -P3306 DATABASE_NAME --max-allowed-packet=1G < F:\FOREXPORTIMPORT_PROD_TO_ARCHIVAL\tablenamelessthan01012022.sql


If required, use the --force flag so the import continues past SQL errors (for example, when some of the rows already exist on the archival server).

 Validation and Verification: Before proceeding with any further actions, thorough validation and verification on the archival server are imperative. Confirm that the imported data aligns with expectations and meets the archival requirements. This step ensures the integrity of the data before any modifications on the production database.

Deleting Records from Production - A Step Towards Optimization: Once the archival server data is validated, the next step involves responsibly deleting records from the production database. The provided deletion script targets the specified table (TABLENAME) based on a conditional criterion involving the primary key (primaryKey_id) and a creation date constraint. 

In the production database, execute the following query:

SELECT primaryKey_id FROM TABLENAME WHERE DATE(created_on) < '2022-01-01';

 Copy the results of the query to Notepad++ and perform the following find-and-replace steps (after removing the column-header keywords) to generate the delete scripts:

  1. Press CTRL+H.
  2. Enter ^ in the "Find what" field.
  3. Enter DELETE FROM TABLENAME WHERE primaryKey_id = in the "Replace with" field.
  4. Choose "Regular expression" as the search mode.
  5. Click "Replace All".

After completing these steps in Notepad++, you can run the delete scripts using a batch approach rather than executing all of them at once. This ensures a more controlled and manageable process.

DELETE FROM TABLENAME WHERE primaryKey_id = <value> AND DATE(created_on) < '2022-01-01';
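As an alternative to the Notepad++ find-and-replace, a short script can generate the same deletes in manageable batches, using IN lists so each statement removes a bounded number of rows. This is a sketch: the table and column names mirror the placeholders used above, and the batch size of 1000 is an arbitrary choice.

```python
def batched_delete_statements(ids, table="TABLENAME", key="primaryKey_id",
                              cutoff="2022-01-01", batch_size=1000):
    """Yield batched DELETE statements for the given primary-key ids,
    keeping the creation-date guard from the manual script above."""
    for start in range(0, len(ids), batch_size):
        chunk = ids[start:start + batch_size]
        id_list = ", ".join(str(i) for i in chunk)
        yield (f"DELETE FROM {table} WHERE {key} IN ({id_list}) "
               f"AND DATE(created_on) < '{cutoff}';")

# 2500 hypothetical ids -> 3 batches of at most 1000
stmts = list(batched_delete_statements(list(range(1, 2501)), batch_size=1000))
print(len(stmts))  # 3
```

Running the batches one at a time (rather than a single huge DELETE) keeps transactions small, which matches the "batch approach" recommended above.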

 Collaboration with Application Team: It is crucial to collaborate closely with the application team before executing any deletion scripts. Ensuring concurrence and alignment with business requirements minimizes the risk of unintentional data loss and guarantees a smooth transition to an optimized production database.

Here are essential considerations both before and after executing delete operations in a production database as part of the archival process.

Before Deletion:

  1. Timing Matters: Schedule delete operations during off-business hours to minimize the impact on ongoing operations. This ensures that the deletion process does not interfere with critical business activities and guarantees a smoother transition.

After Deletion:

  1. Table Defragmentation and Rebuilding: Following the deletion of significant amounts of data, it's essential to reclaim the space and optimize table performance. Execute the following SQL command during off-business hours to defragment and rebuild tables:

     ALTER TABLE tablename ENGINE=InnoDB;

     This command helps reclaim the deleted space within the table, optimizing storage on the underlying drive.

  2. Monitor Binary Log Files: Keep a close eye on the growth of binary log files, especially in production and any relevant slave database servers. Monitoring and managing binary logs are crucial for maintaining database integrity and preventing potential issues related to file size and storage. Regularly purge old binary log files using the following MySQL command:

     PURGE BINARY LOGS BEFORE 'yyyy-mm-dd hh:mm:ss';

     This helps manage the size of binary logs and prevents unnecessary disk space consumption.

  3. Rebuilding Tables with Partitions: If the deleted table has partitions, a specialized approach is required to rebuild the tables effectively. Identify the partition name using the following command:

     SHOW CREATE TABLE tablename;

     Once you have the partition name, use the following SQL command to rebuild the table:

     ALTER TABLE tablename REBUILD PARTITION partition_name;
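The PURGE BINARY LOGS cutoff above can be computed rather than typed by hand. The sketch below builds the statement for a given retention window; the 7-day retention is an illustrative assumption, not a MySQL default, so adjust it to your own binlog retention policy.

```python
from datetime import datetime, timedelta

def purge_binlog_statement(retention_days=7, now=None):
    """Build the PURGE BINARY LOGS statement with a cutoff of
    `retention_days` before the given time (defaults to the current time)."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=retention_days)
    return f"PURGE BINARY LOGS BEFORE '{cutoff.strftime('%Y-%m-%d %H:%M:%S')}';"

print(purge_binlog_statement(7, datetime(2024, 3, 10, 2, 0, 0)))
# PURGE BINARY LOGS BEFORE '2024-03-03 02:00:00';
```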

Conclusion: In this comprehensive guide, we've explored a structured approach to exporting, importing, and pruning MySQL tables for effective data archiving. By following these steps meticulously, database administrators can maintain a well-organized, efficient database environment while preserving data integrity throughout the process; the rebuild step is particularly crucial for maintaining optimal performance when dealing with partitioned tables.

Effectively managing data in a production database requires a meticulous approach to deletion processes. By considering the timing of deletions, implementing post-deletion optimization strategies, and monitoring critical aspects like binary logs and partitions, database administrators can ensure a seamless archival process without compromising data integrity or system performance. Adopting these best practices contributes to a well-maintained and optimized database environment.

 
