Monday, June 2, 2025

[ORACLE] Quick tip - Changing SYS password in Exadata Cloud at Customer with a DR environment

Hello, everyone
 
How are you doing?
 
Today, I have a quick tip for those of you who work with Exadata Cloud at Customer.
 
And yes, you can do this in the user interface, but from the command line it's much cooler :) 
 
 
First of all, what is dbaascli?
 
In a simple definition, it is a command-line utility that lets you interact with your databases in a simple and straightforward way in Exadata Cloud at Customer and Exadata Cloud Service environments.

You can perform operations such as patching, database creation, upgrades, backup and recovery, status monitoring, changing user passwords, and more. 
 
Today, I'll show you how to change the SYS password in Exadata Cloud at Customer and in a DR environment running on Exadata Cloud Service.
 
So, let's get started!

dbaascli database changePassword --help
 
DBAAS CLI version 25.1.2.0.0
Executing command database changePassword --help
    database changePassword - change the password for a given user in the database.

      Usage: dbaascli database changePassword --dbname <value> [--user <value>]
        {
            [--prepareStandbyBlob <value> [--blobLocation <value>]]
            | [--standbyBlobFromPrimary <value>]
        }
        [--resume [--sessionID <value>]] [--executePrereqs] [--waitForCompletion <value>]

      Where:
        --dbname - Oracle database name.
        [--user - username whose password change is required. ]
        [--prepareStandbyBlob | --standbyBlobFromPrimary]
        [--prepareStandbyBlob - specifies true to generate a blob file containing the artifacts needed to perform the operation in a DG environment. ]
            [--blobLocation - custom directory location where the standby blob file will be generated in a DG environment.]
        [--standbyBlobFromPrimary - specifies the location of the standby blob file which is prepared from the primary database. This is required only for standby operations. ]
        [--resume - to resume the previous operation]
            [--sessionID - to resume a specific session id.]
        [--executePrereqs - run the prerequisite checks and report the results. ]
        [--waitForCompletion - specifies false to run the operation in background. Valid values : true|false.]
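Before changing anything, you can dry-run the prerequisite checks with the --executePrereqs flag shown in the help above. A minimal sketch, reusing the database name from this post's example:

```shell
# Run only the prerequisite checks for the SYS password change and
# report the results; no password is changed by this command.
dbaascli database changePassword --dbname mdtrj --user SYS --executePrereqs
```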
 
 
 
Now, changing the SYS password is simple. As the root user: 
 
dbaascli database changePassword --dbname mdtrj --user SYS --prepareStandbyBlob true --blobLocation /u02/app/oracle/product/12.1.0/dbhome_6/dbs
 
Log: 
DBAAS CLI version 25.1.1.0.0
Executing command database changePassword --dbname mdtrj --user SYS --prepareStandbyBlob true --blobLocation /u02/app/oracle/product/12.1.0/dbhome_6/dbs
Job id: 62a5866c-67d9-4da1-b164-d3dbc877ae0f
Session log: /var/opt/oracle/log/mdtrj/database/changePassword/dbaastools_2025-05-12_04-51-59-PM_255992.log
 
Enter new password:
Enter new password (reconfirmation):

 
Loading PILOT...
Session ID of the current execution is: 43672
Log file location: /var/opt/oracle/log/mdtrj/database/changePassword/pilot_2025-05-12_04-52-10-PM_259934
-----------------
Running Plugin_initialization job
Enter new Password *************
Completed Plugin_initialization job
-----------------
Running DB_change_password_prechecks job
Completed DB_change_password_prechecks job
-----------------
Running Extract_blob_in_standby job
Skipping. Job is detected as not applicable.
-----------------
Running Copy_blob_contents_in_standby job
Skipping. Job is detected as not applicable.
-----------------
Running Change_password job
Completed Change_password job
-----------------
Running Update_db_wallet job
Completed Update_db_wallet job
-----------------
Running Prepare_blob_for_standby_in_primary job
Completed Prepare_blob_for_standby_in_primary job
---------- PLUGIN NOTES ----------
Note: Change password operation has run on the primary database. In order to successfully complete the operation, the file /u02/app/oracle/product/12.1.0/dbhome_6/dbs/mdtrj_2025-05-12_04-52-10-PM_259934.tar needs to be copied to the standby database node and run the change password operation on all standby databases by specifying the copied file
---------- END OF PLUGIN NOTES ----------
dbaascli execution completed

 
Copy the TAR file created in /u02/app/oracle/product/12.1.0/dbhome_6/dbs to the DR environment. 
 
And execute on DR: 
 
dbaascli database changePassword --dbName mdtrj --user SYS 
--standbyBlobFromPrimary /home/oracle/covek/mdtrj_2025-05-12_04-52-10-PM_259934.tar
 
Log: 
DBAAS CLI version 25.1.2.0.0
Executing command database changePassword --dbName mdtrj --user SYS --standbyBlobFromPrimary /home/oracle/covek/mdtrj_2025-05-12_04-52-10-PM_259934.tar
Job id: 67722d25-f4ff-4bfe-9a80-d20780099caa
Session log: /var/opt/oracle/log/mdtrj/database/changePassword/dbaastools_2025-05-12_05-26-42-PM_390865.log
 
Enter new password:
Enter new password (reconfirmation):

Loading PILOT...
Session ID of the current execution is: 41121
Log file location: /var/opt/oracle/log/mdtrj/database/changePassword/pilot_2025-05-12_05-26-53-PM_395770
-----------------
Running Plugin_initialization job
Enter new password ******
Completed Plugin_initialization job
-----------------
Running DB_change_password_prechecks job
Completed DB_change_password_prechecks job
-----------------
Running Extract_blob_in_standby job
Completed Extract_blob_in_standby job
-----------------
Running Copy_blob_contents_in_standby job
Completed Copy_blob_contents_in_standby job
-----------------
Running Change_password job
Skipping. Job is detected as not applicable.
-----------------
Running Update_db_wallet job
Completed Update_db_wallet job
-----------------
Running Prepare_blob_for_standby_in_primary job
Skipping. Job is detected as not applicable.

dbaascli execution completed


And that's it, simple as that.
 
I hope this helps you. 
 
Regards
Mario 

Monday, April 14, 2025

[Oracle] GoldenGate for Distributed Applications and Analytics (GG for DAA) & Iceberg replication

Hello everyone.
 
How are you doing?
 
I was talking to Alex Lima, Oracle GoldenGate Product Manager, today, and he suggested I take a look at the new OGG for DAA. 
 
You already know that GG isn't just about replicating data from Oracle databases to Oracle databases, right? And did you know that it's not just limited to transactional databases?

Now, did you know that you can replicate tables in Iceberg format using GG for DAA from version 23.7?

 
That's right, my little master, you can. But first, what is Iceberg?

In a nutshell, Apache Iceberg is an open-source table format designed for large-scale analytics in data lakes. In other words, Iceberg is a high-performance table format for extremely large analytical tables, designed to provide scalable and efficient data management.

Iceberg brings the reliability and simplicity of SQL tables to GG for DAA, while enabling engines such as Spark, Trino, Flink, Presto, Hive and Impala to work securely with the same tables at the same time.

And how can I do that? By using GG for DAA Handlers.

GG for DAA Handlers are native source and destination connectors for message streaming data/delta lake, cloud warehouse and NoSQL database technologies. They provide low-impact capture and real-time data ingestion capabilities with high accuracy and data throughput.
 
The OGG for DAA can be configured to work with any of the formats supported by Iceberg:
  • Parquet
  • Avro
  • ORC
The default file format for Iceberg data files is Parquet.
 
The following Iceberg catalogs are also supported:
  • Hadoop Catalog
  • Nessie Catalog
  • AWS Glue Catalog
  • Polaris Catalog
  • REST Catalog
  • JDBC Catalog

And the following operations are supported as well:
  • INSERT: Generates files for insert operations.
  • UPDATE: Generates data files and delete files for update operations.
  • DELETE: Generates delete files for delete operations.
  • TRUNCATE: Generates a delete file with a condition of always true to truncate the target table.

We can also work with compressed update handling. Oracle GoldenGate trails can contain compressed or uncompressed update records. A compressed update record (the default) in the Oracle GoldenGate trail file contains values for the key columns and the modified columns. If we work in uncompressed format, we have values for all columns.

Oracle GoldenGate Iceberg Replicat can also replicate GoldenGate trail records to Iceberg tables. The files can be written to local files, AWS S3, Google Cloud Storage (GCS) or Azure DataLake Storage (ADLS). 

Another very interesting point is delete files and Merge-on-Read (MoR). Oracle GoldenGate generates Iceberg delete files for UPDATE and DELETE operations. To do this, the write.update.mode property of the Iceberg table is set to merge-on-read.

Iceberg supports two types of delete files:
  • Equality deletes: the deleted records are identified by the equality of the values in the columns specified in the delete file.
  • Positional deletes: the deleted records are identified by the position of the records in the Iceberg data file.
Currently, Oracle GoldenGate uses Iceberg equality deletes to delete records from the Iceberg table.

One point to watch out for is primary key updates with missing column values. These force files to be written to the Iceberg table before the configured flush interval, potentially resulting in small data files and delete files for the primary key update operation. For workloads or tables with frequent primary key updates, it is better to generate trail files with uncompressed update records. In addition, we should set gg.validate.keyupdate=true for the trail generated from the Oracle source.
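The gg.validate.keyupdate parameter goes in the properties file; a minimal fragment (everything else in the file omitted):

```properties
# Validate key update records in the trail generated from the Oracle
# source, so primary key updates with missing column values are detected.
gg.validate.keyupdate=true
```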

The Iceberg replication configuration is stored in the Replicat properties file, where we can configure any of the supported catalogs:
  • Nessie Catalog
  • AWS Glue Catalog
  • Polaris Catalog
  • REST Catalog
  • JDBC Catalog
  • Hadoop Catalog
So that's it: if you didn't know about this Oracle GG for DAA capability, now you do, and you can start exploring this functionality.

And if you want to know more details, you can check it out here and here.
 
I hope this has helped you.

See you.
Mario
 

Wednesday, March 5, 2025

[ORACLE] Useful scripts for the day-to-day life of a DBA (Part 3) - System metrics

Hello everyone.
 
Hope you're doing well!

As I said here, I've created a repository on GitHub to share some scripts that I like to write and that help me in my day-to-day life as a DBA.
 

Today's script is showmetrics.sql and is in the ENVIRONMENT directory.     
 
This script checks all long-duration system metric values and their details. If the standard deviation of a metric is high, it marks the metric for verification.
 
It's the first version, so I'm sure the Advisor needs some 'tuning' 😆. The attempt here is to mark the metrics with a standard deviation above 50% for verification.
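To give you an idea of the logic (this is a simplified sketch, not the showmetrics.sql script itself), flagging metrics whose standard deviation exceeds 50% of the average can be written roughly like this against DBA_HIST_SYSMETRIC_SUMMARY:

```sql
-- Simplified sketch: flag system metrics whose average standard
-- deviation is above 50% of the average value across snapshots.
SELECT metric_name,
       metric_unit,
       ROUND(AVG(average), 2)            AS avg_value,
       ROUND(AVG(standard_deviation), 2) AS avg_stddev,
       CASE
         WHEN AVG(average) > 0
          AND AVG(standard_deviation) > AVG(average) * 0.5
         THEN 'CHECK'
         ELSE 'OK'
       END AS advice
FROM   dba_hist_sysmetric_summary
GROUP  BY metric_name, metric_unit
ORDER  BY metric_name;
```

Remember that querying DBA_HIST_* views requires a Diagnostics Pack license.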




Remember that these are scripts that you can use and change at your own risk. They help me a lot, but they may not be as functional for you.

And more importantly: my script, my rules for advice, ok? 😆

If you think you have a better metric, and want to suggest it, I'd be happy to listen and analyze it.

And always remember:
  1. Some scripts are a compilation of several other scripts I've created.
  2. Some scripts are based on and adapted from existing scripts. Those that I found reference to the author I obviously identify in the script.
  3. So, if there's a script that you're the author of and I've adapted to my reality and it's not mentioned, please let me know and I'll be happy to reference it.
  4. Because they're my scripts, they have my logic and help me in my day-to-day work, it doesn't mean that they're the best or that they're always the most refined and concerned with good practices. They can help you, but that's all.
  5.  Feel free to make changes, but please keep the reference to the authors.
  6.  If you find any mistakes, please let me know.
Feel free to download here.

Regards
Mario Barduchi

Monday, February 17, 2025

[ORACLE] Useful scripts for the day-to-day life of a DBA (Part 2) - Execution Plan

Hello all 
 
Hope you're doing well!

As I said here, I've created a repository on GitHub to share some scripts that I like to write and that help me in my day-to-day life as a DBA.
 
 

Today's script is showplan.sql and is in the SQLPLAN directory.     
 
The script helps monitor and check active execution plans, and tries to act as an advisor for problematic SQL plans. 
 
It's the first version, so I'm sure the Advisor needs some 'tuning' 😆. The attempt here is to flag potentially problematic plans, just a starting point for further analysis.
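As one starting point for that kind of analysis (a simplified sketch, not the showplan.sql script itself), you can list cursors that have accumulated more than one execution plan in the shared pool:

```sql
-- Simplified sketch: SQL statements with more than one execution plan
-- currently in the shared pool, ordered by plan count.
SELECT sql_id,
       COUNT(DISTINCT plan_hash_value)   AS plan_count,
       ROUND(SUM(elapsed_time) / 1e6, 2) AS elapsed_secs
FROM   v$sql
WHERE  plan_hash_value <> 0
GROUP  BY sql_id
HAVING COUNT(DISTINCT plan_hash_value) > 1
ORDER  BY plan_count DESC;
```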




You can use this script in combination with Carlos Sierra's coe_xfr_sql_profile.sql to define a better plan, for example.

Remember that these are scripts that you can use and change at your own risk. They help me a lot, but they may not be as functional for you.

My script, my rules for advice and tuning, ok?
 
 
 
And always remember:
  1. Some scripts are a compilation of several other scripts I've created.
  2. Some scripts are based on and adapted from existing scripts. Those that I found reference to the author I obviously identify in the script.
  3. So, if there's a script that you're the author of and I've adapted to my reality and it's not mentioned, please let me know and I'll be happy to reference it.
  4. Because they're my scripts, they have my logic and help me in my day-to-day work, it doesn't mean that they're the best or that they're always the most refined and concerned with good practices. They can help you, but that's all.
  5.  Feel free to make changes, but please keep the reference to the authors.
  6.  If you find any mistakes, please let me know.
Feel free to download and use.

Regards
Mario Barduchi

Tuesday, February 4, 2025

[ORACLE] Useful scripts for the day-to-day life of a DBA (Part 1) - RMAN & SGA

Hello all 
 
Hope you're doing well!

I've created a repository on GitHub and I'm going to start sharing some scripts that I like to write and that help me in my day-to-day life as a DBA.
 

But some considerations are important:
  1. Some scripts are a compilation of several other scripts I've created.
  2. Some scripts are based on and adapted from existing scripts. Those that I found reference to the author I obviously identify in the script.
  3. So, if there's a script that you're the author of and I've adapted to my reality and it's not mentioned, please let me know and I'll be happy to reference it.
  4. Because they're my scripts, they have my logic and help me in my day-to-day work, it doesn't mean that they're the best or that they're always the most refined and concerned with good practices. They can help you, but that's all.
  5.  Feel free to make changes, but please keep the reference to the authors.
  6.  If you find any mistakes, please let me know.
The first scripts are one for monitoring RMAN and one that combines the results of various advisors to suggest SGA tuning.
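To give a flavor of the RMAN monitoring side (a simplified sketch, not the repository script itself), recent backup jobs and their status can be listed with:

```sql
-- Simplified sketch: recent RMAN backup jobs with status and volume.
SELECT session_key,
       input_type,
       status,
       TO_CHAR(start_time, 'DD-MON-YYYY HH24:MI') AS started,
       TO_CHAR(end_time,   'DD-MON-YYYY HH24:MI') AS ended,
       input_bytes_display,
       output_bytes_display
FROM   v$rman_backup_job_details
ORDER  BY start_time DESC;
```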

Remember that these are scripts that you can use and change at your own risk. They help me a lot, but may not be as functional for you.
 
 
Feel free to download and use.

Regards
Mario Barduchi

Thursday, January 30, 2025

[ORACLE] Create OCI DB System using the console.

Hello everyone!

I hope you're doing well.

Today, I'm going to show you how to create a new DB system in OCI.

Let's get started.

 
1) Select Oracle Database -> Oracle Base Database Service in the OCI console and click on "Create DB system".

 
 
2) Enter the compartment and DB system names.

 
3) Click on "Change Shape" and select your preferred shape.
 
 
4) Configure the storage. To do this, click on "Change Storage".
 

Select the storage management software, and configure the storage performance and size.


5) Configure the Oracle Database software edition and the DB system node count.

 
We have four database software edition options: 
  • Standard Edition
  • Enterprise Edition
  • Enterprise Edition High Performance
  • Enterprise Edition Extreme Performance
 
6) Add the SSH key and select whether to use your own licenses or the license-included model.


 
7) Now, specify the VCN and subnet you want to use.
 
 
The private IP will be assigned automatically if you don't configure it.
 
 
8) Click "Next".
 
 


9) Now let's configure the database information.

 
Click "Change database image".


10) Select the database image.
 

 

11) Configure the SYS and TDE passwords, as well as the backup service.


Finally, click on “Create DB system”.

After 90 minutes (on average), you will have a DB system ready to use.

And that's it. I hope it helps you.
 
Regards
Mario Barduchi

Thursday, January 9, 2025

[ORACLE] Alter database open resetlogs failed after restore. And the problem was my Redo logs.

Hello all
 
How are you doing?
 
I was creating a new test environment today, and I tried to open a database that I had restored. 
 
Simple. Of course not.
 
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-00349: failure obtaining block size for '+REDO/FOLHAHOM/ONLINELOG/group_4a.rdo'
ORA-15001: diskgroup "REDO" does not exist or is not mounted
ORA-15001: diskgroup "REDO" does not exist or is not mounted
ORA-15001: diskgroup "REDO" does not exist or is not mounted
 
I checked the redo log files. They were wrong, and I have no idea why.

set pages 120 lines 10000;
SELECT 
    l.INST_ID, l.THREAD#, l.GROUP#, 
    substr(lf.MEMBER,1,75) as MEMBER, 
    l.SEQUENCE#, l.first_change#, 
    l.BYTES/(1024 * 1024) AS "Size MB", 
    l.MEMBERS, l.ARCHIVED, l.STATUS, 
    lf.IS_RECOVERY_DEST_FILE, l.CON_ID
FROM gv$log l
JOIN gv$logfile lf ON
    l.INST_ID = lf.INST_ID AND
    l.GROUP# = lf.GROUP#
ORDER BY 1,2,3;


Maybe I forgot to set log_file_name_convert. 
 
I don't know, and it doesn't matter. I didn't want to do the whole process again.

I can't drop the log file groups because their status was "CLEARING_CURRENT", of course.


Kids, don't try this at home 😜

I'm just kidding, but this solution is a workaround and should only be used in an emergency.

And remember: This is a test environment. Production can and probably will be different, so use at your own risk.

Let's go to the party:
 
-- Thread 1
SQL> ALTER DATABASE RENAME FILE '+REDO/FOLHAHOM/ONLINELOG/group_6a.rdo' TO '+REDO1/FOLHAHOM/ONLINELOG/group_6a.rdo';
Database altered.

-- Thread 2
SQL> ALTER DATABASE RENAME FILE '+REDO/FOLHAHOM/ONLINELOG/group_11a.rdo' TO '+REDO1/FOLHAHOM/ONLINELOG/group_11a.rdo';
Database altered.

set pages 120 lines 10000;
SELECT 
    l.INST_ID, l.THREAD#, l.GROUP#, 
    substr(lf.MEMBER,1,75) as MEMBER, 
    l.SEQUENCE#, l.first_change#, 
    l.BYTES/(1024 * 1024) AS "Size MB", 
    l.MEMBERS, l.ARCHIVED, l.STATUS, 
    lf.IS_RECOVERY_DEST_FILE, l.CON_ID
FROM gv$log l
JOIN gv$logfile lf ON
    l.INST_ID = lf.INST_ID AND
    l.GROUP# = lf.GROUP#
ORDER BY 1,2,3;


 
Can I rename all the members of the log file?

Yes, but I preferred to recreate all the log file groups.
 
 
 
SQL>  alter database add logfile thread 1 ('+REDO1','+REDO2') size 512M;
Database altered.

SQL> alter database add logfile thread 2 ('+REDO1','+REDO2') size 512M;
Database altered.
 
SQL>  alter database add logfile thread 1 ('+REDO1','+REDO2') size 512M;
Database altered.

SQL>  alter database add logfile thread 1 ('+REDO1','+REDO2') size 512M;
Database altered.

SQL>  alter database add logfile thread 1 ('+REDO1','+REDO2') size 512M;
Database altered.

SQL>  alter database add logfile thread 1 ('+REDO1','+REDO2') size 512M;
Database altered.

SQL> alter database add logfile thread 2 ('+REDO1','+REDO2') size 512M;
Database altered.

SQL> alter database add logfile thread 2 ('+REDO1','+REDO2') size 512M;
Database altered.

SQL> alter database add logfile thread 2 ('+REDO1','+REDO2') size 512M;
Database altered.

SQL> alter database add logfile thread 2 ('+REDO1','+REDO2') size 512M;
Database altered.

 
SQL> ALTER DATABASE DROP LOGFILE GROUP 4;
Database altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 5;
Database altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 7;
Database altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 8;
Database altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 9;
Database altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 10;
Database altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 12;
Database altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 13;
Database altered.

SQL> alter system checkpoint;
System altered.

I checked that the old groups were inactive and dropped the last two groups.

set pages 120 lines 10000;
SELECT 
    l.INST_ID, l.THREAD#, l.GROUP#, 
    substr(lf.MEMBER,1,75) as MEMBER, 
    l.SEQUENCE#, l.first_change#, 
    l.BYTES/(1024 * 1024) AS "Size MB", 
    l.MEMBERS, l.ARCHIVED, l.STATUS, 
    lf.IS_RECOVERY_DEST_FILE, l.CON_ID
FROM gv$log l
JOIN gv$logfile lf ON
    l.INST_ID = lf.INST_ID AND
    l.GROUP# = lf.GROUP#
ORDER BY 1,2,3;


SQL> ALTER DATABASE DROP LOGFILE GROUP 6;
Database altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 11;
Database altered.
 
SQL> ALTER DATABASE ENABLE thread 2;
 
 
And that's it. I hope I've helped you.

Regards
Mario


 
 

