Friday, September 29, 2023

[ORACLE] How do I install the Oracle Autonomous Health Framework (AHF)?

Hello everyone!!!

I hope you're doing well!

Today, I'll show you how to install the Oracle Autonomous Health Framework (AHF). 

I don't have Oracle RAC installed here on my laptop, but I'll cover that another time. Today I'll show you a single-node environment.

I decided to write and learn a bit more about AHF after having the privilege of attending Sandesh Rao's live session at Oracle Cloudworld 2023 in Las Vegas.

First, what is Oracle Autonomous Health Framework?

In Oracle's words, it's a "Machine learning-based solution for maintaining database availability and performance".

In Mario's words, it's a "very cool tool that collects and groups your system health information and logs and uses Machine Learning to provide you insights and predictions so that you can act preventively, avoiding problems in your environment".



For more information, see the Oracle AHF documentation.

So, let's get started!!!

But always remember:

  1. This step-by-step worked for me, but it may not work for you.
  2. It's a basic and limited environment. Real life will be different, for sure.
  3. This post is for study and testing purposes only, and does not address performance or security best practices.

1) Download the AHF zip file

In my post, I'll download version "AHF 23.7.0 for Linux". 

Calm down, calm down, buddy!!! I know we already have version 23.8 at the time of writing this post (Sep/23), but I'll show you how to upgrade the version in the next post.




2) Unzip the zip file

Important: I'll execute the whole installation as the root user.

[root@Mariao ~]# cd /u01/AHF
[root@Mariao ~]# unzip AHF-LINUX-v23.7.0.zip


3) Create the data directory.

[root@Mariao ~]# mkdir -p /AHF/Data
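The installer will check free space in this directory, so it's worth checking it yourself first. A minimal sketch of that pre-install check, using the 5 GB minimum quoted by the installer below (I'm using a path under /tmp here for illustration, so the commands run without root; the post itself uses /AHF/Data):

```shell
# Create the data directory and check free space before installing.
# AHF requires at least 5 GB (10 GB recommended) in the data directory.
# /tmp/AHF/Data is used here for illustration; the post uses /AHF/Data.
AHF_DATA=/tmp/AHF/Data
mkdir -p "$AHF_DATA"
FREE_MB=$(df -Pm "$AHF_DATA" | awk 'NR==2 {print $4}')
echo "Free space in $AHF_DATA: ${FREE_MB} MB"
[ "$FREE_MB" -ge 5120 ] || echo "WARNING: less than 5 GB free in $AHF_DATA"
```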


4) Run the installer.

My inputs and interactions are shown inline below.

[root@Mariao ~]# cd /u01/AHF/

[root@Mariao AHF]# ./ahf_setup 

AHF Installer for Platform Linux Architecture x86_64

AHF Installation Log : /tmp/ahf_install_237000_306422_2023_09_29-09_29_16.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 23.7.0 Build Date: 202307281326

Default AHF Location : /opt/oracle.ahf

Do you want to install AHF at [/opt/oracle.ahf] ? [Y]|N : N

Please Enter new AHF Location : /AHF/Data

AHF Location : /AHF/Data/oracle.ahf

AHF Data Directory stores diagnostic collections and metadata.
AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.

Choose Data Directory from below options : 

1. /AHF/Data/oracle.ahf [Free Space : 12525 MB]
2. Enter a different Location

Choose Option [1 - 2] : 1

AHF Data Directory : /AHF/Data/oracle.ahf/data

Do you want to add AHF Notification Email IDs ? [Y]|N : N

Extracting AHF to /AHF/Data/oracle.ahf

Configuring TFA Services

Discovering Nodes and Oracle Resources

Not generating certificates as GI discovered

Starting TFA Services
Created symlink /etc/systemd/system/multi-user.target.wants/oracle-tfa.service -> /etc/systemd/system/oracle-tfa.service.
Created symlink /etc/systemd/system/graphical.target.wants/oracle-tfa.service -> /etc/systemd/system/oracle-tfa.service.

.----------------------------------------------------------------------------.
| Host   | Status of TFA | PID    | Port | Version    | Build ID             |
+--------+---------------+--------+------+------------+----------------------+
| mariao | RUNNING       | 308232 | 5000 | 23.7.0.0.0 | 23700020230728132609 |
'--------+---------------+--------+------+------------+----------------------'

Running TFA Inventory...

Adding default users to TFA Access list...

.---------------------------------------------------------.
|               Summary of AHF Configuration              |
+-----------------+---------------------------------------+
| Parameter       | Value                                 |
+-----------------+---------------------------------------+
| AHF Location    | /AHF/Data/oracle.ahf                  |
| TFA Location    | /AHF/Data/oracle.ahf/tfa              |
| Orachk Location | /AHF/Data/oracle.ahf/orachk           |
| Data Directory  | /AHF/Data/oracle.ahf/data             |
| Repository      | /AHF/Data/oracle.ahf/data/repository  |
| Diag Directory  | /AHF/Data/oracle.ahf/data/mariao/diag |
'-----------------+---------------------------------------'

Starting ORAchk Scheduler from AHF

Setting up AHF CLI and SDK

AHF binaries are available in /AHF/Data/oracle.ahf/bin

AHF is successfully Installed

Do you want AHF to store your My Oracle Support Credentials for Automatic Upload ? Y|[N] : N

Moving /tmp/ahf_install_237000_306422_2023_09_29-09_29_16.log to /AHF/Data/oracle.ahf/data/mariao/diag/ahf/





5) And finally, check the installation.

[root@Mariao ~]# tfactl print status


[root@Mariao ~]#  tfactl toolstatus


[root@Mariao ~]# ahfctl statusahf
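If you want to script these checks, a hedged sketch like this runs each status command only when its binary is actually on the PATH (the command names are the three checks above; everything else is illustrative):

```shell
# Run each AHF status command, skipping any whose binary is not installed.
for cmd in "tfactl print status" "tfactl toolstatus" "ahfctl statusahf"; do
  bin=${cmd%% *}                      # first word = the binary name
  if command -v "$bin" >/dev/null 2>&1; then
    echo "== $cmd =="
    $cmd
  else
    echo "skipped: $bin not found on PATH"
  fi
done
```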
 


It's really easy!

In the next post, I'll show you how to upgrade the AHF version.

So, I hope this has helped.

Regards
Mario

Wednesday, September 27, 2023

[ORACLE] Enabling AWR Snapshots and running AWR reports from Oracle Multitenant Architecture - CDBs and PDBs

Hello everyone!!!

How are you doing?

Before the multitenant architecture, we ran the AWR snapshots and reports in the database and that was it. All the database information was in the AWR report and done.

Today, when DBAs use the multitenant architecture, most of them run the AWR snapshots and reports at the CDB level, and that's OK.

But sometimes we need more fine-grained information, and for that we can run the AWR snapshots and reports at the PDB level.

To do this, I'm going to set up the CDB with the steps below:

But always remember:

  1. This step-by-step worked for me, but it may not work for you.
  2. It's a basic and limited environment. Real life will be different, for sure.
  3. This post is for study and testing purposes only, and does not address performance or security best practices.

So let's go to the party!
 
1) List pdbs

SQL> @showpdb

2) Enable awr_pdb_autoflush on cdb.

SQL> SHOW PARAMETER awr_pdb_autoflush_enabled;
SQL> ALTER SYSTEM SET awr_pdb_autoflush_enabled=TRUE SCOPE=both SID='*';
SQL> ALTER SYSTEM SET awr_snapshot_time_offset=1000000 SCOPE=both SID='*';


Important: 
As the Oracle manual shows us:
"When you change the value of AWR_PDB_AUTOFLUSH_ENABLED in the CDB root, the new value takes effect in all the PDBs in the CDB.

Therefore, if you change the value of AWR_PDB_AUTOFLUSH_ENABLED in the CDB root to true, the value of AWR_PDB_AUTOFLUSH_ENABLED is also changed to true in all of the PDBs, so that automatic AWR snapshots are enabled for all the PDBs.

You can also change the value of AWR_PDB_AUTOFLUSH_ENABLED in any of the individual PDBs in a CDB, and the value that is set for each individual PDB will be honored. This enables you to enable or disable automatic AWR snapshots for individual PDBs."


For more information, click here

3) I suggest you change the default settings in the PDB:

SQL> ALTER SESSION SET CONTAINER = IRONMANPDB1;

SQL> SELECT DBID, SUBSTR(SNAP_INTERVAL,1,20) AS SNAP_INTERVAL,
            SUBSTR(RETENTION,1,20) AS RETENTION, TOPNSQL, CON_ID
       FROM awr_pdb_wr_control;


SQL> 
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    retention => 14400,
    interval  => 30,
    topnsql   => 50,
    dbid      => 132885978);
END;
/

retention => Retention period, in minutes (the 14400 above is 10 days; 7 days would be 10080). The specified value must be in the range of MIN_RETENTION (1 day) to MAX_RETENTION (100 years).

Interval => Interval between each snapshot, in minutes. The specified value must be in the range MIN_INTERVAL (10 minutes) to MAX_INTERVAL (1 year).

topnsql => The number of Top SQL to flush for each SQL criteria (Elapsed Time, CPU Time, Parse Calls, Shareable Memory, Version Count). The setting will have a minimum value of 30 and a maximum value of 50,000.

dbid => Database identifier in AWR
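Since both retention and interval are given to MODIFY_SNAPSHOT_SETTINGS in minutes, a quick sanity check of the arithmetic behind the values used above:

```shell
# MODIFY_SNAPSHOT_SETTINGS takes retention and interval in minutes.
echo "$((10 * 24 * 60)) minutes = 10 days"    # 14400, the value used above
echo "$((7 * 24 * 60)) minutes = 7 days"      # 10080
echo "$((14400 / 30)) snapshots kept with a 30-minute interval"   # 480
```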

For more information, click here

SQL> SELECT DBID, SUBSTR(SNAP_INTERVAL,1,20) AS SNAP_INTERVAL,
            SUBSTR(RETENTION,1,20) AS RETENTION, TOPNSQL, CON_ID
       FROM awr_pdb_wr_control;


In my example above, I set the interval to 30 minutes; the minimum allowed value is 10 minutes.

4) And now, I'll just check whether the AWR snapshots are being created.

SQL> SELECT con_id, instance_number, snap_id,
            SUBSTR(begin_interval_time,1,30) AS begin_interval_time,
            SUBSTR(end_interval_time,1,30) AS end_interval_time
       FROM cdb_hist_snapshot
      ORDER BY 1,2,3;


5) And finally, to create the CDB AWR report or the PDB AWR report, we use the same script.

-- CDB
SQL> @?/rdbms/admin/awrrpt    



-- PDB
SQL> ALTER SESSION SET CONTAINER = IRONMANPDB1;
SQL> @?/rdbms/admin/awrrpt



That's it folks. As simple as that!

I hope this helps you!

Regards
Mario



Thursday, August 31, 2023

[ORACLE] How to configure disks for ASM using UDEV file?

Hello everyone

Do you know how to configure disks for Oracle ASM using UDEV file?

It's simple, and I'll show you!

But always remember:

  1. This step-by-step worked for me, but it may not work for you.
  2. It's a basic and limited environment. Real life will be different, for sure.
  3. This post is for study and testing purposes only, and does not address performance or security best practices.

So, let's get started!!!

1) I'll identify the disks using the root user.

    fdisk -l |grep /dev



    lsblk



2) I'll identify the "ID_SERIAL" for all disks.

    Example:
    udevadm info --query=all --name=/dev/sdb | egrep "ID_SERIAL"
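To collect the serials for several disks in one pass, a small sketch (the device names are this post's examples; adjust them for your host, and the loop simply skips devices that don't exist):

```shell
# Print ID_SERIAL for each candidate ASM disk, skipping devices
# that are not present on this host.
for d in /dev/sdb /dev/sdc /dev/sdd; do
  [ -b "$d" ] || { echo "$d: not present, skipping"; continue; }
  udevadm info --query=property --name="$d" | grep '^ID_SERIAL=' \
    | sed "s|^|$d: |"
done
```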


3) I'll create the file "99-oracle-asmdevices.rules" in /etc/udev/rules.d.

    vim /etc/udev/rules.d/99-oracle-asmdevices.rules

    Insert a line for each disk:

    KERNEL=="sd*", SUBSYSTEM=="block", ENV{ID_SERIAL}=="VBOX_HARDDISK_VB843f4b1a-07e6004e", SYMLINK+="data_0001", OWNER="oracle", GROUP="oinstall", MODE="0660"

    Disks sdb, sdc, sdd = data_000?
    Disks sde, sdf, sdg = fra_000?
    Disks sdh, sdi      = redo_000?
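Writing those lines by hand is error-prone, so here is a hedged sketch that generates one rules line per serial/name pair (`generate_rule` is a hypothetical helper of mine, and the serial below is the example from this post):

```shell
# Build a udev rules line for one ASM disk from its ID_SERIAL and
# the desired symlink name.
generate_rule() {
  serial=$1; name=$2
  printf 'KERNEL=="sd*", SUBSYSTEM=="block", ENV{ID_SERIAL}=="%s", SYMLINK+="%s", OWNER="oracle", GROUP="oinstall", MODE="0660"\n' \
    "$serial" "$name"
}
generate_rule "VBOX_HARDDISK_VB843f4b1a-07e6004e" "data_0001"
```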



4) And now, I need to reload the UDEV rules.

    udevadm control --reload-rules
    udevadm trigger --type=devices --action=change

    [root@csct-sp-ora01 u01]# ls -lrht /dev/data*
    [root@csct-sp-ora01 u01]# ls -lrht /dev/fra*
    [root@csct-sp-ora01 u01]# ls -lrht /dev/redo*


Ok, your disks are ready to be used in ASM!





Simple and easy: the UDEV rules file sets the owners and permissions of your disks in a very clear way.

That's it folks!

I hope this post helps you!

Regards
Mario

Tuesday, August 22, 2023

[ORACLE] How can I create and check JSON schema in Oracle 23c? - Part 1

Hello everyone

I hope you're doing well!

As we know, JSON is very flexible and sometimes we need to validate the JSON structure.
Oracle 23c has an easy way to validate the JSON schema.

I'm going to access my database in Docker:
docker exec -it Orcl23c /bin/bash 

I'll create a new table (I'll use Visual Studio for better visualization):

create table mylegos (
  id        number,
  json_data json validate '{
    "type"      : "object",
    "properties": {
      "code"      : { "type": "string", "minLength": 1, "maxLength": 10 },
      "collection": { "type": "string", "minLength": 1, "maxLength": 15 },
      "model"     : { "type": "string", "minLength": 1, "maxLength": 20 },
      "peaces"    : { "type": "number", "minimum": 0, "maximum": 4000 }
    },
    "required"  : ["code", "collection", "model", "peaces"]
  }',
  constraint t1_pk primary key (id)
);


In my table, all JSON attributes are mandatory.

Now, I'll check the JSON schema:

set pages 120 lines 1000
set long 1000000
column table_name format a15
column column_name format a15
column constraint_name format a20
column json_schema format a60

select table_name, 
   column_name,
   constraint_name, 
   json_schema
from user_json_schema_columns;


And finally, I'll put some data into the table and simulate an error.

insert into mylegos (id, json_data) values 
   (1, json('
   {
"code":"42125",
"collection":"Technic",
"model":"Ferrari 488 GTE AF Corse #51",
"peaces": 1677
   }
'
)
);



Why did this happen?

I'm getting an error because the model value has more than 20 characters, which violates the "maxLength": 20 constraint on the model attribute.

When I use a value that respects the rule:

insert into mylegos (id, json_data) values 
   (1, json('
   {
 "code":"42125",
 "collection":"Technic",
 "model":"Ferrari 488 GTE AF51",
 "peaces": 1677
   }
'
)
);
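You can check the lengths outside the database to see why the first insert failed and this one succeeds (the two model values are the ones from the inserts above):

```shell
# The schema allows at most 20 characters for "model".
bad='Ferrari 488 GTE AF Corse #51'
good='Ferrari 488 GTE AF51'
echo "'$bad' has ${#bad} characters (rejected, > 20)"
echo "'$good' has ${#good} characters (accepted, <= 20)"
```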


select json_serialize(ml.json_data pretty) from mylegos ml;


And that's it! As simple as it sounds!

I hope this post helps you!!!

Regards
Mario

Wednesday, August 16, 2023

[ORACLE] Oracle JSON-Relational Duality Views: Restrictions to use.

Hello, everyone.

How are you doing?

As I said in the last post, Oracle JSON-Relational Duality Views bring the best of both worlds - JSON and Relational.

If you want to see more information, click here.

But everything in the Database World has restrictions. And Duality Views are not different.

If you want to check out all the restrictions, see the manual.


According to the manual, Duality Views have these restrictions:

These column data types are not supported:
TIMESTAMP WITH LOCAL TIME ZONE
BINARY_DOUBLE
BFILE
ROWID / UROWID
XMLTYPE
ANYTYPE
ANYDATA
ANYDATASET
HTTPURIType / XDBURIType / DBURIType
SDO_POINT_TYPE
SDO_ELEM_INFO_ARRAY
SDO_ORDINATE_ARRAY
SDO_GEOMETRY
SDO_TOPO_GEOMETRY
Object types (Abstract Data Types)

These objects cannot be used to create Duality Views:
System-partitioned tables
Sharded tables
Views
Materialized views
Editioning views
External tables
Hybrid partitioned tables
Global temporary tables
Private temporary tables
Remote tables (tables over database links)

Other restrictions:
Duality View updates using database links are not supported.
MERGE SQL statements are not supported.
It is not possible to create a JSON functional index on a data column.
It is not possible to create a JSON multivalue index on a data column.
JSON search indexes on underlying table columns are not supported.
Fine-grained auditing policies are not supported.

We still have 15 more restrictions, according to the manual.

That's it! I hope this post helps you!

Regards
Mario

Monday, August 14, 2023

[ORACLE] Oracle JSON-Relational Duality Views: Bringing together the best of the relational and JSON worlds!

Hello everyone!!!

I hope you're doing well!

Last weekend I was proud to organize the 12th GUOB TECH DAY 2023. 

Moreover, I had the honor to share the stage with big names from Brazil and other countries. 

I presented a session talking about JSON-relational Duality Views in Oracle 23c.

If you were not able to attend the event, don't worry. I have posted the slides from my presentation here.

As soon as possible, I'll translate the slides from Portuguese to English, I promise.

To summarize, in a few topics, the benefits JSON-Relational Duality Views hope to deliver:

  1. Combines the benefits of the relational model (storage, consistency, efficiency) with the flexibility of JSON documents. 
  2. Document-centric. APPs can use Oracle Database API for MongoDB, Oracle REST Data Services (ORDS), JSON document functions or SQL; 
  3. Eliminate the need to use ORM (Object Relational Mapping). Native JSON support;
  4. Can "replace" groups of tables, allowing easy access and updating of data via APPs;
  5. Read and writes can use HTTP operations such as GET, PUT and POST;
  6. Apps run optimized in the database. They allow you to get and store all the necessary rows of an app-tier object in a single access, unlike ORM code that needs to make several calls to the database;
  7. Advanced statistics and analytics. Advanced security features including auditing and fine-grained access control;
  8. ACID (Atomicity, Consistency, Isolation, Durability) transactions between multiple documents and tables, without worrying about the data model or transactions.


If you have any question, let me know.

Regards
Mario

Wednesday, July 19, 2023

[Oracle RAC & VMware] Creating disks in VMware for Oracle RAC environments.

Hello all

How are you doing?

Some time ago, I helped my teammates Patricia Flores Solano (LinkedIn) and Thiago Ferreira (LinkedIn) create a document about a POC with VMware and Oracle RAC that they were doing for a project.

The Oracle RAC installation follows the same steps as a Bare Metal environment, but we documented the process of creating the VMware disks, which has some tricks and tips that can help to get better performance.

But always remember:
  1. This step-by-step worked for us, but it might not work for you.
  2. This post is designed to help you. Never change any production or test environment without first testing and making sure the solution is suitable for your environment. 
  3. I'm not responsible for that, ok?
Hope it helps. Let's get started!

1) We need to add the following to the sysctl.conf file.
       vm.swappiness = 1
       vm.dirty_background_ratio = 3
       vm.dirty_ratio = 80
       vm.dirty_expire_centisecs = 500 
       vm.dirty_writeback_centisecs = 100
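Those kernel parameters can be written and verified in one shot. A sketch, writing to a file under /tmp so it can run without root; on a real host you would append to /etc/sysctl.conf as root and apply with `sysctl -p`:

```shell
# Write the recommended VM parameters to a sysctl-style file and
# verify that all five entries landed.
CONF=/tmp/sysctl-oracle-rac.conf
cat > "$CONF" <<'EOF'
vm.swappiness = 1
vm.dirty_background_ratio = 3
vm.dirty_ratio = 80
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
EOF
grep -c '^vm\.' "$CONF"   # prints 5
```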

2) Preparing VMware storage for Oracle RAC.

For this, we will shut down the guest VM, then go to Edit VM > VM Options > Advanced > Edit Configuration > Add New Configuration Params.

We'll set the guest disk parameter disk.EnableUUID = TRUE.



3) Add PVSCSI

The "VMware Paravirtual SCSI adapter" is a high-performance storage controller which provides greater throughput and lower CPU use. VMware Paravirtual SCSI controllers are best suited for environments running I/O-intensive applications.

You can see more information about this here.

4) Add Disk for CRS Oracle RAC


5) Add Disk for DATA Oracle RAC in vCenter (VMs  >  linux01 > Click > Edit Settings)


6) ADD NEW DEVICE > Hard Disk

7) Deploy the new Hard disk and edit:
Location: HCI_Shared_Data
New Hard disk: 1700 GB
Disk Provisioning: Thick Provision Eager Zeroed
Sharing: Multi-writer
Disk Mode: Independent - Persistent
Virtual Device Node: SCSI controller 1



8) Now, I'll change the VM Linux 2 (VMs  >  linux02 > Click > Edit Settings)



9) ADD NEW DEVICE > Existing Hard Disk


10) Now, we'll check the disk.


[root@linux01 ~]# lsblk

NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdy          65:128  0  1.7T  0 disk
sdf           8:80   0  100G  0 disk
└─sdf1        8:81   0  100G  0 part
sdo           8:224  0  1.7T  0 disk
sdw          65:96   0  1.7T  0 disk
sdd           8:48   0  100G  0 disk
└─sdd1        8:49   0  100G  0 part
sdm           8:192  0  1.7T  0 disk
sdu          65:64   0  1.7T  0 disk
sdb           8:16   0  220G  0 disk
└─sdb1        8:17   0  220G  0 part /u01
sdk           8:160  0  1.7T  0 disk
sds          65:32   0  1.7T  0 disk
sdi           8:128  0  1.7T  0 disk
sdq          65:0    0  1.7T  0 disk
sr0          11:0    1 1024M  0 rom
sdg           8:96   0  100G  0 disk
└─sdg1        8:97   0  100G  0 part
sdx          65:112  0  1.7T  0 disk
sde           8:64   0  100G  0 disk
└─sde1        8:65   0  100G  0 part
sdn           8:208  0  1.7T  0 disk
sdv          65:80   0  1.7T  0 disk
sdc           8:32   0  100G  0 disk
└─sdc1        8:33   0  100G  0 part
sdl           8:176  0  1.7T  0 disk
sdt          65:48   0  1.7T  0 disk
sda           8:0    0  110G  0 disk
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0  109G  0 part
  ├─ol-root 252:0    0   50G  0 lvm  /
  ├─ol-swap 252:1    0    4G  0 lvm  [SWAP]
  └─ol-home 252:2    0   55G  0 lvm  /home
sdj           8:144  0  1.7T  0 disk
sdr          65:16   0  1.7T  0 disk
sdh           8:112  0  1.7T  0 disk
└─sdh1        8:113  0  1.7T  0 part
sdp           8:240  0  1.7T  0 disk


You can create as many VMDKs as needed for DATA, FRA and RECO.

After that, just proceed with the installation as usual. 

That's it, guys!!! It's very easy, isn't it?

We hope that helps!


Regards
Mario
