
Configure data guard environment to reduce overheads of fast incremental backups on the primary database

9.2. Configure the data guard environment to reduce overheads of fast incremental backups on the primary database

To speed up incremental backups of the standby database, block change tracking should be enabled. A detailed explanation of the block change tracking feature is provided in Chapter 3.

To reduce the backup overhead on the primary database, the standby database can be used to back up the database.

Switch to the standby database and enable block change tracking as follows:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/home/oracle/block_chg_tracking.trc';
Database altered.

SQL> COL FILENAME FORMAT a50
SQL> SELECT * FROM v$block_change_tracking;
STATUS FILENAME BYTES
———- ————————————————– ———-
ENABLED /home/oracle/block_chg_tracking.trc 11599872

SQL>

During the exam you might be asked to recover the standby database using an incremental backup of the primary database. To practice this case, disable shipment of archived log files to the standby database, perform some log switches and delete archived log files from the primary database.

Then take an incremental backup from the primary database and apply it to the standby database as shown in the following steps.

Reference:
High Availability -> Data Guard Concepts and Administration-> 11 Using RMAN to Back Up and Restore Files -> 11.10 Using RMAN Incremental Backups to Roll Forward a Physical Standby Database

First of all, move to the primary database, disable archive log shipment, and create an archive log gap by switching some log files and deleting archived log files from the primary database as follows:

SQL> SHOW PARAMETER log_archive_dest_2
NAME TYPE VALUE
———————————— ———– ——————————
log_archive_dest_2 string service=STBDB ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STBDB
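
The deferral step itself is not shown above; one way to stop shipping for this exercise, consistent with how destination 2 is re-enabled later, is to clear the destination and force a couple of log switches (a minimal sketch; it assumes log_archive_dest_2 is the only destination pointing to the standby):

SQL> ALTER SYSTEM SET log_archive_dest_2='';
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM SWITCH LOGFILE;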

Check the list of archived log files on the primary database:
SQL> ALTER SESSION SET nls_date_format='HH24:MI:SS';
Session altered.

SQL> SELECT sequence#, first_time, next_time FROM v$archived_log ORDER BY sequence#;
SEQUENCE# FIRST_TI NEXT_TIM
———- ——– ——–
65 13:00:55 13:05:00
66 13:05:00 13:10:45
66 13:05:00 13:10:45
67 13:10:45 13:10:46
68 13:10:46 13:12:03
SQL>

Now run the same query on the standby database:
SQL> ALTER SESSION SET nls_date_format='HH24:MI:SS';
Session altered.

SQL> SELECT sequence#, first_time, next_time FROM v$archived_log ORDER BY sequence#;
SEQUENCE# FIRST_TI NEXT_TIM
———- ——– ——–
63 12:57:32 12:59:27
64 12:59:27 13:00:55
65 13:00:55 13:05:00
66 13:05:00 13:10:45

As you can see, two archived log files are missing on the standby. Now delete all archived log files from the primary database, re-enable archive log shipping and check the alert log file of the standby database:

[oracle@ocmnode1 rman_backup]$ cd /home/oracle/arch/
[oracle@ocmnode1 arch]$ rm -rf *.dbf
SQL> ALTER SYSTEM SET log_archive_dest_2='service=STBDB ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STBDB';
System altered.
SQL>

[oracle@ocmnode2 trace]$ tail -f alert_STBDB.log
Fetching gap sequence in thread 1, gap sequence 67-68
Tue Jun 21 13:17:56 2016
FAL[client]: Failed to request gap sequence
GAP – thread 1 sequence 67-68
DBID 654484567 branch 914436829
FAL[client]: All defined FAL servers have been attempted.

As you can see, the missing archived log files are no longer available on the primary database.
We either need to recreate the standby database, or apply an incremental backup of the primary database to the standby database.

To take the incremental backup, we need the current SCN of the standby database. Get the SCN value from both databases:

Primary database
SQL> SELECT TO_CHAR(current_scn) FROM v$database;
TO_CHAR(CURRENT_SCN)
———————–
924379
SQL>

Standby database
SQL> SELECT TO_CHAR(current_scn) FROM v$database;
TO_CHAR(CURRENT_SCN)
—————————————-
923999
SQL>

Switch to primary database now and get an incremental backup starting from the SCN value of the standby database:
RMAN> BACKUP INCREMENTAL FROM SCN 923999 DATABASE FORMAT '/home/oracle/rman_backup/incr_backup_gap_%U';

Take a backup of the standby control file and copy it, along with the backup files, to the standby host:

SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/oracle/rman_backup/stb_control.ctl';
Database altered.

SQL>

[oracle@ocmnode1 rman_backup]$ scp /home/oracle/rman_backup/stb_control.ctl ocmnode2:/home/oracle/rman_backup/
[oracle@ocmnode1 rman_backup]$ scp /home/oracle/rman_backup/incr_backup_gap_1* ocmnode2:/home/oracle/rman_backup/
[oracle@ocmnode1 rman_backup]$

Switch to standby, restore controlfile, mount the database and recover it:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP NOMOUNT

[oracle@ocmnode2 admin]$ rman target /
RMAN> RESTORE CONTROLFILE FROM '/home/oracle/rman_backup/stb_control.ctl';

[oracle@ocmnode2 admin]$ sqlplus / as sysdba
SQL> STARTUP FORCE NOMOUNT

SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
Database altered.
SQL>
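
Depending on where the backup pieces were copied on the standby host, the restored control file may not know about them yet; if so, they can be cataloged before running the recovery (a sketch, assuming the pieces were copied to /home/oracle/rman_backup as shown above):

RMAN> CATALOG START WITH '/home/oracle/rman_backup/' NOPROMPT;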

RMAN> RECOVER DATABASE NOREDO;

After the recovery completed successfully, start the apply process:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
Database altered.
SQL>

Switch to primary database and switch a few log files:
SQL> ALTER SYSTEM SWITCH LOGFILE;
System altered.
SQL> /
System altered.
SQL>

Now check the alert log file of the standby database to see that the new archived log files are shipped and applied successfully:

[oracle@ocmnode2 trace]$ tail -f alert_STBDB.log
Check alert.log file at standby
Tue Jun 21 13:25:35 2016
Archived Log entry 2 added for thread 1 sequence 71 rlc 914436829 ID 0x27028757 dest 2:
RFS[2]: Opened log for thread 1 sequence 72 dbid 654484567 branch 914436829
Tue Jun 21 13:25:37 2016
Media Recovery Log /home/oracle/arch1_71_914436829.dbf
Media Recovery Waiting for thread 1 sequence 72 (in transit)
Archived Log entry 3 added for thread 1 sequence 72 rlc 914436829 ID 0x27028757 dest 2:
RFS[2]: Opened log for thread 1 sequence 73 dbid 654484567 branch 914436829
Tue Jun 21 13:25:47 2016
Media Recovery Log /home/oracle/arch1_72_914436829.dbf
Media Recovery Waiting for thread 1 sequence 73 (in transit)

Practice:
► Stop the archive log apply process (see the sketch below) and remove the archived log files from the standby database that were not applied. Create a table on the primary database, take an incremental backup and recover it on the standby database. Query the table on the standby database.

Practice Time
► 15 min.
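
To stop the apply process for the practice above, managed recovery can be cancelled on the standby (a minimal sketch):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;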
If you want to get more information on creating, configuring and managing Data Guard get my book from the following link:
https://www.amazon.com/Oracle-Certified-Master-Exam-Guide/dp/1536800791/ 

Creating and managing ACFS File System

Reference:
Documentation: Automatic Storage Management Administrator’s Guide -> 5 Introduction to Oracle
ACFS

ACFS is an ASM-based cluster file system that can store all types of data and is installed with Grid Infrastructure. There are different ways to create an ACFS; the following shows how to create an ACFS file system using the ASMCA utility.

ACFS is built on ADVM volumes that are created in a disk group. So, first of all, an ASM diskgroup should be created, then a volume is created in that diskgroup, and the resulting volume device is used to create the ACFS. Before creating the new diskgroup, make sure you have deleted the diskgroups created in the previous section.

Run asmca command to open ASM Configuration Assistant page as shown in figure 7.12.


FIGURE 7-12. ASM Configuration Assistant

Click the Create button to create a new diskgroup named DG_ACFS with external redundancy and provide two disks as shown in Figure 7.13.

FIGURE 7-13. Creating DG_ACFS diskgroup

Now switch to Volumes tab and create a new volume based on the newly created disk group as shown in Figure 7.14.

FIGURE 7-14. Creating a Volume

Next, switch to the “ASM Cluster File Systems” tab and click Create button to create an ASM Cluster file system as in Figure 7.15.

FIGURE 7-15. Create ASM Cluster File System

Click on "Show mount command" to see the command you need to run to mount the ACFS on the other nodes, as in Figure 7.16.

FIGURE 7-16. Mount All ACFS Command

In the exam, you might be asked to create an ACFS using the command line, as the GUI might not be available. Check the following documentation and the steps below to see how to create an ACFS from the command line:

Reference:
Documentation: Automatic Storage Management Administrator’s Guide -> Basic Steps to Manage
Oracle ACFS Systems

First of all, create an ASM diskgroup using the last two disks. Note that to add an ADVM volume, the diskgroup needs the COMPATIBLE.ASM and COMPATIBLE.ADVM attributes set to at least 11.2, so they are specified at creation time:

SQL> CREATE DISKGROUP dg_acfs2 EXTERNAL REDUNDANCY DISK '/dev/newdisk3','/dev/newdisk4'
     ATTRIBUTE 'compatible.asm' = '11.2', 'compatible.advm' = '11.2';
Diskgroup created.

SQL> ALTER DISKGROUP dg_acfs2 ADD VOLUME vl_acfs2 SIZE 200M;
Diskgroup altered.

Alternatively, a volume can be created with the ASMCMD utility (the example below uses the DATA diskgroup):
ASMCMD [+] > volcreate -G data -s 200m vl_acfs2

Check that the volume device has been created:
ASMCMD [+] > volinfo -G data vl_acfs2

To get more information about volume management with ASMCMD command, run it with
the help option to see the available options as follows:
[oracle@ocmnode1 ~]$ asmcmd help

< —- Output trimmed —- >
volcreate, voldelete, voldisable, volenable, volinfo
volresize, volset, volstat
[oracle@ocmnode1 ~]$
 
Query GV$ASM_VOLUME view to get information about volume devices:

SQL> SET LINESIZE 150
SQL> COL volume_device FORMAT a30
SQL> SELECT inst_id, volume_name, volume_device, size_mb, usage, state
       FROM gv$asm_volume
      ORDER BY inst_id, volume_name;

INST_ID VOLUME_NAME VOLUME_DEVICE                  SIZE_MB USAGE STATE
------- ----------- ------------------------------ ------- ----- --------
      1 VL_ACFS     /dev/asm/vl_acfs-31                256 ACFS  ENABLED
      1 VL_ACFS2    /dev/asm/vl_acfs2-238              256       ENABLED
SQL>

Now use the volume device to create a file system:
[root@ocmnode1 ~]# /sbin/mkfs -t acfs /dev/asm/vl_acfs2-238
mkfs.acfs: version = 11.2.0.1.0.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/vl_acfs2-238
mkfs.acfs: volume size = 268435456
mkfs.acfs: Format complete.

Register the new filesystem in the ACFS mount registry:
[root@ocmnode1 ~]# /sbin/acfsutil registry -a /dev/asm/vl_acfs2-238 /u01/acfs2
acfsutil registry: mount point /u01/acfs2 successfully added to Oracle Registry

Mount the filesystem:
[root@ocmnode1 ~]# /bin/mount -t acfs /dev/asm/vl_acfs2-238 /u01/acfs2

Verify if the filesystem is mounted correctly:
[root@ocmnode1 acfs2]# mount
< — Output trimmed — >
/dev/asm/vl_acfs-31 on /u01/acfs type acfs (rw)
/dev/asm/vl_acfs2-238 on /u01/acfs2 type acfs (rw)
[root@ocmnode1 acfs2]#

Resize ACFS file system

To resize an ACFS file system, use acfsutil as follows:
[oracle@ocmnode1 ~]$ /sbin/acfsutil size +100m /u01/acfs
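
To verify the new size, acfsutil can display file system information (a quick check; the mount point is the one used above):
[oracle@ocmnode1 ~]$ /sbin/acfsutil info fs /u01/acfs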

Creating ACFS Snapshots

To create a snapshot of an ACFS file system, the "snap create" command is used. Create an empty
file in the ACFS mount point, and create a snapshot of the file system.
[oracle@ocmnode1 ~]$ touch /u01/acfs2/a
[oracle@ocmnode1 ~]$ /sbin/acfsutil snap create test_snapshot /u01/acfs2
acfsutil snap create: Snapshot operation is complete.

Once you create a snapshot, a "test_snapshot" folder is created under the
/u01/acfs2/.ACFS/snaps/ folder and a point-in-time copy of the file system contents is accessible there (the original files stay in place).
Now remove the file and restore it from the snapshot folder:

[oracle@ocmnode1 ~]$ rm -rf /u01/acfs2/a
[oracle@ocmnode1 acfs2]$ cp /u01/acfs2/.ACFS/snaps/test_snapshot/a /u01/acfs2/
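
The snapshots that exist for a mount point can also be listed with acfsutil (a quick check):
[oracle@ocmnode1 ~]$ /sbin/acfsutil snap info /u01/acfs2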

Deregistering and Dismounting an ACFS file system

To clean up the environment, or simply to remove the ACFS file system, follow the steps below.

Deregister the ACFS:
[oracle@ocmnode1 ~]$ /sbin/acfsutil registry -d /u01/acfs

Dismount the ACFS:
[oracle@ocmnode1 ~]$ /bin/umount /u01/acfs
[oracle@ocmnode1 ~]$ /sbin/fsck -a -v -y -t acfs /dev/asm/vl_acfs-31

Remove the file system:
[oracle@ocmnode1 acfs2]$ /sbin/acfsutil rmfs /dev/asm/vl_acfs-31

Disable the volume:
ASMCMD> voldisable -G data vl_acfs

Delete the volume:
ASMCMD> voldelete -G data vl_acfs

Practice:
Practice Time
► Create an ACFS file system using ASMCA utility ►10 min.
► Create an ACFS file system manually ►15 min.
► Deregister and Dismount an ACFS file system ►7 min.

OCM Tips and Tricks – Gather statistics on a specific table without invalidating cursors

Gather statistics on a specific table without invalidating cursors

Reference:
Oracle® Database PL/SQL Packages and Types Reference 11g Release 2 -> 140 DBMS_STATS
Rolling Cursor Invalidations with DBMS_STATS.AUTO_INVALIDATE (Doc ID 557661.1)

When gathering statistics with the DBMS_STATS package, the NO_INVALIDATE parameter controls whether dependent cursors in the shared pool are invalidated. This parameter accepts three values: TRUE does not invalidate dependent cursors, FALSE invalidates dependent cursors, and AUTO_INVALIDATE (the default) lets Oracle decide when to invalidate the cursors. To get the current default value of the NO_INVALIDATE preference, use the GET_PREFS function as follows:

SQL> SELECT DBMS_STATS.GET_PREFS('NO_INVALIDATE') FROM dual;
DBMS_STATS.GET_PREFS('NO_INVALIDATE')
------------------------------------------------------------
DBMS_STATS.AUTO_INVALIDATE
SQL>

In the following example, NO_INVALIDATE=>FALSE will be used to show how cursors are invalidated in the cache. Let's flush the shared pool, create a new table and gather statistics on it:

SQL> ALTER SYSTEM FLUSH SHARED_POOL;
System altered.

SQL> CREATE TABLE tbl_invalidation_test AS SELECT * FROM dba_objects;
Table created.

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(NULL,'tbl_invalidation_test');
PL/SQL procedure successfully completed.

Now query the table twice and check V$SQL to see count of executions, status of the cursor and count of invalidations:

SQL> SELECT COUNT(1) FROM tbl_invalidation_test;
COUNT(1)
———-
71972

SQL> /
COUNT(1)
———-
71972

SQL> SELECT child_number,
            parse_calls,
            executions,
            first_load_time,
            invalidations
       FROM v$sql
      WHERE sql_text = 'SELECT COUNT(1) FROM tbl_invalidation_test';

CHILD_NUMBER PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME     INVALIDATIONS
------------ ----------- ---------- ------------------- -------------
           0           2          2 2016-07-19/19:09:06             0

Now gather the table statistics with NO_INVALIDATE=>FALSE, which invalidates the cached cursors, then query the table and check the V$SQL view again (this time using the SQL_ID of the test query, taken from V$SQL):
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(NULL, 'TBL_INVALIDATION_TEST', NO_INVALIDATE => FALSE);
PL/SQL procedure successfully completed.

SQL> SELECT COUNT(1) FROM tbl_invalidation_test;
COUNT(1)
———-
71972

SQL> SELECT child_number, parse_calls, executions, object_status, invalidations
       FROM v$sql WHERE sql_id = '002swby2jr0qz';

CHILD_NUMBER PARSE_CALLS EXECUTIONS OBJECT_STATUS INVALIDATIONS
———— ———– ———- ——————- ————-
0            2           2          INVALID_UNAUTH      1

As seen from the output, the status of the cursor is INVALID_UNAUTH and the invalidation count is 1.
To change the default value of the NO_INVALIDATE preference, use the SET_GLOBAL_PREFS procedure (the deprecated SET_PARAM call shown below still works in 11g):

SQL> EXEC DBMS_STATS.SET_PARAM('NO_INVALIDATE','FALSE');
PL/SQL procedure successfully completed.

SQL> SELECT DBMS_STATS.GET_PREFS('NO_INVALIDATE') FROM dual;
DBMS_STATS.GET_PREFS('NO_INVALIDATE')
——–
FALSE
SQL>
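
After the test, the preference can be set back to its default value; a minimal sketch using the non-deprecated call (the value string is the same one returned by GET_PREFS above):

SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('NO_INVALIDATE','DBMS_STATS.AUTO_INVALIDATE');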

Practice:
Perform the following practice in 15 min.

► Flush the shared pool, create a new table based on DBA_OBJECTS and gather table statistics. Query the table twice and get the counts of executions and invalidations from the V$SQL view. Then gather statistics again with cursor invalidation and query the V$SQL view once more.

If you want to get more information on this topic, get my book from the following link:
https://www.amazon.com/Oracle-Certified-Master-Exam-Guide/dp/1536800791/ 

OCM 11g Tips and Tricks – Creating Incrementally updated backups

To speed up recovery after a failure, image copy backups can be rolled forward with daily incremental backups: each incremental backup is taken and then recovered (applied) over the image copies. When the database crashes, you only need to restore the incrementally updated copy and apply the remaining changes from the redo log files, because the restored copy already contains all previously applied incrementals.

Let’s try the following scenario to show how it works and make sure we can use and implement incrementally updated backup/recovery during OCM exam:

– Create an Incrementally updated backup

– Add new tablespace with new datafile, create table on that tablespace

– Take an incrementally updated backup again

– Delete the datafile and query the table

– Restore a datafile from the backup

 

First of all, enable block change tracking as follows:

 

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/tmp/blk_change.trc';

 

Next, connect to RMAN and take an incrementally updated backup:

 

RMAN> RUN {
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'mydb_incr_backup' DATABASE;
RECOVER COPY OF DATABASE WITH TAG 'mydb_incr_backup';
}

 

On the first run, this script creates an image copy of the database (there is no copy to recover yet); on subsequent runs it takes a level 1 incremental backup and applies it to that copy. Now add a new tablespace, create a table on it and query it:

 

SQL> CREATE TABLESPACE mytbs01 DATAFILE '/home/oracle/oradata/mytbs01.dbf';
SQL> CREATE TABLE tbl_test01 (id NUMBER, name VARCHAR2(10)) TABLESPACE mytbs01;
SQL> INSERT INTO tbl_test01 VALUES(1, 'oracle');
SQL> COMMIT;
SQL> SELECT * FROM tbl_test01;
ID NAME
---------- ----------
1          oracle

 

Run the same backup command again. This time it will take an incremental backup and recover it over the image copy backup of the database:

 

RMAN> RUN {
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'mydb_incr_backup' DATABASE;
RECOVER COPY OF DATABASE WITH TAG 'mydb_incr_backup';
}

 

Remove (rename) the datafile you have just created, flush the buffer cache and query the table:

 

[oracle@ocm11g oradata]$ mv mytbs01.dbf mytbs01.dbf_backup
SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;
SQL> SELECT * FROM tbl_test01;
SELECT * FROM tbl_test01
*

ERROR at line 1:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/home/oracle/oradata/mytbs01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

 

Because we have an incrementally updated backup, we can switch to it using the SWITCH DATAFILE ... TO COPY command as follows:
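
Note that with the database open, RMAN normally requires the affected datafile to be offline before it can be switched to a copy, and it has to be brought back online after the recovery. A hedged sketch of these two extra steps, using the datafile number from this example:

SQL> ALTER DATABASE DATAFILE 5 OFFLINE;
-- ...run the SWITCH and RECOVER commands shown below, then:
SQL> ALTER DATABASE DATAFILE 5 ONLINE;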

 

RMAN> LIST COPY OF DATAFILE 5;
List of Datafile Copies
=======================
Key     File S Completion Time Ckp SCN    Ckp Time
------- ---- - --------------- ---------- ---------------
21      5    A 24-OCT-14       1458646    24-OCT-14
        Name: /home/oracle/flash_recovery_area/OCM_NEW/datafile/o1_mf_mytbs01_b4mqo4k5_.dbf
        Tag: MYDB_INCR_BACKUP

RMAN> SWITCH DATAFILE 5 TO COPY;
datafile 5 switched to datafile copy "/home/oracle/flash_recovery_area/OCM_NEW/datafile/o1_mf_mytbs01_b4mqo4k5_.dbf"
RMAN>

 

Now recover the datafile and query the table:

 

RMAN> RECOVER DATAFILE 5;
SQL> SELECT * FROM tbl_test01;
ID NAME
---------- ----------
1          oracle

OCM Tip of the week – Implement fine-grained access control

Implement fine-grained access control

 

Reference:

Database Security Guide -> 7 Using Oracle Virtual Private Database to Control Data Access

Database PL/SQL Packages and Types Reference -> DBMS_RLS

Understanding Fine-Grained Access Control (DBMS_RLS) on INSERT (Doc ID 99250.1)

Master Note For Oracle Virtual Private Database ( VPD / FGAC / RLS ) (Doc ID 1352641.1)

Oracle8i Fine Grained Access Control – Working Examples (Doc ID 67977.1)

Evolution of Fine Grain Access Control FGAC Feature From 8i to 10g (Doc ID 281829.1)

 

Virtual Private Database (VPD) is used to prevent users from accessing data that is not related to them. VPD relies on the DBMS_RLS package, which provides a fine-grained access control interface: a security policy is attached directly to a database object, and Oracle automatically modifies SQL statements against that object by appending the WHERE condition returned by the policy function before the data is accessed.

Let's create a simple VPD and see how it works. Refer to Chapter 7 of the "Database Security Guide" documentation for the syntax of the main commands. In the following example, we create a table (TBL_VPD_TEST) based on the ALL_OBJECTS view under a new user USR_VPD, grant SELECT on it to a second user USR_VPD_2, and create a policy that restricts the rows the second user can query.

To implement this scenario, let’s create users and a table:

 

SQL> CREATE USER USR_VPD IDENTIFIED BY usr_vpd;
User created.

SQL> CREATE USER usr_vpd_2 IDENTIFIED BY usr_vpd_2;
User created.

SQL> GRANT CONNECT, RESOURCE TO usr_vpd, usr_vpd_2;
Grant succeeded.

SQL> CONN usr_vpd/usr_vpd;
Connected.


SQL> CREATE TABLE tbl_vpd_test
AS
SELECT * FROM all_objects;
Table created.

SQL> GRANT SELECT ON tbl_vpd_test TO usr_vpd_2;
Grant succeeded.

SQL>

Now we need a function that generates the dynamic WHERE clause. The function must accept a schema name and an object name as input and must return a valid WHERE clause. The following error will appear if the returned WHERE clause is invalid:

 

SQL>
ERROR at line 1:
ORA-28113: policy predicate has error

 

To access the user's session information when building the WHERE clause, an application context is normally used; the next topic describes application contexts in detail. Create the function as follows:

 

SQL> CREATE OR REPLACE FUNCTION hide_all_except_100 (v_schema IN VARCHAR2, v_objname IN VARCHAR2)
       RETURN VARCHAR2
     AS
       where_clause VARCHAR2 (100);
     BEGIN
       IF USER = 'USR_VPD_2'
       THEN
         where_clause := 'object_id=100';
       ELSE
         where_clause := '';
       END IF;
       RETURN (where_clause);
     END hide_all_except_100;
     /
Function created.

SQL>

 

 

Note: The preferred method of passing session information to the policy function is an application context. Detailed information about application contexts is provided in the next chapter.

Now use the DBMS_RLS.ADD_POLICY procedure to add a fine-grained access control policy to the table.

The syntax of ADD_POLICY command is as follows:

 

DBMS_RLS.ADD_POLICY (
object_schema IN VARCHAR2 NULL,
object_name IN VARCHAR2,
policy_name IN VARCHAR2,
function_schema IN VARCHAR2 NULL,
policy_function IN VARCHAR2,
statement_types IN VARCHAR2 NULL,
update_check IN BOOLEAN FALSE,
enable IN BOOLEAN TRUE,
static_policy IN BOOLEAN FALSE,
policy_type IN BINARY_INTEGER NULL,
long_predicate IN BOOLEAN FALSE,
sec_relevant_cols IN VARCHAR2,
sec_relevant_cols_opt IN BINARY_INTEGER NULL);

Detailed information about all parameters of the ADD_POLICY procedure is available in the "Database PL/SQL Packages and Types Reference", chapter "110 DBMS_RLS". Now connect as SYS and create a new policy that prevents user USR_VPD_2 from seeing any rows except those where object_id=100, as follows:

 

SQL> BEGIN
       DBMS_RLS.add_policy (object_schema   => 'USR_VPD',
                            object_name     => 'TBL_VPD_TEST',
                            policy_name     => 'MY_VPD_POLICY_01',
                            function_schema => 'USR_VPD',
                            policy_function => 'HIDE_ALL_EXCEPT_100',
                            statement_types => 'SELECT');
     END;
     /
PL/SQL procedure successfully completed.
SQL>

 

You can also specify the remaining DML commands, INSERT, UPDATE and DELETE, in the statement_types parameter.

Now query table with both USR_VPD and USR_VPD_2 users:

 

SQL> SHOW USER
USER is “USR_VPD”

SQL> SELECT COUNT(1) FROM usr_vpd.tbl_vpd_test;
COUNT(1)
----------
55713

SQL> CONN usr_vpd_2/usr_vpd_2
Connected.

SQL> SELECT COUNT(1) FROM usr_vpd.tbl_vpd_test;
COUNT(1)
----------
1
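
To verify which policies are attached to the table, the DBA_POLICIES dictionary view can be queried (a quick check):

SQL> SELECT object_owner, object_name, policy_name, sel, ins, upd, del
       FROM dba_policies
      WHERE object_name = 'TBL_VPD_TEST';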

 

Exam tip: If you successfully create a policy but it does not work for some reason and you need to recreate it, use the DROP_POLICY procedure to drop the policy from the table. It is easy to use; simply provide the schema name, table name and policy name as follows:

 

SQL> EXEC DBMS_RLS.DROP_POLICY('usr_vpd','tbl_vpd_test','my_vpd_policy_01');
PL/SQL procedure successfully completed.
SQL>

 

update_check parameter

During an exam you might be asked to create a policy that prevents a user from making an update that violates the WHERE condition of the policy. In that case, the update_check parameter must be set to TRUE. To understand how it works, check the following example. The policy we created earlier does not set the update_check parameter, and its default value is FALSE.

Let’s connect with the second user, query the table and try to update it:

 

SQL> SELECT object_id FROM usr_vpd.tbl_vpd_test;
OBJECT_ID
----------
100


SQL> UPDATE usr_vpd.tbl_vpd_test SET object_id=500;
1 row updated.
SQL>

 

The update completed successfully, but the updated row no longer satisfies the policy predicate, so querying the table now returns no rows:

 

SQL> SELECT object_id FROM usr_vpd.tbl_vpd_test;
no rows selected


SQL> ROLLBACK
Rollback complete.

SQL> CONN / AS SYSDBA
Connected.

SQL>

 

Make sure the second user has the UPDATE privilege on the table:

 

SQL> GRANT UPDATE ON usr_vpd.tbl_vpd_test TO usr_vpd_2;
Grant succeeded.
SQL>

 

Next, drop the policy and recreate it:

 

SQL> EXEC DBMS_RLS.DROP_POLICY('usr_vpd','tbl_vpd_test','my_vpd_policy_01');
PL/SQL procedure successfully completed.


SQL> BEGIN
       DBMS_RLS.add_policy (object_schema   => 'USR_VPD',
                            object_name     => 'TBL_VPD_TEST',
                            policy_name     => 'MY_VPD_POLICY_01',
                            function_schema => 'USR_VPD',
                            policy_function => 'HIDE_ALL_EXCEPT_100',
                            update_check    => TRUE,
                            statement_types => 'SELECT, UPDATE');
END;
/
PL/SQL procedure successfully completed.
SQL>

 

Now try to update the only row you are allowed to see:

 

SQL> SELECT object_id FROM usr_vpd.tbl_vpd_test;
OBJECT_ID
----------
100


SQL> UPDATE usr_vpd.tbl_vpd_test SET object_id=500;
UPDATE usr_vpd.tbl_vpd_test SET object_id=500
*
ERROR at line 1:
ORA-28115: policy with check option violation
SQL>

Apply specific function to each row that is being exported (REMAP_DATA)

Using the REMAP_DATA parameter, you can apply a function to column values as rows are exported, so the dump file contains the remapped data.

 

In the following example, we create a function that accepts a string as input and converts it to uppercase. Then we use this function with the REMAP_DATA parameter to remap the values of a specific column of the table.

 

Create a package with one function that converts an input string to uppercase and returns it:

 

SQL> CREATE OR REPLACE PACKAGE usr_dptest1.pkg_remap
     AS
       FUNCTION make_upper (p_val VARCHAR2)
         RETURN VARCHAR2;
     END;
     /
Package created.

SQL> CREATE OR REPLACE PACKAGE BODY usr_dptest1.pkg_remap
     AS
       FUNCTION make_upper (p_val VARCHAR2)
         RETURN VARCHAR2
       IS
       BEGIN
         RETURN UPPER (p_val);
       END;
     END;
     /
Package body created.

 

Create a table with 5 rows whose OBJ_NAME column values are lowercase:

 

SQL> CREATE TABLE usr_dptest1.all_lowers
     AS
     SELECT LOWER (object_name) obj_name, object_id, object_type
       FROM dba_objects
      WHERE ROWNUM <= 5;
Table created.

SQL> SET LINESIZE 150
SQL> COL obj_name FORMAT a15
SQL> SELECT * FROM usr_dptest1.all_lowers;

OBJ_NAME         OBJECT_ID OBJECT_TYPE
--------------- ---------- -------------------
icol$                   20 TABLE
i_user1                 46 INDEX
con$                    28 TABLE
undo$                   15 TABLE
c_cobj#                 29 CLUSTER

 

Create a parameter file and use REMAP_DATA parameter to convert values of OBJ_NAME column to uppercase:

 

[oracle@oemgrid ~]$ more /tmp/parfile03.dat
directory=my_dir
dumpfile=exp_remap_data_01.dmp
schemas=usr_dptest1
include=table:"IN ('ALL_LOWERS')"
remap_data=usr_dptest1.all_lowers.obj_name:usr_dptest1.pkg_remap.make_upper
[oracle@oemgrid ~]$

Export the dump file, drop the table, import it back and query the table. All rows of column OBJ_NAME will be converted to uppercase:

 

[oracle@oemgrid ~]$ expdp system/oracle parfile=/tmp/parfile03.dat
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_03": system/******** parfile=/tmp/parfile03.dat
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "USR_DPTEST1"."ALL_LOWERS" 5.937 KB 5 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_03" successfully loaded/unloaded
[oracle@oemgrid ~]$

SQL> DROP TABLE usr_dptest1.all_lowers;
Table dropped.
SQL>

[oracle@oemgrid ~]$ impdp system/oracle directory=my_dir dumpfile=exp_remap_data_01.dmp
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** directory=my_dir dumpfile=exp_remap_data_01.dmp
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "USR_DPTEST1"."ALL_LOWERS" 5.937 KB 5 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 03:37:10
[oracle@oemgrid ~]$ sqlplus / as sysdba

SQL> SET LINESIZE 150
SQL> COL obj_name FORMAT a20
SQL> SELECT * FROM usr_dptest1.all_lowers;

OBJ_NAME              OBJECT_ID OBJECT_TYPE
-------------------- ---------- -------------------
ICOL$                        20 TABLE
I_USER1                      46 INDEX
CON$                         28 TABLE
UNDO$                        15 TABLE
C_COBJ#                      29 CLUSTER
SQL>

 

 

Encryption and Caching features of Oracle Securefile LOB segments

Encryption

You can enable block-level encryption for LOB segments using the ENCRYPT USING 'encryption_algorithm' clause. To explicitly specify that a LOB is not encrypted, use the DECRYPT option.

SecureFiles supports 4 encryption algorithms: 3DES168, AES128, AES192, AES256.

The following error will appear if you try to encrypt an object that is owned by SYS user:

ORA-28336: cannot encrypt SYS owned objects

 

The following error will appear if you have not created and opened a wallet:

ORA-28365: wallet is not open

 

Check the following documentation to configure the wallet properly:

Reference:

Database Advanced Security Administrator’s Guide -> 3 Securing Stored Data Using Transparent Data Encryption-> 3.3.1.4 Using Transparent Data Encryption in a Multi-Database Environment

 

Edit sqlnet.ora file and specify the folder to store the wallet as follows:

 

ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/product/11.2.0/dbhome_1/wallet)))

 

Make sure to create a wallet folder under $ORACLE_HOME and run the following command to create the wallet and set the master encryption key:

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "mypassword";

 

You can query V$ENCRYPTION_WALLET view to get the status of the wallet:

SQL> COL wrl_parameter FORMAT a50
SQL> SELECT status, wrl_parameter FROM v$encryption_wallet;

STATUS             WRL_PARAMETER
------------------ --------------------------------------------------
OPEN               /u01/app/oracle/product/11.2.0/dbhome_1/wallet

SQL>

 

Use the following command to open the wallet again after the database is restarted:

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "mypassword";

 

Now use ENCRYPT option to encrypt the LOB segment as follows:

SQL> CREATE TABLE dbsnmp.tbl_lob_encrypt (id NUMBER, lob_data CLOB)
     LOB (lob_data) STORE AS SECUREFILE (TABLESPACE users ENCRYPT USING 'AES128');

 

Table created.

SQL>

 

Caching

 

With the caching feature you can keep LOB data in the buffer cache. Think twice before enabling caching for LOB segments, because cached LOB blocks can push needed blocks out of the buffer cache and affect the performance of the database.

To put LOB segment to the cache, use CACHE clause as follows:

 

SQL> CREATE TABLE tbl_lob_cache (lob_data CLOB)

LOB (lob_data) STORE AS SECUREFILE (tablespace users CACHE);

Table created.

SQL>

 

To enable caching for an existing LOB segment, use the ALTER TABLE ... MODIFY LOB command as follows:

 

SQL> ALTER TABLE tbl_lob_cache MODIFY LOB (lob_data)(CACHE);

Table altered.

SQL>
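
To turn caching back off after testing, the same MODIFY LOB clause can be used with the NOCACHE option (a minimal sketch):

SQL> ALTER TABLE tbl_lob_cache MODIFY LOB (lob_data) (NOCACHE);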

OCM tips – Implement securefile LOB

Reference:
Master Note: Overview of Oracle Large Objects (BasicFiles LOBs and SecureFiles LOBs) (Doc ID 1490228.1)
Application Development -> Database SecureFiles and Large Objects Developer’s Guide -> 4 Using Oracle SecureFiles LOBs

In this section we will talk about SecureFile LOB (Large Object) segments. Detailed information about creating and managing LOB segments is provided in the next chapter.

SecureFiles is a LOB storage architecture used to store unstructured data; it offers features such as deduplication, compression and encryption. If you want to store a LOB as a SecureFile, use the STORE AS SECUREFILE clause in the CREATE TABLE statement. If you omit this clause, the LOB is stored in the traditional format, which is called BasicFile.

A SecureFile LOB can be created only in a tablespace managed with Automatic Segment Space Management (ASSM). The following error will appear if you attempt to create a SecureFile LOB in a non-ASSM tablespace:

ORA-43853: SECUREFILE lobs cannot be used in non-ASSM tablespace “SYSTEM”

The DB_SECUREFILE parameter controls how the database treats LOB storage. There are four values for this parameter:

– NEVER forces LOBs specified as SecureFiles to be created as BasicFiles LOBs.
– PERMITTED is the default value; it allows LOBs to be created as SecureFiles when the STORE AS SECUREFILE clause is used.
– ALWAYS forces all LOBs created in ASSM tablespaces to be created as SecureFiles LOBs.
– IGNORE ignores the STORE AS SECUREFILE clause and any SecureFiles options, so LOBs are created as BasicFiles.

Note: Make sure the value of the DB_SECUREFILE parameter is not set to NEVER or IGNORE if you are asked to create a SecureFiles LOB during the exam.

Let’s create a simple table with a SecureFile LOB segment. First, create a table with BasicFile LOB as follows:

SQL> CREATE TABLE tbl_basicfile_lob
2 (id number,
3 lob_data clob)
4 LOB (lob_data) STORE AS BASICFILE;
Table created.
 
SQL> INSERT INTO tbl_basicfile_lob VALUES(1,'This is my long text');
1 row created.
 
SQL> COMMIT;
Commit complete.
 
SQL>

Now create a table with SecureFile LOB data as follows:

SQL> CREATE TABLE tbl_securefile_lob (
2 id number, lob_data clob)
3 LOB (lob_data) STORE AS SECUREFILE (tablespace users);
Table created.
 
SQL> INSERT INTO tbl_securefile_lob VALUES(1,'This is my long text');
1 row created.
 
SQL> COMMIT;
Commit complete.
SQL>

Query DBA_LOBS view to get more information about the lob segments:

SQL> SET LINESIZE 150
SQL> COL column_name FORMAT a10
SQL> SELECT table_name, column_name, segment_name, securefile FROM dba_lobs WHERE table_name LIKE 'TBL%';
 
TABLE_NAME COLUMN_NAM SEGMENT_NAME SEC
—————————— ———- —————————— —
TBL_BASICFILE_LOB LOB_DATA SYS_LOB0000096459C00002$$ NO
TBL_SECUREFILE_LOB LOB_DATA SYS_LOB0000096464C00002$$ YES
SQL>

If the LOB segment is stored as a SecureFile, you will be able to use the following features.

  • Compression
  • Encryption
  • Deduplication
  • Caching
  • Logging

In this newsletter, we will cover the compression feature

Compression
This feature compresses the LOB segment. Table compression does not affect LOB compression: if the table is compressed, the LOB segment is not compressed automatically. In 11gR1 Oracle offers two compression levels, MEDIUM and HIGH; a LOW level was added in 11gR2. Check the following example to see how to compress a LOB segment and the size difference between compressed
and uncompressed LOBs.

Create a table with non-compressed LOB segment and insert some data:

SQL> CREATE TABLE tbl_lob_nocompress
(
id NUMBER,
lob_data CLOB
)
LOB (lob_data) STORE AS SECUREFILE
(TABLESPACE users NOCOMPRESS);
Table created.
 
SQL> INSERT INTO tbl_lob_nocompress
SELECT object_id, RPAD (' ', 5000, object_name) FROM dba_objects
WHERE ROWNUM < 500;
499 rows created.

Now create a table with a HIGH-compressed LOB segment, insert some data and gather statistics on both tables:

SQL> CREATE TABLE tbl_lob_compress_high
(
id NUMBER,
lob_data CLOB
)
LOB (lob_data) STORE AS SECUREFILE
(TABLESPACE users COMPRESS HIGH);
 
Table created.
SQL> INSERT INTO TBL_LOB_COMPRESS_HIGH
SELECT object_id, RPAD (' ', 5000, object_name) FROM dba_objects
WHERE ROWNUM < 500;
499 rows created.
 
SQL> EXEC DBMS_STATS.gather_table_stats(USER,'TBL_LOB_NOCOMPRESS');
PL/SQL procedure successfully completed.
 
SQL> EXEC DBMS_STATS.gather_table_stats(USER,'TBL_LOB_COMPRESS_HIGH');
PL/SQL procedure successfully completed.

Query DBA_LOBS view to find out the status of compression:

SQL> SELECT table_name, segment_name, securefile, compression
       FROM dba_lobs
      WHERE table_name LIKE '%COMPRESS%';

TABLE_NAME                     SEGMENT_NAME                   SEC COMPRE
------------------------------ ------------------------------ --- ------
TBL_LOB_NOCOMPRESS             SYS_LOB0000096719C00002$$      YES NO
TBL_LOB_COMPRESS_HIGH          SYS_LOB0000096722C00002$$      YES HIGH

Now query USER_SEGMENTS view to see the size difference of compressed and noncompressed lob segments:

SQL> COLUMN segment_name FORMAT a30
SQL> SELECT segment_name, bytes
       FROM user_segments
      WHERE segment_name IN ('SYS_LOB0000096719C00002$$','SYS_LOB0000096722C00002$$');

SEGMENT_NAME                        BYTES
------------------------------ ----------
SYS_LOB0000096719C00002$$         5373952
SYS_LOB0000096722C00002$$          131072
SQL>

In the next newsletter, we will cover the remaining SecureFiles features.

If you want to get more information on managing SecureFiles LOBs, get my book from the following link:
https://www.amazon.com/Oracle-Certified-Master-Exam-Guide/dp/1536800791/ 

OCM tips – Migration of LOBs to SecureFiles

Reference:

HOW TO ONLINE CONVERT TO/FROM SECUREFILES FROM/TO BASICFILES (Doc ID 728758.1)

During the OCM exam you might be asked to convert a BasicFiles LOB to a SecureFiles LOB. To perform this operation, you should use the online redefinition feature; more detailed information about this feature was provided in the previous chapter. Check the "Using Online Redefinition for Migrating Tables with BasicFiles LOBs" part of the documentation ("Database SecureFiles and Large Objects Developer's Guide", Chapter 4) to see an example of how to migrate from a BasicFiles to a SecureFiles LOB. Below, we will provide a simple example.

– Create a table with BasicFiles LOB column:

SQL> CREATE TABLE DBSNMP.tbl_basic_lob
     (
       id        NUMBER,
       clob_data CLOB
     )
     TABLESPACE users;
Table created.
SQL>

 

– Insert a few rows in it as we did in the previous example:

 

SQL> INSERT INTO DBSNMP.tbl_basic_lob
     SELECT object_id, RPAD (' ', 5000, 'This is test ') FROM dba_objects WHERE ROWNUM < 3000;
2999 rows created.

SQL>

 

The following error will appear if you try to use SecureFiles LOB features on this table:

 

SQL> ALTER TABLE DBSNMP.tbl_basic_lob MODIFY LOB (clob_data) (CACHE DEDUPLICATE);
ALTER TABLE DBSNMP.tbl_basic_lob MODIFY LOB (clob_data) (CACHE DEDUPLICATE)
*
ERROR at line 1:
ORA-43856: Unsupported LOB type for SECUREFILE LOB operation

 

Create a table with the SecureFiles LOB option, which will be used as the interim table:

 

SQL> CREATE TABLE DBSNMP.tbl_securefiles_lob
     (
       id        NUMBER,
       clob_data CLOB
     )
     LOB (clob_data) STORE AS SECUREFILE (TABLESPACE users);
Table created.

SQL>

 

Start the redefinition using DBMS_REDEFINITION.START_REDEF_TABLE procedure:

 

SQL> EXEC dbms_redefinition.start_redef_table('DBSNMP', 'TBL_BASIC_LOB', 'TBL_SECUREFILES_LOB', NULL, 2);
PL/SQL procedure successfully completed.

 

Clone the dependent objects using DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS procedure:

 

SQL> DECLARE
       error_count PLS_INTEGER := 0;
     BEGIN
       DBMS_REDEFINITION.copy_table_dependents ('DBSNMP', 'TBL_BASIC_LOB', 'TBL_SECUREFILES_LOB',
                                                1, TRUE, TRUE, TRUE, FALSE, error_count);
       DBMS_OUTPUT.put_line ('errors := ' || TO_CHAR (error_count));
     END;
     /
PL/SQL procedure successfully completed.

 

Complete the redefinition process:

 

SQL> EXEC dbms_redefinition.finish_redef_table('DBSNMP', 'TBL_BASIC_LOB', 'TBL_SECUREFILES_LOB');
PL/SQL procedure successfully completed.
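
If the redefinition fails partway through, it can be rolled back before retrying by calling ABORT_REDEF_TABLE with the same arguments (a minimal sketch):

SQL> EXEC dbms_redefinition.abort_redef_table('DBSNMP', 'TBL_BASIC_LOB', 'TBL_SECUREFILES_LOB');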

 

Query DBA_LOBS view to make sure that the table is migrated to SecureFiles LOB:

 

SQL> SELECT securefile FROM dba_lobs WHERE table_name = 'TBL_BASIC_LOB';

SEC

---

YES

 

Now try the command that failed before when using SecureFiles options. This time it will run successfully:

 

SQL> ALTER TABLE DBSNMP.tbl_basic_lob MODIFY LOB (clob_data) (CACHE DEDUPLICATE);

Table altered.

SQL>

 

To get more detailed information on SecureFiles and managing LOB segments, get the OCM preparation book from the following link:

http://www.amazon.com/Oracle-Certified-Master-Exam-Guide/dp/1536800791/

OCM Tips – Create Encrypted Tablespace

In the first section of the OCM exam (Server Configuration), there is a topic "1.3. Create and manage temporary, permanent, and undo tablespaces". I have provided very detailed information on this topic in the book and covered the following subjects:

• Create Permanent tablespace
• Create Compressed Tablespace
• Create Encrypted Tablespace
• Create Temporary Tablespace
• Create temporary tablespace group
• Specify Nonstandard block size for tablespace
• Modify Tablespace (OFFLINE, ONLINE, READ-ONLY, ALTER TABLESPACE, SHRINK, RENAME, DROP)
• Managing UNDO tablespace

 

In this newsletter I will show you how to create an encrypted tablespace.

At the beginning of each exam topic, a reference to the documentation or to a MetaLink (My Oracle Support) note is provided. Check the following documentation to get more information on creating an encrypted tablespace during the exam:

Database Advanced Security Administrator’s Guide -> 8 Securing Stored Data Using Transparent

Data Encryption -> 8.2.5 Encrypting Entire Tablespaces

 

Any permanent tablespace can be encrypted to protect sensitive data. Tablespace encryption is completely transparent to your applications, and hence no application modification is required.

It is not possible to specify encryption for undo and temporary tablespaces, and there is no need to, because data from an encrypted tablespace remains encrypted even when it is written to the online redo log files, the undo tablespace and the temporary tablespace.

Before attempting to create an encrypted tablespace, a wallet must be created to hold the encryption key, otherwise the following error will pop up “ORA-28365: wallet is not open”.

To do this, create a wallet folder and change the sqlnet.ora file as follows:

 

[oracle@ocm11g ~]$ cd $ORACLE_HOME
[oracle@ocm11g db_1]$ mkdir wallet
[oracle@ocm11g db_1]$ cd wallet/
[oracle@ocm11g wallet]$ pwd
/home/oracle/product/11.2.0/db_1/wallet
[oracle@ocm11g wallet]$ cd $ORACLE_HOME/network/admin
[oracle@ocm11g admin]$ vi sqlnet.ora

ENCRYPTION_WALLET_LOCATION=
  (SOURCE=
    (METHOD=file)
    (METHOD_DATA=
      (DIRECTORY=/home/oracle/product/11.2.0/db_1/wallet)))

 

Enter the following command to protect the Oracle wallet:

 

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "mywallet";
System altered.
SQL>

 

The following error will pop up if you run the above command without changing sqlnet.ora file:

 

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "mywallet";
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "mywallet"
*
ERROR at line 1:
ORA-28368: cannot auto-create wallet
SQL>

 

 

Now, create an encrypted tablespace as follows:

 

SQL> CREATE TABLESPACE encrypted_tbs
     DATAFILE '/home/oracle/oradata/encrypted_tbs01.dbf' SIZE 100M
     ENCRYPTION USING 'AES256'
     DEFAULT STORAGE(ENCRYPT);
Tablespace created.
SQL>

 

After creating an encrypted tablespace, check V$ENCRYPTED_TABLESPACES view:



SQL> SELECT tbs.name, tbs_en.encryptionalg encr_algorithm
       FROM v$tablespace tbs, v$encrypted_tablespaces tbs_en
      WHERE tbs.ts# = tbs_en.ts#;

NAME                           ENCR_ALGO
------------------------------ ---------
ENCRYPTED_TBS                  AES256
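
To confirm that the encryption is transparent to applications, a table in the encrypted tablespace can be created and queried in exactly the usual way (a minimal sketch; the table and data are illustrative):

SQL> CREATE TABLE tbl_enc_test (id NUMBER, name VARCHAR2(30)) TABLESPACE encrypted_tbs;
SQL> INSERT INTO tbl_enc_test VALUES (1, 'sensitive data');
SQL> COMMIT;
SQL> SELECT * FROM tbl_enc_test;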

 

If you want to get more information on creating and managing tablespaces, get my book from the following link:

https://www.amazon.com/Oracle-Certified-Master-Exam-Guide/dp/1536800791/