10 November, 2017

TZ parameter & sysdate

Wrong Oracle Database time information


After we migrated our Oracle Applications 11i system to the AIX 6.1 OS, we received complaints that a few instances were showing wrong time information.
All concurrent programs were showing CET time instead of EST time.
Running a simple statement like the one below returned the wrong time when the query was run from the application node, although the time was correct when checked from the database node.

SELECT TO_CHAR(SYSDATE, 'dd-mm-yy hh24:mi:ss') from dual;
Here, first of all, it is important to know that SYSDATE has *NOTHING* to do with the database time zone (DBTIMEZONE) or with time zone-aware functions like SYSTIMESTAMP.
The SYSDATE function gets its time information from the operating system clock.
So you cannot influence SYSDATE values by changing database parameters.
SYSDATE returns the current date and time set for the operating system on which the database server resides.
The data type of the returned value is DATE, and the format returned depends on the value of the NLS_DATE_FORMAT initialization parameter.
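A quick way to see that these are separate things: SYSDATE carries no time zone information at all, SYSTIMESTAMP carries the operating system's UTC offset, and DBTIMEZONE is a database setting that affects neither of them. A minimal illustration:

SQL> select to_char(sysdate, 'dd-mm-yy hh24:mi:ss')              as os_date,
  2         to_char(systimestamp, 'dd-mm-yy hh24:mi:ss tzh:tzm') as os_timestamp,
  3         dbtimezone
  4  from   dual;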
The operating system parameter to focus on in such a scenario is the TZ environment variable.
In our case we found that the sysdate information was correct when checked from the database node, but it showed CET time when checked from the application node.
We checked and found that the database listener was somehow not starting with the correct TZ setting.
The following steps were done:
________________________________________________________
1.- Configure TZ variable on your server

export TZ="America/New_York"   (or whatever you want it to be)
echo $TZ
should show >> America/New_York
2.- Stop listener

lsnrctl stop

3.- Start listener

lsnrctl start

________________________________________________________
So, by starting the listener with a different time zone (using the TZ OS environment variable), you can have databases with different time zones running on the same physical hardware, each showing the correct time.

SP2-1503: Unable to initialize Oracle call interface in Oracle 11g

When we were trying to connect to our upgraded 11.2.0.3 database, the error below was thrown and we were not able to connect.
$ sqlplus as sysdba
SP2-1503: Unable to initialize Oracle call interface
SP2-0152: ORACLE may not be functioning properly

Initial checks done:
>> User oracle was in the correct database admin group (/etc/group).
>> No TNS issues, as the TNS_ADMIN variable was pointing to the correct directory and all files in it (tnsnames.ora, sqlnet.ora, listener.ora) had valid entries.
>> The $PATH variable was pointing to the correct $ORACLE_HOME/bin.
On checking the environment variables closely, it was found that the ORA_TZFILE variable was pointing to a non-existent file.

echo $ORA_TZFILE
/u01/oracle/dev/db/tech_st/11.2.0/oracore/zoneinfo/timezlrg.dat
ls -ltr /u01/oracle/dev/db/tech_st/11.2.0/oracore/zoneinfo/timezlrg.dat
ls: /u01/oracle/dev/db/tech_st/11.2.0/oracore/zoneinfo/timezlrg.dat: No such file or directory

The file timezlrg.dat was not present at that location, though there were other files with names like
timezlrg_<N>.dat.

-rw-r--r-- 1 oradev dba 507957 Aug 18 2011 timezlrg_2.dat
-rw-r--r-- 1 oradev dba 493675 Aug 18 2011 timezlrg_1.dat
-rw-r--r-- 1 oradev dba 791430 Aug 18 2011 timezlrg_14.dat
-rw-r--r-- 1 oradev dba 782475 Aug 18 2011 timezlrg_13.dat
-rw-r--r-- 1 oradev dba 785621 Aug 18 2011 timezlrg_12.dat
-rw-r--r-- 1 oradev dba 787272 Aug 18 2011 timezlrg_11.dat
-rw-r--r-- 1 oradev dba 792894 Aug 18 2011 timezlrg_10.dat


Unsetting this variable solved our issue.
$ unset ORA_TZFILE
$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 16 11:45:08 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL>
We can also set ORA_TZFILE to a specific timezone file; if we do not set it, then by default the file with the maximum value of <N> is used.
export ORA_TZFILE=$ORACLE_HOME/oracore/zoneinfo/timezlrg_<N>.dat
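To confirm which timezone file version the database itself is using (independent of the client-side ORA_TZFILE setting), you can query V$TIMEZONE_FILE; a minimal check whose output will of course vary by environment:

SQL> select filename, version from v$timezone_file;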

KEY ORACLE BACKGROUND PROCESSES RELATED TO BACKUP AND RECOVERY

The main background processes related to the Backup & Recovery process are:
A) THE CHECKPOINT PROCESS
B) THE LOG WRITER PROCESS
C) THE ARCHIVER PROCESS
Read further for a brief description of these important Oracle background processes.
A) THE CHECKPOINT PROCESS (ora_ckpt_<SID>)
The checkpoint process is a key database concept and it does three important things:
• It signals the database writer process (DBWn) at each checkpoint to write all modified buffers in the SGA buffer cache (the temporary location of DB blocks) to the database data files (the permanent and original location of DB blocks). Once this has been done, the online redo log files can be recycled.
• It updates the datafile headers with the checkpoint information (even if the file had no changed blocks).
• It updates the control files with the checkpoint information.
Set parameter LOG_CHECKPOINTS_TO_ALERT=TRUE to observe checkpoint start and end times in the database alert log.
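The parameter is dynamic, so it can be turned on without a restart; for example:

SQL> alter system set log_checkpoints_to_alert = TRUE;

System altered.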

What causes Checkpoint?
i) The most common cause is a redo log switch.
You can switch the logfile manually and then check this in the alert log.
SQL> alter system switch logfile;
System altered.
Alert.log entry>>
———————————————————
Mon Feb 03 14:24:49 2014
Beginning log switch checkpoint up to RBA [0x8.2.10], SCN: 1006600
Thread 1 advanced to log sequence 8 (LGWR switch)
Current log# 2 seq# 8 mem# 0: /u01/oracle/DB11G/oradata/brij/redo02.log
Mon Feb 03 14:24:49 2014
Archived Log entry 2 added for thread 1 sequence 7 ID 0x4c45a3de dest 1:
———————————————————

ii) Checkpoints can also be forced with the ALTER SYSTEM CHECKPOINT command. We generally checkpoint before taking backups. At some point the data currently in the buffer cache would be written to disk anyway; a user-invoked checkpoint forces that to happen right now.

iii) There are incremental checkpoints controlled by parameters such as FAST_START_MTTR_TARGET and other triggers that cause dirty blocks to be flushed to disk.

Frequent checkpoints usually mean the redo log files are small (and they also mean a slower system). But if you make the redo log files very large, the mean time to recover also increases. So a DBA should determine the log file size on the basis of various factors such as database type (DWH/OLTP etc.), transaction volume, and database behaviour as shown by alert log messages.
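A simple way to judge how often log switches (and hence log switch checkpoints) are happening is to group V$LOG_HISTORY by hour; a minimal sketch:

SQL> select to_char(first_time, 'YYYY-MM-DD HH24') hr,
  2         count(*) switches
  3  from   v$log_history
  4  where  first_time > sysdate - 1
  5  group  by to_char(first_time, 'YYYY-MM-DD HH24')
  6  order  by 1;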
CKPT actually took over one of the earlier responsibilities of LGWR: before database release 8.0, LGWR was responsible for updating the data file headers. But with increasing database sizes and numbers of data files, this job was given to the CKPT process.
B) THE LOG WRITER PROCESS (ora_lgwr_<SID>)
LGWR plays the important role of writing data changes from the redo log buffer to the online redo log files. Oracle's online redo log files record all changes made to the database in a sequential manner (the SCN is the counter).
Why do we multiplex only the redo log files and not the datafiles?
Oracle uses a "write-ahead" protocol, meaning the logs are written to before the datafiles are. Data changes aren't necessarily written to datafiles when you commit a transaction, but they are always written to the redo log. Before DBWR can write any of the changed blocks to disk, LGWR must flush the redo information related to those blocks. Therefore, it is critical to always protect the online logs against loss by ensuring they are multiplexed.

Redo log files come into play when a database instance fails or crashes. Upon restart, the instance will read the redo log files looking for any committed changes that need to be applied to the datafiles.
The log writer (LGWR) writes to the online redo files under the following circumstances:
• At each commit
• Every three seconds
• When the redo log buffer is one-third full or contains 1MB of cached redo log data
• When LGWR is asked to switch log files

Remember that the data will not reside in the redo buffer for very long. For these reasons, having an enormous (hundreds/thousands of megabytes) redo log buffer is not practical; Oracle will never be able to use it all since it pretty much continuously flushes it.
On our 11gR2 database, the size is about 6MB. The minimum size of the default log buffer is OS-dependent.
SQL> show parameter log_buffer;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_buffer                           integer     6266880
Or you can also see the memory value when the database is starting:
ORACLE instance started.
Total System Global Area 835104768 bytes
Fixed Size 2257840 bytes
Variable Size 536874064 bytes
Database Buffers 289406976 bytes
Redo Buffers 6565888 bytes <<<<< ~6MB
Database mounted.
Database opened.
LGWR does lots of sequential writes (a fast operation) to the redo log. This is an important distinction and one of the reasons that Oracle has a redo log and the LGWR process as well as the DBWn process. DBWn does lots of scattered writes (a slow operation). The fact that DBWn does its slow job in the background while LGWR does its faster job while the user waits gives us better overall performance. Oracle could just write database blocks directly to disk when you commit, but that would entail a lot of scattered I/O of full blocks, and this would be significantly slower than letting LGWR write the changes out sequentially.
Also, during a commit, the lengthiest operation is, and always will be, the activity performed by LGWR, as this is physical disk I/O. For that reason LGWR is designed so that, as you are processing and generating redo, it constantly flushes your buffered redo information to disk in the background. So when it comes to the COMMIT, there is not much left to do and the commit is faster.

REMEMBER that the purpose of the log buffer is to temporarily buffer transaction changes and get them quickly written to a safe location on disk (online redo log files), whereas the database buffer tries to keep blocks in memory as long as possible to increase the performance of processes using frequently accessed blocks.

C) THE ARCHIVER PROCESS (ora_arc<n>_<SID>)

Although an optional process, it should be considered mandatory for all production databases!
The job of the ARCn process is to copy an online redo log file to another location when LGWR fills it up, before it can be overwritten by new data.
The archiver background process is used only if you’re running your database in archivelog mode. These archived redo log files can then be used to perform media recovery. Whereas online redo log is used to fix the data files in the event of a power failure (when the instance is terminated), archived redo logs are used to fix data files in the event of a hard disk failure.
For example, if we lose the disk drive containing the system.dbf data file, we can go to our old backups, restore that old copy of the file, and ask the database to apply all of the archived and online redo logs generated since that backup took place. This will catch that file up with the rest of the data files in our database, and we can continue processing with no loss of data.
ARCH copies the online redo log a bit more intelligently than how the operating system command cp or copy would do: if a log switch is forced, only the used space of the online log is copied, not the entire log file.

CHECK DATABASE IS IN ARCHIVE MODE OR NOT:
SQL> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG
CHECK NUMBER OF ARCH PROCESS IN THE DATABASE:
SQL> select * from v$archive_processes where STATUS='ACTIVE';
PROCESS    STATUS     LOG_SEQUENCE STAT ROLES
---------- ---------- ------------ ---- ------------------------------------
         0 ACTIVE                0 IDLE
         1 ACTIVE                0 IDLE HEART_BEAT
         2 ACTIVE                0 IDLE NO_FAL NO_SRL
         3 ACTIVE                0 IDLE
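Another quick check from SQL*Plus shows the log mode, the archive destination and the current sequence numbers in one shot (the values below are only illustrative):

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     5
Next log sequence to archive   8
Current log sequence           8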

New features in Database 12c

Database 12c brought with it many new features. There are improvements in many areas as well as new concepts; most notable is the concept of the Container database and the Pluggable database.
Container Databases (CDB) and Pluggable Databases (PDB) bring a radical, major change to the core database architecture. Besides this major change, there are many other improvements in Database 12c. Some of those new features are listed below:
1) The limit of 63 ASM disk groups has been increased (to 511).


2) Oracle now also allows you to store the ASM password file in a shared ASM disk group.


3) The alter diskgroup command has been extended to include the scrub clause. This clause allows the ASM administrator to check ASM disk groups, individual disks in a disk group, or even a single ASM file for logical corruption in cases where ASM is responsible for protecting the data. If logical corruption is detected during the scrubbing process, ASM can also try to repair it using mirror copies of the extent.


4) In Oracle 12c, every node in the cluster does NOT need to have its own ASM instance. Oracle Flex ASM is the new set of features that addresses this situation by removing the strict requirement to have one ASM instance per cluster node. In this scenario, if an ASM instance to which databases are connected fails, the databases dynamically reconnect to another ASM instance in the cluster.


5) RAC crsctl & srvctl commands have a new option named -eval to evaluate commands before they are executed.


6) In 12c you can now move a partition online.


7) A new command, "alter database move datafile", makes it very simple to move data files from a file system into ASM while they are in use; earlier it was not possible to do this online (see the example at the end of this list).


8) Oracle has removed the Database Console in Oracle 12c.  It was introduced with Oracle 10g and it was not frequently used by DBAs.


9) Increase in the VARCHAR2 limit. Instead of the previous limit of 4000 bytes for this data type, it is now possible to store up to 32 kilobytes. This new behavior is controlled by the max_string_size initialization parameter. SQL*Plus won't be able to insert that much data, though, as its inherent limit is about 2500 characters for a column, so you will need some other tool for that.
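For example, the online datafile move from point 7 is a single statement; the path and disk group name below are only illustrative:

SQL> alter database move datafile '/u01/oradata/users01.dbf' to '+DATA';

Database altered.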

ORA-27300: OS system dependent operation:invalid_egid

In one of our EBS databases we faced an issue when connecting through SQL*Plus.
The error message that came up was as below:
[oracle@dbserver bin]$ sqlplus apps/apps
Copyright (c) 1982, 2011, Oracle. All rights reserved.
ERROR:
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 11 (asmadmin), current egid =
17 (oinstall)

The error was found to be related to the file permissions on the "oracle" binary and was corrected as below. The chmod value 6751 restores the setuid and setgid bits, so the binary runs with the privileges of its owner (oracle) and its group (asmadmin), which is how the oracle executable is normally installed.
CORRECTION
[oracle@dbserver bin]$ ls -ltr oracle
 -rwxrwxrwx 1 oracle asmadmin 232514408 Mar 7 06:53 oracle

[oracle@dbserver bin]$ chmod 6751 $ORACLE_HOME/bin/oracle

[oracle@dbserver bin]$ ls -ltr oracle
 -rwsr-s--x 1 oracle asmadmin 232514408 Mar 7 06:53 oracle

TESTING
[oracle@dbserver bin]$ sqlplus apps/apps

Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
 Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
 With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
 Data Mining and Real Application Testing options
SQL> exit

Create Database instance on Oracle Database Cloud

Creating a new database instance from Oracle Database Cloud Console is a very simple and straightforward task. Just a few clicks will give you a running database in less than 45 minutes.
Below is step-by-step procedure to install an Enterprise database version on Oracle Cloud.
From the Oracle Cloud console menu icon, click on "Database".

If this is your first time, a video or tutorial may be useful. Click on the Go to Console button and follow the steps in this guide.

Click on Create Service button.
Provide the service name that you want, the subscription type, the SSH public key from your machine, the software release and edition, and the billing frequency. We have chosen monthly billing frequency; since we have the trial cloud version it doesn't matter.

Fill in the required details as shown below. For our testing, we have not chosen any backup and recovery policies, but in real-life scenarios you will decide on and select backup and recovery policies too. Also, we have selected the entry-level basic Compute shape, as the intent is only to do some lightweight testing.
Other details are self-explanatory.

After all fields have been correctly entered, review the Service Details confirmation page and click on Create.

You should now see your new Database Cloud Service instance being created. The creation of the DBCS instance will take approximately 60 minutes. While your DBCS instance is being created, you can monitor the progress by clicking on the In Progress link, and view the current status.

Once the service is created, click on Database instance from the list of Database Services and write down the Public IP address. You will need this later to connect using SSH.

That’s it. Your database instance is created and is ready for use.
You can verify your database by connecting to the compute node ip address mentioned above and checking the database services.
For security, the Oracle Database Cloud Service is locked down. Only the SSH protocol and its port are enabled, and that still requires the use of public/private key for you to connect. To access the Database Cloud Service by your application you will need to enable additional ports. You can do this by using the Compute Cloud Service console and creating or re-using protocol definitions and Security Access Rules.
We will be connecting by Putty using public/private key that we created earlier.
Using username "opc".
Authenticating with public key "rsa-key-brijesh"
Passphrase for key "rsa-key-brijesh":
[opc@brijesh ~]$
[opc@brijesh ~]$ ps -ef|grep -i pmon
opc 1601 1562 0 23:57 pts/0 00:00:00 grep -i pmon
oracle 10016 1 0 21:24 ? 00:00:00 ora_pmon_brijesh
[opc@brijesh ~]$ sudo su - oracle
[oracle@brijesh ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Fri Dec 30 23:58:14 2016
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select name,open_mode from v$database;

NAME OPEN_MODE
--------- --------------------
BRIJESH READ WRITE
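If you also want to confirm that the database service is registered with the listener on the compute node (assuming the default listener configuration), a quick additional check is:

[oracle@brijesh ~]$ lsnrctl status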

Managing and Monitoring Oracle Cloud Database

In the last post we created an Oracle cloud database by following a very simple process on the Oracle Cloud Console. We verified, by connecting to the compute node, that the database was running normally.
In this post we will access some GUI-based Oracle Database tools which can be used to manage/monitor the cloud database easily.
Below are the steps that we will follow in this post
STEP 1 : ACTIVATE EM EXPRESS SERVICES ON DATABASE
STEP 2 : OPEN REQUIRED PORTS TO ACCESS DATABASE APPLICATIONS
      OPTION 1: SET UP SSH TUNNEL FOR ACCESSING REQUIRED PORTS 
      OPTION 2: OPEN PORTS USING NETWORK SECURITY RULES
STEP 3: ACCESS EM DATABASE 12C CONSOLE
STEP 4: ACCESS DBaaS MONITOR CONSOLE
STEP 5: ACCESS APPLICATION EXPRESS CONSOLE

STEP 1 : ACTIVATE EM EXPRESS SERVICES ON DATABASE

We will activate the Enterprise Manager Express Database Console on both our container database named "BRIJESH" (port 5502) and the PDB1 pluggable database (port 5503). For the container database "BRIJESH" we also activate port 5500 for HTTPS access.
[oracle@brijesh admin]$ sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Sat Dec 31 01:26:14 2016
Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> EXEC DBMS_XDB.SETHTTPPORT(5502);
PL/SQL procedure successfully completed.

SQL> exec DBMS_XDB_CONFIG.SETHTTPSPORT(5500);
PL/SQL procedure successfully completed.

SQL> alter session set container=PDB1;
Session altered.

SQL> EXEC DBMS_XDB.SETHTTPPORT(5503);
PL/SQL procedure successfully completed.

STEP 2 : OPEN REQUIRED PORTS TO ACCESS DATABASE APPLICATIONS

For security, the Oracle Database Cloud Service is locked down. Only the SSH protocol and its port are enabled, and that still requires the use of public/private key for you to connect. To access the Database Cloud Service by your application you will need to enable additional ports. You can do this by using the Compute Cloud Service console and creating or re-using protocol definitions and Security Access Rules. Another method is to use an SSH tunnel.
We will explore both methods in this post. You can choose either of the options below, or try one, complete the testing, then come back to this step and try the other option too.

STEP 2 – OPTION 1: SET UP SSH TUNNEL FOR ACCESSING REQUIRED PORTS

A. Record the IP address of the Database Cloud Service.
Click on your database instance from the list of Database Services and note down the public IP Address.
B. Run Putty and create tunneling
The step below assumes that the client is a Windows machine (like our Windows 8.1 laptop).
In this step you will run Putty and set up the SSH tunnel and port forwarding. By doing this we will access the management tools over the SSH connection, which is authenticated and encrypted, connecting to the remote server (Cloud) as though we were connecting to the local machine.
Run Putty. In the Session Category, enter the IP address of your cloud database host. Select SSH for Connection Type. Enter Port 22
From the Connection category, expand SSH and select Auth, and browse for your private key that you created with PuttyGen.
Now select Tunnels and add all the ports below, one by one.
// Database Cloud Service Monitor and APEX

New Forwarded Port = 443
Destination = 129.152.145.230:443
Click Add

// Database SQL Net Access

New Forwarded Port = 1521
Destination = 129.152.145.230:1521
Click Add

// Enterprise Manager Express

New Forwarded Port = 5500
Destination = 129.152.145.230:5500
Click Add

// Enterprise Manager Express CDB

New Forwarded Port = 5502
Destination = 129.152.145.230:5502
Click Add

// Enterprise Manager Express PDB1

New Forwarded Port = 5503
Destination = 129.152.145.230:5503
Click Add
Add all ports like below:
To save the settings, go back to Session Category. Type a name for your Saved Session and click Save.
You have now created a configuration for an SSH Tunnel with Local Port Forwarding to the Cloud. Click Open to open the connection to your database.
Note that you need to keep your terminal window open to keep the SSH tunnel session alive, so that you are able to access the management consoles through it.
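If you prefer a command-line SSH client over Putty (for example, the OpenSSH client on Linux or macOS), the same local port forwarding can be set up with a single command. A sketch assuming your private key is in ~/.ssh/id_rsa and using the same public IP address as above:

ssh -i ~/.ssh/id_rsa opc@129.152.145.230 \
    -L 443:localhost:443   -L 1521:localhost:1521 \
    -L 5500:localhost:5500 -L 5502:localhost:5502 -L 5503:localhost:5503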

STEP 2 – OPTION 2: OPEN PORTS USING NETWORK SECURITY RULES

To gain access to the consoles used by the Database Cloud Service you can also open up the port on which the console is listening without creating the SSH tunnel.
In this step you will open the port 5500 for Enterprise Manager Express using a pre-created Security Rule and Security Application from the Compute Cloud Service.
Go to the Compute Cloud Service dashboard. Click on Network > Security Rules.
Search for the dbexpress security rule, click on the action menu on the right side, and then click 'Update'.
Change Status to Enabled and click Update.
Enabling this security rule will open the required port and you will be able to access the EM console at the link below.
https://129.152.145.230:5500/em/login
Similar to the rule allowing the "EM console", there are security rules for allowing the "Apex Console" and the "dbaas_monitor console". You can enable those security rules and access all three of these consoles, which we discuss in the next steps.

STEP 3: ACCESS EM DATABASE 12C CONSOLE

Open a web browser window on your machine, and navigate to
http://localhost:5502/em for the Container Database
Log in with sys credentials as sysdba.
You can monitor Pluggable database PDB1 from http://localhost:5503/em
Enter the following URL to access the DBaaS Enterprise Manager Console for our container database:
https://localhost:5500/em

STEP 4: ACCESS DBaaS MONITOR CONSOLE

Now access the DBaaS Monitor page using
https://localhost:8083/dbaas_monitor
You may receive a warning that This Connection is Untrusted. If you receive this warning, expand the I Understand the Risks section and click on the Add Exception button. You will be prompted with an Add Security Exception dialog. Click on the Confirm Security Exception button:
When prompted, enter dbaas_monitor for the User Name, and for the Password enter the system password that you set at the time of database creation.
Once connected to the Database Monitor Console, you will be able to see the console and its metrics.
The DBaaS Monitor console, as you can see, is a very simple-looking GUI tool, and non-technical staff will probably love it because of its easy readability and its ability to present technical details/data in graphical form. It very neatly shows the database/storage/process/memory status, backup status, tablespace disk space status, current sessions, current waits, and OS-level memory/storage/process/CPU usage.
This tool also gives you access to stop/start the container/pluggable database, stop/start the listener, and modify the dynamic init parameters.
It does not appear to have any email alerting functionality like OEM 12c/13c has.

STEP 5: ACCESS APPLICATION EXPRESS CONSOLE

Enter the following URL into a browser window to access the Apex console
https://localhost/apex/pdb1/
Once the Apex login window is displayed, enter the following and click on Sign In:
Workspace: internal
Username: ADMIN
Password: enter the password that was set up at the time of database creation
You are now connected to Apex. This 'Apex', as you can see, is an enhanced version of the old Apex, and here it is also used to configure and manage the DBaaS tools that the Oracle Cloud provides.
All three tools that we discussed above help administrators manage/monitor the database in the cloud with ease. The EM console and Apex were available earlier too with the Oracle database, and most DBAs are already familiar with these tools. OEM 12c or 13c remains a great tool to achieve everything these tools do.

Transparent Data Encryption (TDE) – Oracle Database Cloud

Transparent Data Encryption (TDE) has been a standard feature of the Oracle database for quite a long time. Since databases in the cloud are becoming increasingly popular, this feature has taken on great importance, keeping in mind that the primary concern administrators have about keeping data on cloud (remote) servers is security.
Oracle Advanced Security Transparent Data Encryption (TDE) stops would-be attackers from bypassing the database and reading sensitive information from storage by enforcing data-at-rest encryption in the database layer. In this post we are going to demonstrate how Transparent Data Encryption ensures that the data stored in the Oracle Database is protected and cannot be accessed by malicious users who have gained access at the OS level.
We already have 12c database service created in the Oracle Cloud for this example. Our goal is to use Transparent Data Encryption (TDE) to protect sensitive data by moving a table from an unprotected tablespace to a new tablespace encrypted with the TDE features of the Oracle Database.

STEP 1: CONNECT TO ORACLE DATABASE CLOUD SERVER

Since we have a Windows client, we will connect through Putty.

STEP 2:  CONNECT TO YOUR PLUGGABLE DATABASE

Now open a session to the PDB using sqlplus. This can be done directly using the service for pdb1, or by connecting first to the CDB and then switching containers as shown below.
[oracle@brijesh ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Sat Dec 31 17:54:12 2016
Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> alter session set container = pdb1;

Session altered.


STEP 3: CREATE SAMPLE UNENCRYPTED TABLE

CREATE TABLE SCOTT.ACCT (
ACCTNO NUMBER(2,0),
ANAME VARCHAR2(14),
CITY VARCHAR2(13))
TABLESPACE TEST;

insert into SCOTT.ACCT (ACCTNO,ANAME,CITY) values (11,'ACCOUNT1','CHICAGO');
insert into SCOTT.ACCT (ACCTNO,ANAME,CITY) values (22,'ACCOUNT2','INDIANAPOLIS');
insert into SCOTT.ACCT (ACCTNO,ANAME,CITY) values (33,'ACCOUNT3','CALIFORNIA');
insert into SCOTT.ACCT (ACCTNO,ANAME,CITY) values (44,'ACCOUNT4','BOSTON');
commit;

-- also verify the tablespace status

COL TABLE_NAME FORMAT A25
COL TABLESPACE_NAME FORMAT A20
select TABLESPACE_NAME, STATUS, ENCRYPTED, CONTENTS from dba_tablespaces;

TABLESPACE_NAME       STATUS  ENC    CONTENTS
-------------------- --------- --- ---------------------
SYSTEM               ONLINE    NO    PERMANENT
SYSAUX               ONLINE    NO    PERMANENT
UNDOTBS1             ONLINE    NO    UNDO
TEMP                 ONLINE    NO    TEMPORARY
USERS                ONLINE    YES   PERMANENT
TEST                 ONLINE    NO    PERMANENT

As can be seen above, the "TEST" tablespace is an unencrypted tablespace, with ENCRYPTED showing "NO".

STEP 4: QUERY SENSITIVE DATA FROM DATABASE AND OS LEVEL DATA FILE

From Database SQL Query:
SQL> set lines 132
SQL> select * from SCOTT.ACCT;

ACCTNO          ANAME          CITY
---------- -------------- -------------
 11          ACCOUNT1        CHICAGO
 22          ACCOUNT2        INDIANAPOLIS
 33          ACCOUNT3        CALIFORNIA
 44          ACCOUNT4        BOSTON

From OS level datafile
$ strings /u02/app/oracle/oradata/brijesh/PDB1/TEST1.DBF
}|{z
6BRIJESH
TEST

--
-

TYPE
WORKSPACE_IDENTIFIER
 POOL_NAME
J&--------------------------------------
ACCOUNT4
BOSTON,
ACCOUNT3
CALIFORNIA,
ACCOUNT2
INDIANAPOLIS,
ACCOUNT1
CHICAGO
As you can see, the data is visible just by running the simple strings OS command on the datafile. This essentially means that anybody having access to the OS can read the data without even connecting to the database.

STEP 5: CREATE ENCRYPTED TABLESPACE USING TDE

In this step, we are going to create a new tablespace (TEST_ENCRYPTED) and let it be transparently encrypted by the cloud defaults.
SQL> CREATE TABLESPACE "TEST_ENCRYPTED" DATAFILE '/u02/app/oracle/oradata/brijesh/PDB1/TEST_ENCRYPTED1.DBF'
     SIZE 10M LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

Tablespace created.

SQL> select TABLESPACE_NAME, STATUS, ENCRYPTED, CONTENTS from dba_tablespaces;

TABLESPACE_NAME       STATUS   ENC    CONTENTS
-------------------- --------- --- ---------------------
SYSTEM               ONLINE   NO      PERMANENT
SYSAUX               ONLINE   NO      PERMANENT
UNDOTBS1             ONLINE   NO      UNDO
TEMP                 ONLINE   NO      TEMPORARY
USERS                ONLINE   YES     PERMANENT
TEST                 ONLINE   NO      PERMANENT
TEST_ENCRYPTED       ONLINE   YES     PERMANENT

7 rows selected.
As you can see, the new tablespace named TEST_ENCRYPTED is encrypted, online and permanent. Note that we didn't explicitly say that we wanted to encrypt the tablespace, but Oracle by default created it as an encrypted tablespace.
It is important to note that the power of the Oracle Cloud allows you to have the information encrypted and protected as soon as you start to use the Oracle Database Cloud Service databases.
All new tablespaces you create in a Database as a Service database are encrypted by default. The tablespaces that were created when you created the service instance are not encrypted, though. In an Oracle Database 12c database, this includes the tablespaces in the root (CDB$ROOT), the seed (PDB$SEED), and the PDB that is created when the database is created.

Controlling Default Tablespace Encryption

The ENCRYPT_NEW_TABLESPACES initialization parameter controls default encryption of new tablespaces.
In Database as a Service databases, this parameter is set to CLOUD_ONLY.

Values of this parameter are as follows:

ALWAYS Any tablespace created will be transparently encrypted with the AES128 algorithm unless
       a different algorithm is specified on the ENCRYPTION clause.

CLOUD_ONLY Tablespaces created in a Database Cloud Service database will be transparently encrypted
           with the AES128 algorithm unless a different algorithm is specified on the ENCRYPTION clause. 
           For non-Database Cloud Service databases, tablespaces will only be encrypted if the ENCRYPTION 
           clause is specified. This is the default value. 

DDL  Tablespaces are not transparently encrypted and are only encrypted if the ENCRYPTION clause is specified.
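To see which behaviour is active and, if needed, to choose a specific algorithm rather than the default, you can check the parameter and use an explicit ENCRYPTION clause when creating a tablespace. A small sketch; the tablespace name and datafile path here are only illustrative:

SQL> show parameter encrypt_new_tablespaces

SQL> CREATE TABLESPACE test_aes256
  2  DATAFILE '/u02/app/oracle/oradata/brijesh/PDB1/test_aes2561.dbf' SIZE 10M
  3  ENCRYPTION USING 'AES256'
  4  DEFAULT STORAGE (ENCRYPT);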

STEP 6: MOVE UNENCRYPTED TABLE TO ENCRYPTED TABLESPACE

In this step we will move the SCOTT.ACCT table to our newly created encrypted tablespace "TEST_ENCRYPTED".
SQL> SELECT TABLE_NAME, TABLESPACE_NAME FROM DBA_TABLES WHERE TABLE_NAME ='ACCT';

TABLE_NAME                TABLESPACE_NAME
------------------------- --------------------
ACCT                      TEST

SQL> ALTER TABLE "SCOTT"."ACCT" MOVE TABLESPACE "TEST_ENCRYPTED";

Table altered.

SQL> SELECT TABLE_NAME, TABLESPACE_NAME FROM DBA_TABLES WHERE TABLE_NAME ='ACCT';

TABLE_NAME                TABLESPACE_NAME
------------------------- --------------------
ACCT                      TEST_ENCRYPTED

The table has now been moved to the newly created encrypted tablespace.

STEP 7: QUERY SENSITIVE DATA FROM DATABASE AND OS LEVEL DATA FILE

From Database SQL Query
set lines 132
select * from SCOTT.ACCT;

ACCTNO         ANAME        CITY
---------- -------------- -------------
 11            ACCOUNT1     CHICAGO
 22            ACCOUNT2     INDIANAPOLIS
 33            ACCOUNT3     CALIFORNIA
 44            ACCOUNT4     BOSTON

From OS level data file
[oracle@brijesh ~]$ strings /u02/app/oracle/oradata/brijesh/PDB1/TEST_ENCRYPTED1.DBF |grep -i boston
[oracle@brijesh ~]$ <no results received>

As you can see, after moving the table to the newly created encrypted tablespace, the data cannot be seen through OS-level commands without connecting to the database.
Although TDE is not a new feature introduced with the Oracle database cloud, its importance has increased in the cloud, and it should be implemented.

SQL Net Encryption – Oracle Database Cloud

TDE, which we discussed in the previous post, is the encryption mechanism for data at rest. In this post, we are going to discuss the security of data in motion: we will check that network encryption is enabled by default in the Database Cloud Services, and see the risk to your information if it is not enabled. If you are in an on-premise environment, you have access to this feature with Oracle Database Enterprise Edition as part of the main core capabilities.
Also, from version 12c onwards, there is no need for an additional option license to use it.
Note that SQL*Net encryption is not a new feature of the Oracle Cloud Database. It is an old feature, but its importance has grown because of the trend of Oracle databases being moved onto cloud servers.
Below are the steps that we will follow to show the capabilities of SQL Net Encryption:

STEP 1: Create a SSH connection as opc user

We will be using Putty to connect to the IP address of our Cloud Database with service name 'brijesh'.

STEP 2: Check network encryption configuration in sqlnet.ora

Below is the content of our sqlnet.ora in the Oracle cloud database:
[oracle@brijesh admin]$ cat /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora

SQLNET.ENCRYPTION_SERVER = required
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA1)
SQLNET.CRYPTO_CHECKSUM_SERVER = required
ENCRYPTION_WALLET_LOCATION = (SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/admin/brijesh/tde_wallet)))
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256, AES192, AES128)
NAMES.DIRECTORY_PATH = (TNSNAMES, EZCONNECT)
SQLNET.WALLET_OVERRIDE = FALSE
SQLNET.EXPIRE_TIME = 10
SSL_VERSION = 1.0
WALLET_LOCATION = (SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/admin/brijesh/db_wallet)))
Currently we have encryption enabled in our database.

STEP 3: Execute tcpdump to monitor network traffic

Now switch to root so that you can execute the tcpdump command.
[opc@brijesh ~]$ sudo -s
[root@brijesh opc]# id
uid=0(root) gid=0(root) groups=0(root)
[root@brijesh opc]# /usr/sbin/tcpdump -Xs 1518 -i lo port 1521
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 1518 bytes
Keep this window open so that the tcpdump output remains visible.

STEP 4: Query some data and check traffic in tcpdump

Connect to your pluggable database and query some data.
[oracle@brijesh ~]$ sqlplus system/<password>@PDB1
SQL*Plus: Release 12.2.0.1.0 Production on Sat Dec 31 19:40:51 2016
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Last Successful login time: Sat Dec 31 2016 03:44:53 +00:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select * from SCOTT.ACCT;

ACCTNO       ANAME          CITY
---------- -------------- -------------
 11          ACCOUNT1     CHICAGO
 22          ACCOUNT2     INDIANAPOLIS
 33          ACCOUNT3     CALIFORNIA
 44          ACCOUNT4     BOSTON
Check the session where you left tcpdump running and you will notice, as shown below, that the network traffic is encrypted and nothing is readable in the rightmost column:
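Besides tcpdump, you can also check from inside a database session whether native network encryption is active for that session: the connection banner in V$SESSION_CONNECT_INFO lists the encryption and checksumming services in use (with encryption on you should see AES and SHA1 adapter lines):

SQL> select network_service_banner
  2  from   v$session_connect_info
  3  where  sid = sys_context('USERENV', 'SID');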

STEP 5: Disable network encryption

The sqlnet.ora file controls the database encryption settings. To disable network encryption temporarily, we will rename the sqlnet.ora file so that the Oracle database processes can't find it.
[oracle@brijesh ~]$ cd $ORACLE_HOME/network/admin
[oracle@brijesh admin]$ mv sqlnet.ora sqlnet.ora_temp
[oracle@brijesh admin]$ ls -tlr sqlnet*
-rw-r--r-- 1 oracle oinstall 532 Dec 30 21:24 sqlnet.ora_temp

STEP 6: Query some data again and check the tcpdump output


[oracle@brijesh admin]$ sqlplus system/Welcome_123@PDB1
SQL*Plus: Release 12.2.0.1.0 Production on Sat Dec 31 19:49:43 2016
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Last Successful login time: Sat Dec 31 2016 19:40:51 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select * from SCOTT.ACCT;

ACCTNO        ANAME        CITY
---------- -------------- -------------
 11          ACCOUNT1      CHICAGO
 22          ACCOUNT2      INDIANAPOLIS
 33          ACCOUNT3      CALIFORNIA
 44          ACCOUNT4      BOSTON

Check the tcpdump output again and you can now see the table data clearly visible in readable form in the last column, confirming that the data is unprotected with network encryption disabled:

STEP 7: Enable back the network encryption

We will enable network encryption again by simply restoring the sqlnet.ora file that we renamed earlier.
All new sessions will be protected again, without the need to restart any services on the database.
[oracle@brijesh ~]$ cd $ORACLE_HOME/network/admin
[oracle@brijesh admin]$ mv sqlnet.ora_temp sqlnet.ora
[oracle@brijesh admin]$ ls -tlr sqlnet*
-rw-r--r-- 1 oracle oinstall 532 Dec 30 21:24 sqlnet.ora

This small example shows how important it is to keep your sqlnet.ora in shape, with all required encryption parameters in place.

Data encryption related Oracle parameters

We use Oracle native network encryption which gives us the ability to encrypt database connections, without the configuration overhead of TCP/IP and SSL/TLS and without the need to open and listen on different ports.
Below are the extra parameters we have for the encryption/data integrity
ENCRYPTION_WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = /etc/wallets/<SID>/)))
sqlnet.encryption_server=required
sqlnet.crypto_checksum_server=required
Parameter 1 is for securing data at rest.
Parameters 2 and 3 are related to encryption and integrity for data in motion.
Note that there can be many other parameters and parameter combinations. More details can be found in this Oracle link: https://docs.oracle.com/cd/B28359_01/network.111/b28530/asoappa.htm#g638255
Details of each parameter that we are using:
1) ENCRYPTION_WALLET_LOCATION
This is the "Oracle Advanced Security Transparent Data Encryption" parameter.
Transparent data encryption helps protect data stored on media in the event that the storage media or data file gets stolen. Transparent data encryption enables you to encrypt sensitive data, such as credit card numbers, stored in table columns.
2) SQLNET.ENCRYPTION_SERVER PARAMETER
This is the "Oracle Advanced Security Network Encryption" parameter.
This parameter specifies the desired encryption behavior when a client or a server acting as a client connects to this server.
Valid Values for this parameter:
ACCEPTED, REJECTED, REQUESTED, REQUIRED
=> ACCEPTED – The server does not request the use of encryption, but goes along if the client requests it. Compatible client parameters are REJECTED, REQUESTED, and REQUIRED.
=> REJECTED – The server does not support the use of encryption at all. Compatible client parameters are REJECTED, ACCEPTED, and REQUESTED.
=> REQUESTED – The server prefers to use encryption, but does not force the issue if the client rejects the use of encryption. Compatible client parameters are ACCEPTED, REQUESTED, and REQUIRED.
=> REQUIRED – The server demands the use of encryption, and does not connect otherwise. Compatible client parameters are ACCEPTED, REQUESTED, and REQUIRED.
3) SQLNET.CRYPTO_CHECKSUM_SERVER PARAMETER
This is the "Oracle Advanced Security Network Data Integrity" parameter.
Note that the Advanced Security data integrity functionality is separate from network encryption. Data encryption and integrity algorithms are selected independently of each other: encryption can be activated without integrity, and integrity can be activated without encryption.
This parameter specifies the desired data integrity behavior when a client or another server acting as a client connects to this server.
Valid Values for this parameter:
ACCEPTED, REJECTED, REQUESTED, REQUIRED
=> ACCEPTED – The server does not request the use of checksums, but goes along if the client requests them. Compatible client parameters are REJECTED, REQUESTED, and REQUIRED.
=> REJECTED – The server does not support the use of checksums at all. Compatible client parameters are REJECTED, ACCEPTED, and REQUESTED.
=> REQUESTED – The server prefers to use checksums, but does not force the issue if the client rejects their use. Compatible client parameters are ACCEPTED, REQUESTED, and REQUIRED.
=> REQUIRED – The server demands the use of checksums, and does not connect otherwise. Compatible client parameters are ACCEPTED, REQUESTED, and REQUIRED.
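For completeness, the client side has matching parameters that can be placed in the client's sqlnet.ora. An illustrative client-side configuration mirroring our server settings might look like this:

SQLNET.ENCRYPTION_CLIENT = required
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256, AES192, AES128)
SQLNET.CRYPTO_CHECKSUM_CLIENT = required
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (SHA1)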

To Find session details using SID. set verify off col sid format 99999 col machine format a10 col program format a25 trunc col username form...