Tuesday 9 March 2021

Optimizer Access Paths using Indexes

    Introduction

    In this tutorial, you will examine the optimizer access paths used in the following cases (scenarios):

    • Case 1: With and Without Index
    • Case 2: Comparing Single Column Index Access
    • Case 3: Concatenated Index
    • Case 4: Bitmap Index Access
    • Case 5: Index Access Only
    • Case 6: Bitmap Index Access Only
    • Case 7: B*Tree Index Access Only
    • Case 8: Function Based Index

      Hardware and Software Requirements

      The following is a list of hardware and software requirements:

      • Oracle Database 11g Enterprise Edition with access to the Tuning and Diagnostic management packs and with the sample schema installed.
      • Oracle SQL Developer 3.2.

      Prerequisites

      • Download and install Oracle SQL Developer 3.2.

      Note: For best results, use Firefox or Chrome browsers to view this tutorial.

       

      Creating a Database Connection

        The first step to using the optimizer access path is to create a database connection.
        Perform the following steps to create a database connection:

        Click the SQL Developer 3.2 icon on your Desktop to start SQL Developer.


        Oracle SQL Developer 3.2 opens.



        In the Connections navigator, right-click Connections and select New Connection.
        The New / Select Database Connection dialog opens. Enter the connection details as follows and click Test.

        Connection Name: hr_conn
        Username: hr
        Password: <your_password> (select Save Password)
        Hostname: localhost
        SID: <your_own_SID> 


        Check for the status of the connection on the left-bottom side (above the Help button). It should read Success. Click Save and then click Connect.



       

      Case 1: With and Without Index

        To view the difference in performance with and without an index, perform the following steps:

        Right-click hr_conn and select Open SQL Worksheet.

        Drop all the indexes on the employees table except the primary key and unique key indexes (*_PK, *_UK).

        drop index EMP_JOB_IX;
        drop index EMP_NAME_IX;
        drop index EMP_MANAGER_IX;
        drop index EMP_DEPARTMENT_IX;
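
        If you are not sure which indexes currently exist on the table, a quick dictionary query (not part of the original tutorial steps) lists them before you decide what to drop:

        select index_name, uniqueness
        from user_indexes
        where table_name = 'EMPLOYEES';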

        Autotrace the query.

        select * from employees where department_id = 10;

        Observe the output. You will notice there is no index on the department_id column of the employees table.

        The only possibility for the optimizer is to use a full scan to retrieve the rows. You can see that the full table scan takes a long time.
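
        If you prefer SQL*Plus to the SQL Developer Autotrace button, a roughly equivalent check is the following sketch (assuming the PLUSTRACE role and a PLAN_TABLE are available to HR):

        set autotrace traceonly explain statistics
        select * from employees where department_id = 10;
        set autotrace off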


        To enhance the performance of the query in Step 1, create an index.

        create index emp_idx_dept_no on hr.employees(department_id) nologging compute statistics;


        Autotrace the query in Step 1 again and observe the output.


        You can see a significant improvement in performance. Note the difference in time, cost and physical reads.
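
        To confirm the access path explicitly, you can also display the plan with the standard DBMS_XPLAN package; the plan should now show an index range scan on EMP_IDX_DEPT_NO:

        explain plan for select * from employees where department_id = 10;
        select * from table(dbms_xplan.display);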

       

      Case 2: Comparing Single Column Index Access

        To compare single column index access, perform the following steps:

        Connect to the hr schema. Drop all the indexes on the employees table except the primary key and unique key indexes (*_PK, *_UK).

        drop index EMP_IDX_DEPT_NO;

        Autotrace the query:

        SELECT /*+ FULL(e)*/e.*
        FROM employees e
        WHERE department_id = 10
        AND salary > 1000;

        Create two B*Tree indexes on the department_id and salary columns of the employees table.

        CREATE INDEX emp_dept_id_idx ON employees(department_id) NOLOGGING COMPUTE STATISTICS;
        CREATE INDEX emp_sal_idx ON employees(salary) NOLOGGING COMPUTE STATISTICS;

        To start monitoring the use of the indexes on the employees table, run the following statements:

        ALTER INDEX emp_dept_id_idx MONITORING USAGE;
        ALTER INDEX emp_sal_idx MONITORING USAGE;

        Autotrace the query in Step 2:

        SELECT /*+ FULL(e)*/e.*
        FROM employees e
        WHERE department_id = 10
        AND salary > 1000;

        You will notice that the optimizer still performs a full table scan, even though the indexes now exist.

        The FULL(e) hint forces the full table scan, so the new indexes are not used for this query.

        Now Autotrace the query:

        SELECT /*+ INDEX_COMBINE(e)*/e.*
        FROM employees e
        WHERE department_id = 10
        AND salary > 1000;

        You will notice that this time the optimizer uses multiple indexes and combines them to access the table. The cost is lower than the full table scan.
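
        Because index monitoring was enabled earlier, you can also check whether the optimizer actually touched each index; a sketch against the 11g V$OBJECT_USAGE view (run as HR):

        SELECT index_name, monitoring, used
        FROM v$object_usage
        WHERE index_name IN ('EMP_DEPT_ID_IDX', 'EMP_SAL_IDX');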


       

      Case 3: Concatenated Index

        To view the performance of a query using a concatenated index, perform the following steps:

        Connect to the hr schema. Drop all the indexes on the employees table except the primary key index.

        drop index EMP_DEPT_ID_IDX;
        drop index EMP_SAL_IDX;

        Create a concatenated index on the department_id, salary, and hire_date columns of the employees table.

        CREATE INDEX emp_dept_id_sal_hiredt_idx
        ON employees(department_id,salary,hire_date)
        NOLOGGING COMPUTE STATISTICS;

        Autotrace the query:

        SELECT /*+INDEX(e)*/e.*
        FROM employees e
        WHERE department_id = 10
        AND salary > 1000
        AND hire_date between '13-JAN-07' AND '13-JAN-08';

        You will notice the optimizer uses the concatenated index, and the resulting cost is quite good.

        Autotrace the query:

        SELECT /*+INDEX(e)*/e.*
        FROM employees e
        WHERE department_id = 10
        AND salary > 1000;

        The query is quite similar to the previous step, but the predicate on HIRE_DATE is removed.

        The optimizer can still use the concatenated index.

        Autotrace the query:

        SELECT /*+INDEX(e)*/e.*
        FROM employees e
        WHERE salary > 1000
        AND hire_date between '13-JAN-07' AND '13-JAN-08';

        The leading part of the concatenated index is no longer a part of the query. However, the optimizer still uses the index by using a full index scan.
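
        When the leading column is missing from the predicates, you can also ask the optimizer to consider an index skip scan; a hedged sketch using the INDEX_SS hint (whether it is cheaper depends on the number of distinct department_id values):

        SELECT /*+ INDEX_SS(e emp_dept_id_sal_hiredt_idx) */ e.*
        FROM employees e
        WHERE salary > 1000
        AND hire_date between '13-JAN-07' AND '13-JAN-08';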

       

      Case 4: Bitmap Index

        To view the performance of a query using bitmap indexes, perform the following steps:

        Connect to the hr schema. Drop all the indexes on the employees table except the primary key index.

        drop index EMP_DEPT_ID_SAL_HIREDT_IDX;

        Create the following bitmap indexes:

        CREATE BITMAP INDEX emp_dept_id_bidx
        ON employees(department_id)
        NOLOGGING COMPUTE STATISTICS;

        CREATE BITMAP INDEX emp_sal_bidx
        ON EMPLOYEES(salary)
        NOLOGGING COMPUTE STATISTICS;

        CREATE BITMAP INDEX emp_hire_date_bidx
        ON employees(hire_date)
        NOLOGGING COMPUTE STATISTICS;

        Autotrace the query:

        SELECT /*+INDEX_COMBINE(e)*/e.*
        FROM employees e
        WHERE department_id = 10
        AND salary > 1000
        AND hire_date between '13-JAN-07' AND '13-JAN-08';

        You will notice that the optimizer combines all three bitmap indexes to answer this query.

        The cost is good: a little lower than the cost of the full table scan.
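
        If you want to confirm that the three indexes were created as bitmap rather than B*Tree indexes, a quick dictionary check is:

        SELECT index_name, index_type
        FROM user_indexes
        WHERE table_name = 'EMPLOYEES'
        AND index_name LIKE 'EMP%BIDX';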

       

      Case 5: Index Access Only

        To view the performance of a query using only index access, perform the following steps:

        Connect to the hr schema. Drop all the indexes on the employees table except the primary key index.

        drop index EMP_DEPT_ID_BIDX;
        drop index EMP_SAL_BIDX;
        drop index EMP_HIRE_DATE_BIDX;

        Create an index on the last_name and first_name columns of the employees table.

        CREATE INDEX emp_last_first_name_idx
        ON employees(last_name,first_name)
        NOLOGGING COMPUTE STATISTICS;

        Autotrace the query:

        SELECT e.last_name, e.first_name
        FROM employees e;

        You will observe that the optimizer can use the index to retrieve the entire select list without accessing the table itself. The cost is good.
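
        Because both columns in the select list are present in the index, the plan is typically an index full scan or fast full scan. As a variation (a sketch, not part of the original tutorial), you can explicitly request a fast full scan with the INDEX_FFS hint and compare the costs:

        SELECT /*+ INDEX_FFS(e emp_last_first_name_idx) */ e.last_name, e.first_name
        FROM employees e;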

       

      Case 6: Bitmap Index Access Only

        To view the performance of a query using only bitmap index access, perform the following steps:

        In the Connections navigator, right-click Connections and select New Connection.
        The New / Select Database Connection dialog opens. Enter the connection details as follows and click Test.

        Connection Name: sh
        Username: sh
        Password: <your_password> (select Save Password)
        Hostname: localhost
        SID: <your_own_SID> 


        Check for the status of the connection on the left-bottom side (above the Help button). It should read Success. Click Save and then click Connect.


        Drop all the indexes on the customers table except the primary key and unique key indexes (*_PK, *_UK).


        DROP INDEX customers_gender_bix ;
        DROP INDEX cust_cust_credit_limit_idx;
        DROP INDEX cust_cust_postal_code_bidx;
        DROP INDEX emp_first_name_sal_idx;

        Create a bitmap index on the cust_credit_limit column of the customers table.

        CREATE BITMAP INDEX cust_cust_credit_limit_bidx ON CUSTOMERS(cust_credit_limit)
        NOLOGGING COMPUTE STATISTICS;

        Autotrace the query:

        SELECT count(*) credit_limit
        FROM CUSTOMERS
        WHERE cust_credit_limit=2000;

        You will notice that though cust_credit_limit is not a selective column, the COUNT operation on its bitmap index is very efficient.

       

      Case 7: B*Tree Index Access Only

        To view the performance of a query using only B*Tree index access, perform the following steps:

        Connect to the sh schema. Drop all the indexes on the customers table except the primary key index.

        DROP INDEX cust_cust_credit_limit_bidx;

        Create a B*Tree index on the cust_credit_limit column of the customers table.

        CREATE  INDEX cust_cust_credit_limit_idx ON CUSTOMERS(cust_credit_limit)
        NOLOGGING COMPUTE STATISTICS;

        Autotrace the query:

        SELECT count(*) credit_limit
        FROM CUSTOMERS
        WHERE cust_credit_limit=2000;


        You will notice that the optimizer uses the B*Tree index; however this is less efficient compared to the corresponding bitmap index from the previous case.

       

      Case 8: Function Based Index

        To view the performance of a query using a function-based index, perform the following steps:

        Connect to the hr schema. Drop all the indexes on the employees table except the primary key index.

        drop index EMP_SAL_BIDX;

        Create a B*Tree index on the first_name column of the employees table.

        CREATE  INDEX emp_fname_idx ON employees(first_name)
        NOLOGGING COMPUTE STATISTICS;

        Autotrace the query:

        SELECT employee_id, department_id
        FROM EMPLOYEES
        WHERE LOWER(first_name) like 's%';

        You will notice that though there is an index, it cannot be used because its column is modified by a function.

        To enhance the performance of this query, you can create a function-based index:

        CREATE  INDEX emp_lower_fname_idx ON employees(LOWER(first_name));

        Autotrace the query in Step 3 again. You will notice the performance of the query is much better now.
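
        The function-based index above was created without COMPUTE STATISTICS. As an optional extra step (an assumption, not part of the original tutorial), you can gather statistics on it so the optimizer costs it accurately:

        EXEC DBMS_STATS.GATHER_INDEX_STATS(ownname => 'HR', indname => 'EMP_LOWER_FNAME_IDX');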

       

      Monday 17 June 2019

      Oracle Inventory - issues

      Purpose

      This note answers frequently asked questions about the Central Inventory (oraInventory) and the Local Inventory in Oracle RDBMS.

      Questions and Answers

      The Oracle Universal Installer inventory stores information about all Oracle software products installed in all the Oracle Homes on a host, provided the product was installed using Oracle Universal Installer.
      The inventory is organized as follows:

      Central Inventory (oraInventory)
      Local Inventory (Oracle Home inventory)

      CENTRAL INVENTORY

      What is the Central Inventory? (oraInventory)

      • The Central Inventory contains the information related to all Oracle products installed on a host.
      • Each Central Inventory consists of a file called inventory.xml, which lists all the Oracle Homes installed on the node.
      • For each Oracle Home, it also lists the Oracle Home name, home index, and nodes on which the home is installed.
      •  It also mentions if the home is an Oracle Clusterware home or a removed Oracle Home.

        This file is present in the following location:
        <central inventory location>/ContentsXML/inventory.xml
        (On Windows, the default location is C:\Program Files\Oracle\Inventory\ContentsXML\inventory.xml.)


      What is the Central Inventory Pointer File?

      • Every Oracle software installation has an associated Central Inventory where the details of all the Oracle products installed on a host are registered.
      • The Central Inventory is located in the directory that the inventory pointer file specifies.

        The following list shows the location (by default) of the inventory pointer file for various platforms:

        Linux And AIX — /etc/oraInst.loc
        Other Unix Platforms — /var/opt/oracle/oraInst.loc 


        Windows — The pointer is located in the registry key:
        HKEY_LOCAL_MACHINE\Software\Oracle\inst_loc


        The following shows example contents of an oraInst.loc file:

        inventory_loc=/home/oracle_db11g/product/11.1.0/db_1/oraInventory
        inst_group=oracle

      Can you keep a different Central Inventory for each Oracle software installation?

      On UNIX/Linux platforms, you can have a different Central Inventory for each Oracle software installation if you want to keep each installation unknown to the others.
      On Windows platforms, you cannot have multiple Central Inventory locations.

      Advantages and disadvantages of keeping a separate Central Inventory for each Oracle Home versus a shared Central Inventory:

      Advantages of a Central Inventory for each Oracle Home
      1) Re-creation of the Central Inventory is easy, as "attachHome" needs to be done only once. (Only applicable to 10.2 onwards.)
      2) Corruption of the Central Inventory will only affect that particular Oracle Home.
      3) Inventory operations on multiple Oracle Homes can be done in parallel.

      Disadvantages of a Central Inventory for each Oracle Home
      1) Each Central Inventory requires a separate inventory pointer, or the inventory pointer must be edited.
      2) Identifying the Central Inventory and mapping it to the correct Oracle Home may become difficult.

      Advantages of a shared Central Inventory
      1) Only one inventory pointer is required.
      2) Easy to identify and map the Oracle Home(s)

      Disadvantages of a shared Central Inventory
      1) Recreation of the Central Inventory may take more time.
      2) Corruption of the Central Inventory will affect all the Oracle Homes.
      3) The operations on the Central Inventory are performed through a locking mechanism.
      This implies that when an operation such as installation, upgrade, or patching occurs on an Oracle Home, these operations become blocked on other Oracle Homes that share the same Central Inventory.
      So Inventory operations on multiple Oracle Homes cannot be done in parallel.

      How to start the OUI/Opatch if the oraInst.loc file is not in the default location? ( Unix)

      On UNIX, if you do not want to use the Central Inventory located in the directory specified by the inventory pointer file, you can use the -invPtrLoc flag to specify another inventory pointer file.
      The syntax is as follows:

      ./runInstaller -invPtrLoc <Location_of_oraInst.loc>
      ./opatch -invPtrLoc <Location_of_oraInst.loc>

      NOTE: If the contents of the oraInst.loc file are empty, Oracle Universal Installer prompts you to create a new inventory. This will create a new Central Inventory for the newly installed ORACLE_HOME.

      Can you remove or manually edit the inventory.xml file?

      • No. Oracle recommends that you do not remove or manually edit this file as it could affect installation and patching.
      • OUI lists all the Oracle Homes installed on the node using the inventory.xml file. So if the inventory.xml file is edited, this may lead to inventory corruption.

        NOTE : Removing or manually editing ANY file in the oraInventory directory is not supported by Oracle.

      You have lost/corrupted the Central Inventory but have a valid Oracle Home(s). What do you do?

      Oracle RDBMS 9.2.0.x.x and 10.1.0.x.x
      ============================
      In Oracle 9.2 / 10.1 you can restore the Central Inventory from backup or you can clone the Oracle Home(s) from an identical installation.

      Oracle RDBMS 10.2.0.x.x and 11.1.x.x.x
      ===================================
      Oracle Universal Installer allows you to set up the Central Inventory or register an existing ORACLE_HOME with the Central Inventory in case of corruption or loss. Execute runInstaller with the -attachHome flag from $ORACLE_HOME/oui/bin.
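
      A typical invocation looks like the following sketch (the Oracle Home path and home name are placeholders; substitute your own values):

      $ cd $ORACLE_HOME/oui/bin
      $ ./runInstaller -silent -attachHome ORACLE_HOME="/u01/app/oracle/product/11.2.0/db_1" ORACLE_HOME_NAME="OraDb11g_home1"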

      Refer Metalink Note 556834.1 Steps To Recreate Central Inventory (oraInventory) In RDBMS Homes

      How can you remove the Central Inventory after all the Oracle Homes on a host are removed?

      Refer Metalink Note 473373.1 How To Remove The Central Inventory After All The Oracle Homes Are Removed?

      Can you use the jar command to back up the inventory/Oracle Home?

      Do not use the jar command to zip the Oracle Home, as this causes file permissions to be lost.

      How can you attach and detach Oracle Homes from the Central Inventory ?

      Refer Metalink Note 458893.1 Oracle Universal Installer (OUI) FAQ

      Does the database work without a Central/Local Inventory?

      Yes, the database continues to work without a Central/Local Inventory.
      However, you will not be able to perform any operations using OUI/OPatch on that Oracle Home (i.e. upgrade, patching, etc.).

      What are the permissions/ownership required for a Central Inventory?

      The permissions required for the Central Inventory are 755 (rwxr-xr-x).
      The Central Inventory should be owned by the owner of the Oracle installations.
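
      For example, on UNIX you can verify and correct these settings with standard OS commands (the inventory path and oracle:oinstall owner below are placeholders for your environment):

      $ ls -ld /u01/app/oraInventory
      $ chown -R oracle:oinstall /u01/app/oraInventory
      $ chmod 755 /u01/app/oraInventory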


      Can you move the Central Inventory (oraInventory) to another location?

      Yes, you can move the Central Inventory (oraInventory) to another location on UNIX
      Refer Metalink Note 299260.1 How To Move The Central Inventory To Another Location

      Does Oracle Support Manually Changing "inst_loc" In The Windows Registry?

      Manually changing the "inst_loc" string in the Windows registry (under HKEY_LOCAL_MACHINE\Software\Oracle or HKLM\Software\Oracle) is officially unsupported.
      Refer Metalink Note 552502.1 Support For Manually Changing "inst_loc" In The Windows Registry

      LOCAL INVENTORY


      What is the Local Inventory ?

      • Oracle Home Inventory or Local Inventory is present inside each Oracle Home.
      • It only contains information relevant to a particular Oracle Home.
        It is located in the following location: 
                                   $ORACLE_HOME/inventory

      What are the Contents of the Local Inventory ?

      Components File
      • This file contains the details about third-party applications like Java Runtime Environment (JRE) required by different Java-based Oracle tools and components.
      • In addition, it also contains details of all the components as well as patchsets or interim patches installed in the Oracle Home.
        This file is located here:
        ORACLE_HOME/inventory/ContentsXML/comps.xml

      Home Properties File
      • This file contains the details about the node list, the local node name, and the CRS flag for the Oracle Home.
      • In a shared Oracle Home, the local node information is not present.
      • This file also contains the following information:
        GUID — Unique global ID for the Oracle Home
        ARU ID — Unique platform ID. The patching and patchset application depends on this ID.
        ARU ID DESCRIPTION — Platform description

        NOTE: The information in oraclehomeproperties.xml overrides the information in inventory.xml. This file is located here:
        $ORACLE_HOME/inventory/ContentsXML/oraclehomeproperties.xml
      Other folders:

      Scripts — Contains the scripts used for the cloning operation.
      ContentsXML — Contains the details of the components and libraries installed.
      Templates — Contains the template files used for cloning.
      oneoffs — Contains the details of the one-off patches applied.


      You have lost the Oracle Home (local) Inventory. What can you do?

      * Oracle recommends backing up the inventory when an Oracle Home is installed or removed.
      * Ensure that you back up the comps.xml with the latest timestamp.
      * When your Oracle Home Inventory is corrupted or lost, you can restore it from backup or re-install the Oracle Home. (Note: When restoring from backup, make sure the backup was taken after the last patch/patchset was installed; otherwise, more recent patches/patchsets will be missing from the inventory.)
      * Rebuilding the Oracle Home inventory is not possible.
      * NOTE : Manually editing ANY files in the inventory directory is not supported by Oracle.

      What are the permissions/ownership required for a Local Inventory?

      The permissions required for Local Inventory are:

      9.2  755 (drwxr-xr-x)
      10.1 755 (drwxr-xr-x)
      10.2 750 (drwxr-x---)
      11.1 750 (drwxr-x---)
      11.2 770 (drwxrwx---)
      12.1 750 (drwxr-x---)  

      Thursday 21 March 2019

      Far Sync Data Guard configuration

      Relevant configuration parameters for this example include:

      Primary (primary):
      --------------------------------------------
      log_archive_dest_2= service="farsync", SYNC NOAFFIRM delay=0 optional compression=disable
      max_failure=1 max_connections=1 reopen=5 db_unique_name="farsync" net_timeout=8,
      alternate=LOG_ARCHIVE_DEST_3 valid_for=(online_logfile,all_roles)

      log_archive_dest_3=service="standby", ASYNC NOAFFIRM delay=0 optional compression=disable
      max_failure=1 max_connections=1 reopen=5 db_unique_name="standby"
      alternate=LOG_ARCHIVE_DEST_2 valid_for=(online_logfile,all_roles)

      log_archive_dest_state_2=ENABLE
      log_archive_dest_state_3=ALTERNATE
      log_archive_config=dg_config=(primary, farsync, standby)
      fal_server= standby

      Primary Far Sync "A" (farsync):

      log_archive_dest_2=service="standby", ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=5 db_unique_name="standby" net_timeout=8, valid_for=(standby_logfile,all_roles)

      log_archive_dest_state_2=ENABLE
      log_archive_config=dg_config=(primary, farsync, standby)
      fal_server=primary

      Standby (standby):
      -----------------------------------------
      log_archive_dest_2=service="primary" ASYNC reopen=5 db_unique_name="primary"
      valid_for=(online_logfile,all_roles)
      log_archive_dest_state_2=ENABLE
      log_archive_config=dg_config=(primary, farsync, standby)
      fal_server=farsync, primary
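
      After setting these parameters, one way to confirm that redo is being shipped to the intended destinations (a sketch, run on the primary) is to check the archive destination status:

      SELECT dest_id, status, type, error
      FROM v$archive_dest_status
      WHERE dest_id IN (2, 3);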


      https://oracledba.blogspot.com/2016/11/using-standby-as-alternate-for-far-sync.html
      https://www.oracle.com/technetwork/database/availability/farsync-2267608.pdf
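
      As a side note, the following query builds the trace file name for the current session: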

      SELECT p1.value|| '/' || p2.value || '_ora_'  || p.spid  ||'.trc'  file_name
      FROM
      v$process p,
      v$session s,
      v$parameter p1,
      v$parameter p2
      WHERE p1.name = 'user_dump_dest'
      AND p2.name = 'db_name'
      AND p.addr = s.paddr
      AND s.audsid = USERENV ('SESSIONID');

      Monday 20 March 2017

      Switchover of database

      Tasks
      Take an archive log backup.
      Verify successful completion of the RMAN backup of the database on the server.
      Make sure the standby DB is in sync with the primary and there are no errors in the DGMGRL "show configuration" output.
      Take a count of the total number of objects and invalid objects in the database, ordered by owner (paste the output in the sheet: Invalid-Object).
      Comment out any cron entries to make sure no jobs run that may impact the activity.

      Inform the application team to bring down the applications related to the DB.
      Perform a couple of log switches.
      Switch over the primary database using DGMGRL (see the sketch after this list).
      Check that the DB service is up and registered with the listener on the server.
      Using DGMGRL, verify the Data Guard setup; there should not be any errors.
      Validate the alert logs.
      Inform the application team to start the applications and wait for confirmation and sign-off.

      Take a count of the total number of objects and invalid objects in the database, ordered by owner. The numbers should not differ from the counts taken before.
      Uncomment/comment the relevant cron entries.
      Make sure there is no issue for app users connecting via TNS; complete any testing and sign-off.
      Validate the database status from OEM.
      Configure the full backup on the server for the database.

      Switch back to the source database.
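
      A minimal DGMGRL sequence for the switchover step above might look like the following sketch (database names are placeholders for your configuration):

      DGMGRL> connect sys/<password>@primary
      DGMGRL> show configuration;
      DGMGRL> switchover to standby;
      DGMGRL> show configuration;

      The before/after object counts can be taken with a query such as:

      SELECT owner, status, COUNT(*)
      FROM dba_objects
      GROUP BY owner, status
      ORDER BY owner, status;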

      Saturday 3 September 2016

      Skip tablespaces during restore and recovery of a database
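
      The tablespace list below can be produced on the source database with a query such as the following (run from a SQL*Plus session):

      SQL> select name from v$tablespace;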

      NAME
      ------------------------------
      SYSTEM
      SYSAUX
      UNDOTBS1
      USERS
      TEMP
      EXAMPLE
      DEV_IAS_ORASDPM
      DEV_MDS
      DEV_SOAINFRA
      DEV_IAS_TEMP
      DEV_ORABAM

      NAME
      ------------------------------
      DEV_PORTAL_LOG
      DEV_PORTAL_DOC
      DEV_PORTAL_IDX
      DEV_IAS_PORTLET
      DEV_PORTAL

      16 rows selected.


      RMAN> connect target u/p@s

      RMAN> set until time "to_date('2011-12-30:00:00:00', 'yyyy-mm-dd:hh24:mi:ss')";

      RMAN> restore database skip tablespace
      'DEV_IAS_ORASDPM','DEV_SOAINFRA',
      'DEV_IAS_TEMP','DEV_ORABAM','DEV_PORTAL_LOG',
      'DEV_PORTAL_DOC','DEV_PORTAL_IDX',
      'DEV_IAS_PORTLET','DEV_PORTAL';

      RMAN> recover database skip tablespace 'DEV_IAS_ORASDPM','DEV_SOAINFRA',
      'DEV_IAS_TEMP','DEV_ORABAM','DEV_PORTAL_LOG',
      'DEV_PORTAL_DOC','DEV_PORTAL_IDX','DEV_IAS_PORTLET','DEV_PORTAL';

      SQL> alter database open resetlogs;

      SQL> select name, status from v$datafile;


      First get the metadata for the schema, and then take an expdp backup.

      SCP your backup to the target DB server.

      Create the schema and then run impdp.

      Compile invalid objects, if any.
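
      A sketch of those commands (directory object, dump file names, and schema are placeholders to adapt):

      $ expdp system/<password> schemas=SCOTT content=metadata_only directory=DATA_PUMP_DIR dumpfile=scott_meta.dmp logfile=scott_meta.log
      $ expdp system/<password> schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_exp.log
      $ scp /u01/app/oracle/admin/ORCL/dpdump/scott.dmp target-host:/u01/app/oracle/admin/ORCL/dpdump/
      $ impdp system/<password> schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_imp.log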


      RMAN Command cheat sheet

      RMAN> list backupset;
      RMAN> list backupset of database;
      RMAN> list backupset of database completed before '22-NOV-00';
      RMAN> list backupset of archivelog all;
      RMAN> list backupset of tablespace users;
      RMAN> list backupset of datafile 1;
      RMAN> list backup summary;
      RMAN> list backup;
      RMAN> list backup of controlfile;
      RMAN> list backup of tablespace SYSTEM;
      RMAN> list backup by file;
      RMAN> list backup of archivelog all ;
      RMAN> list copy of database;
      RMAN> list copy of database archivelog all;
      RMAN> allocate channel for maintenance type disk;
      RMAN> configure channel device type disk clear ;
      RMAN> report need backup days=2 database;
      RMAN> report need backup days=10 tablespace TEMP;
      RMAN> report need backup days=4 
                                 datafile 'D:\ORACLE\ORADATA\OR816\TEMP01.DBF';
      RMAN> report schema;
      RMAN> report obsolete;
      RMAN> crosscheck backup;
      RMAN> crosscheck backupset;
      RMAN> crosscheck copy;
      RMAN> crosscheck archivelog all;
      RMAN> crosscheck controlfilecopy '/ora01/oracle/admin/BACKUP/demo/
                                        demo_controlfile_bak_03-15-11_19:59:11' ;
      RMAN> delete noprompt expired backup ;
      RMAN> delete noprompt obsolete;
      RMAN> delete archivelog all;
      RMAN> delete expired archivelog all;
      RMAN> delete archivelog all completed before 'sysdate -1';
      RMAN> delete noprompt archivelog until time 'sysdate - 1';
      RMAN> delete archivelog all backed up 1 times to device type disk 
                                       completed before 'sysdate-1';

      Monday 15 August 2016

      Types of Locks Within Oracle

      Oracle provides the following three main kinds of locks:

      DML locks
      DDL locks
      Internal locks and latches


      DML Locks

      DML locks or data locks guarantee the integrity of data being accessed concurrently by multiple users. DML locks help to prevent damage caused by interference from simultaneous conflicting DML or DDL operations. By default, DML statements acquire both table-level locks and row-level locks.

      The reference for each type of lock or lock mode is the abbreviation used in the Locks Monitor from Oracle Enterprise Manager (OEM). For example, OEM might display TM for any table lock within Oracle rather than show an indicator for the mode of table lock (RS or SRX).

      Row Locks (TX)

      The primary function of row-level locks is to prevent two transactions from modifying the same row. Whenever a transaction needs to modify a row, Oracle acquires a row lock.

      There is no hard limit on the exact number of row locks held by a statement or transaction. Also, unlike other database platforms, Oracle will never escalate a lock from the row level to a coarser granular level. This row locking ability provides the DBA with the finest granular level of locking possible and, as such, provides the best possible data concurrency and performance for transactions.

      The combination of multiversion concurrency control and row-level locking means that users contend for data only when the same rows are accessed at the same time. Readers of data never have to wait for writers of the same rows, and writers are not required to wait for readers of those rows except when a SELECT ... FOR UPDATE is used.

      Writers will only wait on other writers if they try to update the same rows at the same point in time. In a few special cases, readers of data may need to wait for writers of the same data. For example, concerning certain unique issues with pending transactions in distributed database environments with Oracle.

      A transaction acquires exclusive row locks for individual rows modified by INSERT, UPDATE, and DELETE statements, and also by SELECT with the FOR UPDATE clause.

      Modified rows are always locked in exclusive mode so that other transactions cannot modify the row until the transaction holding the lock commits or rolls back. If a transaction fails to complete due to an instance failure, block-level recovery makes the row available again before the entire transaction is recovered. Oracle acquires these row locks automatically for the DML statements mentioned above.

      Whenever a transaction obtains row locks for a row, it also acquires a table lock for the corresponding table. Table locks prevent conflicts with DDL operations that would cause an override of data changes in the current transaction.

      Table Locks (TM)

      What are table locks in Oracle? Table locks perform concurrency control for simultaneous DDL operations so that a table is not dropped in the middle of a DML operation, for example. When Oracle issues a DDL or DML statement on a table, a table lock is then acquired. As a rule, table locks do not affect concurrency of DML operations. Locks can be acquired at both the table and sub-partition level with partitioned tables in Oracle.

      A transaction acquires a table lock when a table is modified in the following DML statements: INSERT, UPDATE, DELETE, SELECT with the FOR UPDATE clause, and LOCK TABLE. These DML operations require table locks for two purposes: to reserve DML access to the table on behalf of a transaction and to prevent DDL operations that would conflict with the transaction.

      Any table lock prevents the acquisition of an exclusive DDL lock on the same table, and thereby prevents DDL operations that require such locks. For example, a table cannot be altered or dropped if an uncommitted transaction holds a table lock for it.

      A table lock can be held in any of several modes: row share (RS), row exclusive (RX), share (S), share row exclusive (SRX), and exclusive (X). The restrictiveness of a table lock's mode determines the modes in which other table locks on the same table can be obtained and held.
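
      As a small illustration of these lock types (a sketch using the standard SCOTT.EMP table): lock one row in a first session, then inspect V$LOCK from a second session; you should see both a TM lock on the table and a TX lock for the transaction.

      -- Session 1: lock a single row without committing
      SELECT * FROM emp WHERE empno = 7839 FOR UPDATE;

      -- Session 2: observe the TM (table) and TX (transaction) locks
      SELECT sid, type, id1, id2, lmode, request
      FROM v$lock
      WHERE type IN ('TM', 'TX');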

      Sunday 14 August 2016

      Change or switch undo and Temp tablespace in Oracle database

      Change or switch undo tablespace in Oracle database :
      ---------------------------------------------------------

            The complexity arises when there are pending transactions in your old undo tablespace: you have already switched to the new one, but when you try to drop the old one it says "Tablespace is in use".

      How to switch the database to a new UNDO tablespace and drop the old one
      $ sqlplus / as sysdba

      SQL> show parameter undo

      NAME                                 TYPE        VALUE
      ------------------------------------ ----------- ------------------------------
      undo_management                      string      AUTO
      undo_retention                       integer     900
      undo_tablespace                      string      UNDOTBS1


      The current undo tablespace, as suggested by the initialization parameter undo_tablespace, is UNDOTBS1. Leave this sysdba session as is, open another console, log in as user SCOTT, and initiate a transaction.

      $ sqlplus scott/tiger

      SQL>update emp set sal = sal + 1000 where empno=7839;
      1 row updated.

      With an update on the emp table we have initiated a transaction. The undo data is written to a segment in the UNDOTBS1 tablespace. Now leave SCOTT's session intact and go back to the sysdba console without issuing any COMMIT or ROLLBACK.

      SQL>CREATE UNDO TABLESPACE undotbs2
      DATAFILE '/d01/apps/oradata/oraxpo/undotbs201.dbf'
               SIZE 50M AUTOEXTEND ON NEXT 5M;

      Tablespace created.

      -- We created a new UNDO tablespace named UNDOTBS2

      SQL>ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS2 SCOPE=BOTH;

      System altered.

      -- Switch the database to the new UNDO tablespace.

      SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
      DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES
      *
      ERROR at line 1:
      ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

      -- The attempt to drop the tablespace fails.

      With ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS2, the database UNDO tablespace is changed and any new transaction's undo data will go to the new tablespace, i.e. UNDOTBS2.
      But the undo data for an already pending transaction (e.g. the one initiated by SCOTT before the switch) is still in the old tablespace with a status of PENDING OFFLINE. As long as it is there, you cannot drop the old tablespace.

      set lines 10000
      column name format a10

      SELECT a.name,b.status
      FROM   v$rollname a,v$rollstat b
      WHERE  a.usn = b.usn
      AND    a.name IN (
       SELECT segment_name
       FROM dba_segments
       WHERE tablespace_name = 'UNDOTBS1'
      );

      NAME            STATUS
      ----------            ---------------
      _SYSSMU8$  PENDING OFFLINE

      The above query shows the name of the UNDO segment in the UNDOTBS1 tablespace and its status. Now let's see which users/sessions are running this pending transaction.

      column username format a6

      SELECT a.name,b.status , d.username , d.sid , d.serial#
      FROM   v$rollname a,v$rollstat b, v$transaction c , v$session d
      WHERE  a.usn = b.usn
      AND    a.usn = c.xidusn
      AND    c.ses_addr = d.saddr
      AND    a.name IN (
       SELECT segment_name
       FROM dba_segments
       WHERE tablespace_name = 'UNDOTBS1'
      );

      NAME        STATUS            USERNAME    SID    SERIAL#
      ----------  ----------------  --------  -----  ---------
      _SYSSMU8$   PENDING OFFLINE   SCOTT       147          4

      So this is SCOTT with SID=147 and SERIAL#=4. Since we now know the user, we can ask them to end the transaction gracefully, i.e. issue a ROLLBACK or COMMIT. However, if this is not possible (say the user initiated the transaction and left for annual leave :) and trust me, this happens), you may go ahead and kill the session to release the undo segments in the UNDOTBS1 tablespace.

      SQL> alter system kill session '147,4';

      System altered.

      SELECT a.name,b.status , d.username , d.sid , d.serial#
      FROM   v$rollname a,v$rollstat b, v$transaction c , v$session d
      WHERE  a.usn = b.usn
      AND    a.usn = c.xidusn
      AND    c.ses_addr = d.saddr
      AND    a.name IN (
       SELECT segment_name
       FROM dba_segments
       WHERE tablespace_name = 'UNDOTBS1'
      );
      no rows selected

      As we can see, once the session is killed there are no more segments occupied in the UNDOTBS1 tablespace. Let's drop UNDOTBS1.

      SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
      DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES
      *
      ERROR at line 1:
      ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

      If you are retaining undo data, you still won't be able to drop the tablespace because it is still in use for undo retention. Let the UNDO_RETENTION time pass and then try to drop the tablespace again. In my case it is 900 seconds, i.e. 15 minutes.

      -- After 15 minutes.
      SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;

      Tablespace dropped.



      How to drop and recreate TEMP Tablespace in Oracle 9i/10g/11g:
      ----------------------------------------------------------------

      1. Create Temporary Tablespace Temp

      CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/u01/app/oradata/temp01' SIZE 2000M, '/u01/app/oradata/temp02' SIZE 2000M;

      2. Move Default Database temp tablespace

      ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;

      3. Make sure No sessions are using your Old Temp tablespace

         a.  Find Session Number from V$SORT_USAGE:
             SELECT USERNAME, SESSION_NUM, SESSION_ADDR FROM V$SORT_USAGE;

         b.  Find Session ID from V$SESSION:

             If the result set contains any rows, your next step is to find the SID from the V$SESSION view. You can find the session ID by using SESSION_NUM or SESSION_ADDR from the previous result set.

             SELECT SID, SERIAL#, STATUS FROM V$SESSION WHERE SERIAL#=SESSION_NUM;
             OR
             SELECT SID, SERIAL#, STATUS FROM V$SESSION WHERE SADDR=SESSION_ADDR;

          c.  Kill Session:

          Now kill the session with IMMEDIATE.

          ALTER SYSTEM KILL SESSION 'SID,SERIAL#' IMMEDIATE;

      4. Drop temp tablespace

      DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;

      5. Recreate Tablespace Temp

      CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u01/app/temp/temp01' SIZE 2000M;

      6. Move the default temporary tablespace back to TEMP

      ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

      7. Drop the old temporary tablespace TEMP2

      DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;

       There is no need to shut down the database when dropping and recreating the temp tablespace. If something happens to the temp tablespace (e.g. crash, corruption), the Oracle database will ignore the error, but DML (insert, update, delete) and SELECT queries that need temp space will suffer.
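
      To verify the final state, you can list the temp files of the new TEMP tablespace (a quick check, not part of the original steps):

      SELECT tablespace_name, file_name, bytes/1024/1024 AS size_mb
      FROM dba_temp_files;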
