Friday, 31 August 2018

With Clause

With Clause:

The WITH clause works as a temporary table (a named result set) while writing queries. We use this clause to simplify our queries, especially when they involve aggregate functions.


Syntax:

With <any_name> as
(sql query)
select ... from <any_name>;

eg:

with avg_sal as (
select avg(sal) avgsal, deptno from emp
group by deptno)
select ename, empno, e.deptno, dname, avgsal
from emp e
join dept d1 on d1.deptno = e.deptno
join avg_sal a1 on a1.deptno = e.deptno;


Here avg_sal works as a temporary table for this query; it is also used in a join to pull the avgsal column into the result.

SQL> with avg_sal as (
  2  select avg(sal) avgsal,deptno from emp
  3  group by deptno)
  4  select ename,empno,e.deptno ,dname,avgsal from emp e join dept D1 on d1.deptno  = e.deptno
  5  join avg_Sal a1 on a1.deptno  = e.deptno
  6  ;
 
ENAME      EMPNO DEPTNO DNAME              AVGSAL
---------- ----- ------ -------------- ----------
MILLER      7934     10 ACCOUNTING         3487.5
KING        7839     10 ACCOUNTING         3487.5
CLARK       7782     10 ACCOUNTING         3487.5
VENKY       1234     10 ACCOUNTING         3487.5
FORD        7902     20 RESEARCH             2175
ADAMS       7876     20 RESEARCH             2175
SCOTT       7788     20 RESEARCH             2175
JONES       7566     20 RESEARCH             2175
SMITH       7369     20 RESEARCH             2175
JAMES       7900     30 SALES          1566.66666
TURNER      7844     30 SALES          1566.66666
BLAKE       7698     30 SALES          1566.66666
MARTIN      7654     30 SALES          1566.66666
WARD        7521     30 SALES          1566.66666
ALLEN       7499     30 SALES          1566.66666
 
15 rows selected
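A WITH clause can also define several named blocks, where each block may reference the ones before it. A small sketch on the same emp table (dept_sal and high_depts are just illustrative names):

with dept_sal as (
select deptno, avg(sal) avgsal from emp
group by deptno),
high_depts as (
select deptno from dept_sal where avgsal > 2000)
select e.ename, e.deptno
from emp e
join high_depts h on h.deptno = e.deptno;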
 

Friday, 22 June 2018

Working with FORALL (BULK DML) and Understanding SAVE EXCEPTIONS

FORALL:
We use the FORALL statement when we want to perform DML (INSERT, UPDATE, DELETE) operations on data gathered with BULK COLLECT.

It is much faster than the equivalent row-by-row FOR LOOP.


The PL/SQL guide describes it as below:

  • FORALL statement runs one DML statement multiple times, with different values in the VALUES and WHERE clauses.
  • The different values come from existing, populated collections or host arrays. The FORALL statement is usually much faster than an equivalent FOR LOOP statement.

So if you insert N rows using a row-by-row FOR LOOP, it’ll execute the INSERT statement N times. If you go for bulk DML with FORALL instead, it’ll bind the same INSERT statement to N values. This reduces the overhead of context switching between SQL and PL/SQL.
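To make the contrast concrete, here is a minimal sketch (emp_backup is a hypothetical copy of EMP used only for illustration; the two statements are alternatives, shown together for comparison):

DECLARE
  TYPE ty_empno IS TABLE OF emp.empno%TYPE;
  v_empnos ty_empno;
BEGIN
  SELECT empno BULK COLLECT INTO v_empnos FROM emp;

  -- Row-by-row: one SQL/PL-SQL context switch per UPDATE
  FOR i IN 1 .. v_empnos.COUNT LOOP
    UPDATE emp_backup SET sal = sal * 1.1 WHERE empno = v_empnos(i);
  END LOOP;

  -- Bulk: the same UPDATE bound to all collection values in one switch
  FORALL i IN 1 .. v_empnos.COUNT
    UPDATE emp_backup SET sal = sal * 1.1 WHERE empno = v_empnos(i);
END;
/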

Let's start with a scenario to understand this practically.

We are taking the below tables:



CREATE TABLE IDENTITY_DET(PAN VARCHAR2(10) PRIMARY KEY, NAME VARCHAR2(100),NATIONALITY VARCHAR2(100) DEFAULT 'INDIA');


CREATE TABLE PEOPLE_DET(PAN VARCHAR2(10), NAME VARCHAR2(100),WORK_STATUS VARCHAR2(1) DEFAULT 'Y');


SQL> SELECT * FROM PEOPLE_Det;

PAN               NAME          WORK_STATUS
----------------- ------------- -----------
AMZPV1234F        VENKY         Y
AMZPV1234F        VENKY         Y
APZPV1234F        CHINNI        N
BPZPV1234F        SATISH        N
CPZPV1234F        NIKITH        Y
CPZPV1234F        NIKITH        Y
DPZPV1234F        RAJU          Y

7 rows selected
SQL> select * from Identity_Det;

PAN        NAME      NATIONALITY
---------- --------- -----------

no rows selected

"PEOPLE_Det" has data with duplicate PAN values, its PAN column has no unique constraint, and "Identity_Det" is still empty.

We are going to load the data from PEOPLE_DET into IDENTITY_DET, but the PAN column in the target table has a primary key constraint, so duplicate values will be rejected with an error.

So we also need to handle those errors during this bulk DML operation with FORALL.
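To see the duplicates that will cause trouble, a quick check like this works:

SELECT pan, COUNT(*)
  FROM people_det
 GROUP BY pan
HAVING COUNT(*) > 1;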


DECLARE
  TYPE TY_TB_V_PEOPLE_DET IS TABLE OF PEOPLE_DET%ROWTYPE;
  V_TY_TB_V_PEOPLE_DET TY_TB_V_PEOPLE_DET;
  l_error_count        NUMBER;
  IDX                  INTEGER := 0;
  CURSOR CUR_People_Det IS
    SELECT * FROM People_Det;
  bulk_dml_exp EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_dml_exp, -24381);
BEGIN
  --Initializing Nested table
  V_TY_TB_V_PEOPLE_DET := TY_TB_V_PEOPLE_DET();
  FOR K IN CUR_People_Det LOOP
    --Allocating space Using Extend Method In loop so it will allocate until loop executes
    V_TY_TB_V_PEOPLE_DET.EXTEND;
    IDX := IDX + 1;
    V_TY_TB_V_PEOPLE_DET(IDX).PAN := K.PAN;
    V_TY_TB_V_PEOPLE_DET(IDX).NAME := K.NAME;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE(' Count ' || V_TY_TB_V_PEOPLE_DET.COUNT);

  --Start Bulk DML Operation and handling exceptions using "Save Exceptions"
  BEGIN
    DBMS_OUTPUT.PUT_LINE(' Count before Forall ' || V_TY_TB_V_PEOPLE_DET.COUNT);
    FORALL MY_REC IN 1 .. V_TY_TB_V_PEOPLE_DET.COUNT SAVE EXCEPTIONS
    --Inserting data in bulk using FORALL
    
      INSERT INTO Identity_Det VALUES V_TY_TB_V_PEOPLE_DET (MY_REC);
  EXCEPTION
    WHEN bulk_dml_exp THEN
      l_error_count := SQL%BULK_EXCEPTIONS.COUNT;
      DBMS_OUTPUT.PUT_LINE('Error Count ' || l_error_count);
      for j in 1 .. l_error_count loop
        DBMS_OUTPUT.PUT_LINE('Error Count ' || l_error_count);
        dbms_output.put_line('Error ' || j || ' at Index ' ||
                             SQL%BULK_EXCEPTIONS(j)
                             .ERROR_INDEX || ' ERR MSG ' ||
                             SQLERRM(-SQL%BULK_EXCEPTIONS(J).ERROR_CODE));
      END LOOP;
  END;
EXCEPTION
  WHEN OTHERS THEN
  
    DBMS_OUTPUT.PUT_LINE('Error MSG ' || SQLERRM || ' LINE ' ||
                         DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
END;

Output:

 Count 7
 Count before Forall 7
Error Count 2
Error Count 2
Error 1 at Index 2 ERR MSG ORA-00001: unique constraint (.) violated
Error Count 2
Error 2 at Index 6 ERR MSG ORA-00001: unique constraint (.) violated

PL/SQL procedure successfully completed



The valid records got inserted, and at the same time we handled the errors with the SAVE EXCEPTIONS clause.

Table output:

SQL> select * from Identity_Det;

PAN        NAME   NATIONALITY
---------- ------ -----------
AMZPV1234F VENKY
APZPV1234F CHINNI
BPZPV1234F SATISH
CPZPV1234F NIKITH
DPZPV1234F RAJU



What's SAVE EXCEPTIONS?

If any error occurs during Bulk DML, the SAVE EXCEPTIONS clause instructs the PL/SQL engine to store this information in a pseudo collection SQL%BULK_EXCEPTIONS and proceed with the next value in the collection.

After iterating through the whole collection, error ORA-24381 is raised if there were any errors during the bulk DML.

ORA-24381: error(s) in array DML

ORA-24381 can be trapped in an EXCEPTION handler, and the contents of SQL%BULK_EXCEPTIONS can be extracted in a FOR LOOP.

Each value in the pseudo collection can be referenced using:

SQL%BULK_EXCEPTIONS(i).ERROR_INDEX: the iteration number in the FORALL statement during which the exception was raised.


SQL%BULK_EXCEPTIONS(i).ERROR_CODE: the Oracle error code of the exception.

PRAGMA EXCEPTION_INIT Scenario

PRAGMA EXCEPTION_INIT associates an exception name with an Oracle error number.


Pragma: a keyword telling the compiler that the statement is processed at compile time.

EXCEPTION_INIT: it takes 2 arguments:

PRAGMA EXCEPTION_INIT(exception_name, -error_number);

We can associate any (negative) Oracle error number with a suitably named exception for the scenario.

Take the below table for a scenario:

create table Exp_Test(Id integer);

SQL> select * from exp_test;

 ID
---
 -1
  2
  3
 -2
  6

eg:


DECLARE
  Invalid_Num Exception;
  Pragma Exception_Init(Invalid_Num, -100);
BEGIN
  FOR K IN (SELECT ID FROM EXP_TEST) LOOP
    BEGIN
      IF SIGN(K.ID) = -1 THEN
        RAISE Invalid_Num;
      ELSE
        DBMS_OUTPUT.PUT_LINE(' Id is ' || k.id);
      END IF;
    EXCEPTION
      WHEN Invalid_Num THEN
        DBMS_OUTPUT.PUT_LINE(' Id in Exception ' || k.id);
    END;  
  END LOOP;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE(' Err msg  is ' || SQLERRM);

END;

Output:
 Id in Exception -1
 Id is 2
 Id is 3
 Id in Exception -2
 Id is 6


PL/SQL procedure successfully completed

Wednesday, 20 June 2018

ORA-30926: unable to get a stable set of rows in the source tables


We usually get the "ORA-30926: unable to get a stable set of rows in the source tables" error when working with a MERGE statement, especially when the same table is used as both target and source.

SQL> merge into merge_test mt
  2  using merge_test mt1
  3  on( mt.npa= mt1.npa)
  4  when matched then
  5    update set mt.m_end  = mt1.m_end
  6    delete  where mt.m_start= mt1.m_end
  7  ;

merge into merge_test mt
using merge_test mt1
on( mt.npa= mt1.npa)
when matched then
  update set mt.m_end  = mt1.m_end
  delete  where mt.m_start= mt1.m_end

ORA-30926: unable to get a stable set of rows in the source tables

We get this error because the ON condition matches each row with itself (the same ROWID); to exclude self-matches, we add a ROWID comparison as below.


SQL> 
  2  merge into merge_test mt
  3  using merge_test mt1
  4  on( mt.npa= mt1.npa and mt.rowid <> mt1.rowid)
  5  when matched then
  6    update set mt.m_end  = mt1.m_end
  7    delete  where mt.m_start= mt1.m_end
  8  ;


Done

Working with Merge Statement


What is the use of the MERGE statement: whenever we want to perform insert, update, and delete based on a specific condition in a single statement, we use MERGE.


It selects data from one or more source tables, and the insert/update/delete is performed on the target table.

MERGE INTO target_table
USING source_table
ON (search_condition)
    WHEN MATCHED THEN
        UPDATE SET col1 = value1, col2 = value2,...
        WHERE <update_condition>
        DELETE WHERE <delete_condition>
    WHEN NOT MATCHED THEN
        INSERT (col1, col2, ...)
        VALUES (value1, value2, ...)
        WHERE <insert_condition>;

Here target_table is the table we want to insert into, update, or delete from, and source_table supplies the data used for those operations as well as for the search condition.

Based on the search condition: if it is true, the update (and optional delete) is performed; if it is false, the insert is performed.

Eg 1:
create table EMPLOYEE_DET
(
  EMPNO   NUMBER(4),
  ENAME   VARCHAR2(10),
  DEPTNO  NUMBER(2),
  ADDRESS VARCHAR2(200)

);

create table HR_RECORDS
(
  EMPNO   INTEGER,
  ENAME   VARCHAR2(10),
  DEPTNO  NUMBER(2),
  ADDRESS VARCHAR2(200)
);


The requirement: update the address column in EMPLOYEE_DET from the HR_RECORDS table, and insert into the target any new employee records that exist only in the HR table.


SQL> SELECT * FROM employee_det;

EMPNO ENAME      DEPTNO ADDRESS
----- ---------- ------ ----------------
  101 VENKY          10 Hyderabad
  102 TEJA           10 CHENNAI
  103 SATISH         20 HYDERABAD

SQL> SELECT * FROM HR_RECORDS;

EMPNO ENAME      DEPTNO ADDRESS
----- ---------- ------ ----------------
  101 VENKY1         10 BANGALORE
  102 TEJA           10 NCR
  103 SATISH         20 DELHI
  104 NIKITH         30 HYDERABAD
  105 RAJU           30 CHENNAI



That is our tables' data; now let's meet the requirement using the below MERGE statement.

SQL> MERGE INTO EMPLOYEE_DET ed
  2  USING HR_RECORDS hr
  3  on (hr.empno= ed.empno)
  4  WHEN MATCHED THEN
  5    UPDATE SET ED.ADDRESS = HR.ADDRESS
  6  WHEN NOT MATCHED THEN
  7    INSERT (EMPNO ,ENAME, DEPTNO ,ADDRESS)
  8    VALUES (HR.EMPNO ,HR.ENAME, HR.DEPTNO ,HR.ADDRESS);

Done

SQL> SELECT * FROM EMPLOYEE_DET;

EMPNO ENAME      DEPTNO ADDRESS
----- ---------- ------ ----------------
  101 VENKY          10 BANGALORE
  102 TEJA           10 NCR
  103 SATISH         20 DELHI
  105 RAJU           30 CHENNAI
  104 NIKITH         30 HYDERABAD

With a single MERGE statement, the existing addresses were updated and the new records were inserted into EMPLOYEE_DET.



SQL> create table merge_test(npa integer, nxx integer,m_start varchar2(20),m_end varchar2(20));

Table created

SQL> insert into merge_test values( 800,123,'0000','0004');

1 row inserted

SQL> insert into merge_test values( 800,123,'0004','9999');

1 row inserted

SQL> insert into merge_test values( 900,124,'0000','0005');

1 row inserted

SQL> insert into merge_test values( 900,124,'0005','0099');

1 row inserted

SQL> select * from merge_test;

NPA NXX M_START M_END
--- --- ------- -----
800 123 0000    0004
800 123 0004    9999
900 124 0000    0005
900 124 0005    0099

Say we have the above data, and the requirement is: whenever an NPA/NXX combination continues with a new m_start value, we take that newer record's m_end, update it into the first record, and delete the newer record.

Sample output as below:

NPA NXX M_START M_END
--- --- ------- -----
800 123 0000    9999
900 124 0000    0099


SQL> MERGE INTO MERGE_TEST MT
  2  USING MERGE_TEST MT1
  3  ON( MT.NPA= MT1.NPA AND MT.ROWID <> MT1.ROWID)
  4  WHEN MATCHED THEN
  5    UPDATE SET MT.M_END  = MT1.M_END
  6    DELETE  WHERE MT.M_START= MT1.M_END
  7  ;
Done
SQL> select * from merge_test;

NPA NXX M_START M_END
--- --- ------- -----
800 123 0000    9999
900 124 0000    0099

As you observed, we check the condition on the NPA column and on ROWID as well; the ROWID comparison was added to avoid the ORA-30926 error (see the ORA-30926 post above for the role of ROWID in a MERGE statement).

When the condition matches, we update and then delete the qualifying records.



Tuesday, 19 June 2018

Working With Trace File and TKPROF

Let's start with the basics.


What is a Trace File:

It is a dump file created by the Oracle database to help us diagnose and resolve performance-related issues.


Each server and background process (e.g. checkpoint, log writer) writes to a trace file. When a process detects an internal error, it writes information about the error to its trace file.

Trace files have the extension ".trc".



A trace file is not in a human-readable format, so we use TKPROF to convert it into a readable text file.


For a query to identify the trace file, see the "Query to Identify Trace File" post below.


What is TKProf:

TKProf is an Oracle database utility used to format SQL trace output into a human-readable report. The tkprof executable is located in the $ORACLE_HOME/bin directory.
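Its general invocation looks like this (explain, sys, waits and sort are optional arguments; credentials are placeholders):

tkprof <trace_file.trc> <output_file.txt> [explain=username/password] [sys=no] [waits=yes] [sort=option]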

Trace file name format:

The file name format of a trace file is sid_processname_unixpid.trc, where:

  • sid is the instance system identifier;
  • processname is a three- or four-character abbreviation identifying the Oracle Database process that generated the file (for example pmon, dbwr, ora, or reco);
  • unixpid is the process ID number.

eg: venky_ora_9964.trc

Here venky is the instance name, ora is the process name, and 9964 is the process ID.
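To actually produce a trace file for the current session, SQL trace can be switched on and off like this (a minimal sketch):

-- enable SQL trace for this session
ALTER SESSION SET sql_trace = TRUE;

-- ... run the statements you want traced ...

-- switch tracing off again
ALTER SESSION SET sql_trace = FALSE;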




Once tracing is enabled, Oracle creates the trace file for the session; we can locate it using the below query.




SQL> SELECT s.sid,
  2         s.serial#,
  3         pa.value || '/' || LOWER(SYS_CONTEXT('userenv', 'instance_name')) ||
  4         '_ora_' || p.spid || '.trc' AS trace_file
  5    FROM v$session s, v$process p, v$parameter pa
  6   WHERE pa.name = 'user_dump_dest'
  7     AND s.paddr = p.addr
  8     AND s.audsid = SYS_CONTEXT('USERENV', 'SESSIONID');
 SID SERIAL# TRACE_FILE
---- ------- --------------------------------------------------------------
   6     121 c:\app\venky\diag\rdbms\venky\venky\trace/venky_ora_9964.trc



C:\Users\Venky>tkprof C:\app\venky\diag\rdbms\venky\venky\trace\venky_ora_9964.trc my_tkprof_file.txt explain=scott/tiger waits=yes


TKPROF: Release 11.2.0.1.0 - Development on Tue Jun 19 15:16:58 2018



Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.





Using the tkprof utility, we converted the trace file into a human-readable format, as shown above.



The resulting output file contains the following information for all SQL statements processed:

****************************************************************
count   = number of times the OCI procedure was executed
cpu     = CPU time in seconds spent executing
elapsed = elapsed time in seconds spent executing
disk    = number of physical reads of buffers from disk
query   = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows    = number of rows processed by the fetch or execute call

ORA-02030: can only select from fixed tables/views

While granting a SELECT privilege to the SCOTT schema, we faced the following issue.

"ORA-02030: can only select from fixed tables/views"




Fix:

The error indicates that the grant must be made on an actual (fixed) table or view, and the object we were granting on is not one.

So let's check the object. It turns out to be a synonym, so let's find the base object behind this synonym.

The base view is "V_$PARAMETER", so let's give the permission on that instead.
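The checks can be reproduced with queries along these lines (run as a privileged user such as SYS; only the grant itself is specific to this scenario):

-- V$PARAMETER turns out to be a public synonym, not a view
SELECT owner, object_type
  FROM dba_objects
 WHERE object_name = 'V$PARAMETER';

-- find the base object behind the synonym
SELECT table_owner, table_name
  FROM dba_synonyms
 WHERE synonym_name = 'V$PARAMETER';

-- grant on the underlying fixed view instead of the synonym
GRANT SELECT ON sys.v_$parameter TO scott;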




Query to Identify Trace File

In 11g:

Identification of trace files is simpler in Oracle 11g due to the introduction of diagnostic views.


SELECT *
FROM   v$diag_info
WHERE  name = 'Default Trace File';

In versions prior to 11g:

SELECT s.sid,
       s.serial#,
       pa.value || '/' || LOWER(SYS_CONTEXT('userenv', 'instance_name')) ||
       '_ora_' || p.spid || '.trc' AS trace_file
  FROM v$session s, v$process p, v$parameter pa
 WHERE pa.name = 'user_dump_dest'
   AND s.paddr = p.addr
   AND s.audsid = SYS_CONTEXT('USERENV', 'SESSIONID');


To identify the trace file for a specific session:

SELECT p.tracefile
FROM   v$session s
       JOIN v$process p ON s.paddr = p.addr
WHERE  s.sid = 309;

We can get the SID from the below query:

select * from v$session where schemaname = 'PHPMETREG' and status = 'ACTIVE';


From version 12.2 onwards we can use the below query:

SELECT *
FROM   v$diag_trace_file;

Monday, 18 June 2018

Conventional and Direct Path Loads in SQL Loader OR General INSERT Statement

SQL*Loader provides two methods for loading data:

  • conventional path load
  • direct path load

Note: these two methods apply to a normal INSERT statement as well (see the sketch below).
Direct path loads can be significantly faster than conventional path loads.
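For a plain INSERT, the direct-path behaviour is requested with the APPEND hint. A quick sketch (emp_backup is a hypothetical copy of EMP, used only for illustration):

-- conventional insert: goes through the buffer cache and reuses free space
INSERT INTO emp_backup SELECT * FROM emp;

-- direct-path insert: formats blocks and writes them above the high-water mark
INSERT /*+ APPEND */ INTO emp_backup SELECT * FROM emp;
COMMIT;  -- a commit is required before the same session can query the table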



[Figure: Database Writes on SQL*Loader Direct Path and Conventional Path]

In the figure we can observe that the direct path bypasses the "SQL Command Processing" step: it writes data directly to the data blocks.

A direct path load eliminates much of the Oracle database overhead by formatting Oracle data blocks and writing the data blocks directly to the database files. A direct load does not compete with other users for database resources, so it can usually load data at near disk speed. 





Conventional path load (the default) uses the SQL INSERT statement and a bind array buffer to load data into database tables. This method is used by all Oracle tools and applications.

When SQL*Loader performs a conventional path load, it competes equally with all other processes for buffer resources. This can slow the load significantly. Extra overhead is added as SQL statements are generated, passed to Oracle, and executed.



Direct path basically won't reuse free space in the table (e.g. space freed by deleted records), and it won't write data to the database buffer cache; data goes directly to the data files and is appended after the existing data.

In conventional path, free space is reused: data first goes to the buffer cache and then to the data files, filling whatever free space is available.


By default SQL*Loader takes the conventional path, so to run it with a direct path load we need to set "DIRECT=TRUE" on the command line, or in the par file if you're using a parameter file.
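For example (credentials and paths are placeholders):

sqlldr scott/tiger@venkydb control=C:\Users\Venky\Desktop\Loader\loader.ctl direct=true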

Tuesday, 12 June 2018

Working With SQL Loader

Let's begin with basics on SQL Loader topic.

SQL Loader: it is a utility used for loading data from the operating system level into the database. The data can be stored in any kind of text file.

Using SQL*Loader we can load data into multiple tables in the same load session, and we can also add filters and SQL functions according to our requirements.

SQL*Loader reads and processes data based on the instructions in a control file.


[Figure: SQL*Loader Overview]


Control File: a text file that tells SQL*Loader how the data needs to be processed: where the data is located, how to parse it, which table(s) to insert into, and more.

Bad File: contains the records rejected during loading (for example, due to validation failures).

If you don't specify a bad file and rejected records exist, SQL*Loader creates one automatically, with the same name as the data file and a .bad extension.

Log File: created automatically when you run SQL*Loader. By default it takes the same name as the control file with a .log extension, or you can specify your own log name.

Discard File:

Contains the records that were not loaded because they failed the WHEN condition specified in the control file.
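All three files can also be named explicitly on the sqlldr command line; for instance (paths and credentials are placeholders):

sqlldr scott/tiger@venkydb control=loader.ctl log=loader.log bad=loader.bad discard=loader.dsc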

To know the methods of loading data, see the "Conventional and Direct Path Loads" post above.


In the log you can see how SQL*Loader processed the load, including any errors raised while processing.

Let's create a table from the EMP table which we usually have in our schema.

SQL> create table emp_loader as select * from emp where 1=2;

Table created


Remember that we MUST have INSERT and DELETE privileges on whichever table we're trying to load.

eg: here we are going to load into the "emp_loader" table, so we need the privileges mentioned above.


SQL> grant insert,delete on emp_loader to <Schema_Name>;

Grant succeeded

Assume we have a CSV file with the following data to insert into the "emp_loader" table.

101,VENKY,IT,105,05022018,2500,,50
102,VENKATESH,IT,105,05052018,3200,,50

Assume the file is named "t_emp_file.CSV" and is stored at "C:\Users\Venky\Desktop\Loader\t_emp_file.CSV".

Let's start with the basic commands.

First we need to create the control file.

Scenario 1:
Say we created a control file named "loader.ctl" with the below content.

LOAD DATA
  INFILE 'C:\Users\Venky\Desktop\Loader\t_emp_file.CSV'
  into table emp_loader
  fields terminated by ","
  (empno,ename,job,mgr,hiredate date "ddmmyyyy",sal,comm,deptno)

Juz keep a note on hiredate format in control file, this format match with csv file data.

Go to command prompt and type sqlldr to see list of options available for loader.

Syntax for running the loader:

>sqlldr username/password@db_name control=<control_file_path>
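So for this scenario the command would look something like this (credentials are placeholders):

>sqlldr scott/tiger@venkydb control=C:\Users\Venky\Desktop\Loader\loader.ctl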


Loading text file data using SQL Loader:



My working directory is "C:\Users\Venky", so the log file is generated there by default, since I didn't mention a specific path in the command.

The loader run read 2 records from the CSV file; verify from the log file whether they were loaded, and if so, query the table to confirm.




So we have successfully loaded data from the CSV file into the database.

Scenario 2:

How do we append data? In other words, how do we load data into an existing table that already has rows?

The simple solution is to use the APPEND keyword in the control file, as below.

Without it, we usually get "SQL*Loader-601: For INSERT option, table must be empty" when the target table already contains records.

Solution:

New Control file:

 LOAD DATA
  INFILE 'C:\Users\Venky\Desktop\Loader\t_emp_file.CSV'
  append
  into table emp_loader
  fields terminated by ","
  (empno,ename,job,mgr,hiredate date "ddmmyyyy",sal,comm,deptno)


We place "append" just before the INTO clause.

Now let's run it with the CSV file again; the table should end up with 4 records.

Loading text file data using SQL Loader:







Scenario 3:
How do we use functions in the SQL*Loader control file?

Eg: we want to convert the employee name to lower case while inserting, and if the job is "IT" then insert "Info_Tech", otherwise "In_Bench".

Solution:

new csv file:

103,Teja,IT,101,05MAY2018,2500,,50

104,Nikith,,101,10FEB2018,3200,,50

new control file:

 LOAD DATA
  INFILE 'C:\Users\Venky\Desktop\Loader\t_emp_file.CSV'
  append
  into table emp_loader
  fields terminated by ","
  (empno,
   ename "lower(:ename)",
   job "decode(:job,'IT','Info_Tech','In_Bench')",
   mgr,
   hiredate date "ddmonyyyy",sal,comm,deptno)

Loading text file data using SQL Loader:






For the new records, the "job" column now holds the decoded value and the ename column is in lower case; this is how multiple functions can be applied in the control file as per our requirement.


Scenario 4:

How do we load data from multiple files?

one csv has following data:

file name: t_emp_file.CSV

105,Chinnu,IT,104,05MAY2018,2500,,50


another csv has following data:

file name: t_emp_file_1.CSV

106,Raju,IT,104,05MAY2016,5500,,50

Now look at the control file: below is its content, and as you can see there are two file names mentioned to load data from.


 LOAD DATA
  INFILE 'C:\Users\Venky\Desktop\Loader\t_emp_file.CSV'
  INFILE 'C:\Users\Venky\Desktop\Loader\t_emp_file_1.CSV'  
  append
  into table emp_loader
  fields terminated by ","
  (empno,
   ename "lower(:ename)",
   job "decode(:job,'IT','Info_Tech','In_Bench')",
   mgr,
   hiredate date "ddmonyyyy",

   sal,comm,deptno)


Loading text file data using SQL Loader:






The loader run confirms that data was loaded from both CSV files.


Scenario 5:

How do we load data into multiple tables?

Let's create the emp_personal table as below.

SQL> create table emp_personal(emp_id integer,dob date,native varchar2(100));

Table created

We will also reuse our previous emp_loader table here.

Note: when we want to load into multiple tables, we have to use the POSITION keyword in the control file to identify the data for the respective columns.

Let's look at the control file.

 LOAD DATA
  INFILE 'C:\Users\Venky\Desktop\Loader\t_emp_file.CSV'
  append
  into table emp_loader
  (empno position(1:3),
   ename position(5:11),
   job position(12:13),
   mgr position(15:17),
   hiredate  position(19:27),
   sal position(29:32),comm position(34),deptno position(35:36))
   
  into table emp_personal
  
  (emp_id position(1:3),
   dob position(38:46),

   native position(48:56))

csv file content:


105 Chinnu IT 104 05MAY2018 2500  50 05FEB1989 BANGALORE

Loading text file data using SQL Loader:





Data is loaded into the 2 tables from a single CSV file.




Scenario 6:


How do we load fixed-length data from a file?

In some scenarios, clients provide files without any delimiter but with a fixed length for each field in the text file.

eg: the first 3 characters are the empid, the next 7 characters the employee name, the next 2 characters the job, and so on.


csv file content

105Chinnu IT10405MAY20182500 20 50

106LikithaIT10205APR20183500030050

Control file 

 LOAD DATA
  INFILE 'C:\Users\Venky\Desktop\Loader\t_emp_file.CSV'
  append
  into table emp_loader
  (empno position(1:3),
   ename position(4:10),
   job position(11:12),
   mgr position(13:15),
   hiredate  position(16:24),
   sal position(25:29),
   comm position(30:32),

   deptno position(33:34))


As you can see, the text file has no delimiter, so the control file uses the POSITION keyword with the exact length of each field.

Loading text file data using SQL Loader:






Data is loaded according to the lengths we specified in the control file.


Scenario 7:


Can we transform data while loading it into the database?

For this scenario, let's create a new table and a sequence, as below.


SQL> create table loader_change_data(Id integer,name varchar2(20),DOB date,Country varchar2(20),Graduation_GPA number(3,2),Time varchar2(6));

Table created

SQL> create sequence seq_id start with 1 maxvalue 10000 nocache;

Sequence created

text file content:

venky       05021989789

Venkatesh   05021991916

The file is named "changing_data.txt".


Control File :

LOAD DATA
INFILE 'C:\Users\Venky\Desktop\Loader\changing_data.txt'
APPEND
INTO TABLE loader_change_data
(Id "seq_id.NEXTVAL",
name POSITION (1:12) "UPPER(:name)",
DOB POSITION (13:20) "TO_DATE(:DOB,'DDMMYYYY')",
Country CONSTANT 'INDIA',
Graduation_GPA POSITION (21:23) ":Graduation_GPA/100",
Time "TO_CHAR(SYSDATE,'HH24:MI')"

)

We are loading data into "loader_change_data", using the POSITION keyword since the file has fixed-length data.

As you can observe in the control file above, we used a sequence for the Id column, UPPER, TO_DATE, TO_CHAR, and CONSTANT, and we also divided "Graduation_GPA" by 100 to get a decimal value.

Loading text file data using SQL Loader:

C:\Users\Venky>sqlldr phpmetreg/phpmetreg@jpac12c control=C:\Users\Venky\Desktop\Loader\loader_data_change.ctl log=C:\Users\Venky\Desktop\Loader\changing_data.LOG

SQL*Loader: Release 11.2.0.1.0 - Production on Fri Jun 15 16:04:07 2018

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Commit point reached - logical record count 1

Commit point reached - logical record count 2





Scenario 8:


How do we add conditions while loading data?

We will use the above table with the same data file. The scenario: wherever the name is "venky", that record should not be loaded.

For this, the control file gets a WHEN clause, as follows.

Data file content (same as before):

venky       05021989789

Venkatesh   05021991916

LOAD DATA
INFILE 'C:\Users\Venky\Desktop\Loader\changing_data.txt'
APPEND
INTO TABLE loader_change_data
WHEN name<>'venky'
(Id "seq_id.NEXTVAL",
name POSITION (1:12) "UPPER(:name)",
DOB POSITION (13:20) "TO_DATE(:DOB,'DDMMYYYY')",
Country CONSTANT 'INDIA',
Graduation_GPA POSITION (21:23) ":Graduation_GPA/100",
Time "TO_CHAR(SYSDATE,'HH24:MI')"

)

Loading text file data using SQL Loader:

C:\Users\Venky>sqlldr phpmetreg/phpmetreg@jpac12c control=C:\Users\Venky\Desktop\Loader\loader_data_change.ctl log=C:\Users\Venky\Desktop\Loader\changing_data.LOG

SQL*Loader: Release 11.2.0.1.0 - Production on Fri Jun 15 17:28:03 2018

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Commit point reached - logical record count 2



Scenario 9:


How do we handle null values in the text file?

We can handle them by using "TRAILING NULLCOLS" in the control file.

CSV file content:

:venky:05021989:789:
:Venkatesh:05021991:916:


The table has 6 columns, but the file needs only four ':' separators (five fields) instead of five, because one column gets a constant value from the control file and therefore isn't present in the file at all.

You might think the Id column (from the sequence) and the Time column (from SYSDATE) would let us drop two more ':' separators, but those columns still consume (empty) fields from the file; only columns loaded with CONSTANT are skipped entirely.

Control File:

LOAD DATA
INFILE 'C:\Users\Venky\Desktop\Loader\changing_data.txt'
APPEND
INTO TABLE loader_change_data
fields terminated by ':'
TRAILING NULLCOLS
(Id "seq_id.NEXTVAL",
name  "UPPER(:name)",
DOB  "TO_DATE(:DOB,'DDMMYYYY')",
Country CONSTANT 'INDIA',
Graduation_GPA ":Graduation_GPA/100",
Time "TO_CHAR(SYSDATE,'HH24:MI')"

)

Loading text file data using SQL Loader:


C:\Users\Venky>sqlldr phpmetreg/phpmetreg@jpac12c control=C:\Users\Venky\Desktop\Loader\loader_data_change.ctl log=C:\Users\Venky\Desktop\Loader\changing_data.LOG

SQL*Loader: Release 11.2.0.1.0 - Production on Fri Jun 15 17:43:10 2018

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Commit point reached - logical record count 1

Commit point reached - logical record count 2





Scenario 10:

What's a par file and how do we use it?

This file is useful for keeping all the parameters that we would otherwise pass on the sqlldr command line.

eg:

We created the below "loader.par" parameter file with the following content.

userid=scott/tiger@venkydb
control=C:\Users\Venky\Desktop\Loader\loader_data_change.ctl
log=C:\Users\Venky\Desktop\Loader\changing_data.LOG


We reuse the above control file and data file; the only change is how we invoke the loader, as below.

Loading text file data using SQL Loader:

C:\Users\Venky>sqlldr parfile=C:\Users\Venky\Desktop\Loader\loader.par

SQL*Loader: Release 11.2.0.1.0 - Production on Fri Jun 15 18:10:33 2018

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Commit point reached - logical record count 2