Sunday 27 November 2011

UNIT TESTING IN INFORMATICA

Unit testing can be broadly classified into two categories:
Quantitative Testing
Validate your Source and Target
a) Ensure that your connectors are configured properly.
b) If you are using a flat file, make sure you have enough read/write permissions on the file share.
c) You need to document all the connector information.
Analyze the Load Time
a) Execute the session and review the session statistics.
b) Check the Read and Write counters and note how long the load takes.
c) Use the session and workflow logs to capture the load statistics.
d) You need to document all the load timing information.
Analyze the success rows and rejections.
a) Prepare customized SQL queries to check the source/target tables and perform record count verification (see the sketch after this list).
b) Analyze the rejections and build a process to handle them. This requires a clear business requirement on how to handle data rejections: do we need to reload, or reject and inform? Discussions are required and an appropriate process must be developed.
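A minimal sketch of record count verification queries, assuming hypothetical SRC_ORDERS (source), TGT_ORDERS (target), and ERROR_TABLE (captured rejections) objects; a non-zero result from the reconciliation query means rows are unaccounted for:

-- Source vs. target record counts (hypothetical table names)
SELECT COUNT(*) AS SRC_CNT FROM SRC_ORDERS;
SELECT COUNT(*) AS TGT_CNT FROM TGT_ORDERS;

-- Reconciliation: target rows + rejected rows should equal source rows
SELECT (SELECT COUNT(*) FROM SRC_ORDERS)
     - (SELECT COUNT(*) FROM TGT_ORDERS)
     - (SELECT COUNT(*) FROM ERROR_TABLE) AS UNACCOUNTED_ROWS
FROM DUAL;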
Performance Improvement
a) Network Performance
b) Session Performance
c) Database Performance
d) Analyze and, if required, define the Informatica and database partitioning requirements (see the sketch after this list).
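Item d) is mostly configuration work on the Informatica side; on the database side, a hedged sketch of what range partitioning of a target table might look like (all names here are hypothetical):

-- Hypothetical target table partitioned by load month, so parallel
-- Informatica session partitions can write to separate segments.
CREATE TABLE TGT_ORDERS_PART
(
  ORDER_ID NUMBER,
  ORDER_DT DATE,
  AMOUNT   NUMBER
)
PARTITION BY RANGE (ORDER_DT)
(
  PARTITION P_2011_10 VALUES LESS THAN (TO_DATE('01-11-2011','DD-MM-YYYY')),
  PARTITION P_2011_11 VALUES LESS THAN (TO_DATE('01-12-2011','DD-MM-YYYY')),
  PARTITION P_MAX     VALUES LESS THAN (MAXVALUE)
);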
Qualitative Testing
Analyze & validate your transformation business rules. More of functional testing.
e) You need review field by field from source to target and ensure that the required transformation logic is applied.
f) If you are making changes to existing mappings make use of the data lineage feature Available with Informatica Power Center. This will help you to find the consequences of Altering or deleting a port from existing mapping.
g) Ensure that appropriate dimension lookup’s have been used and your development is in Sync with your business requirements.
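One way to automate the field-by-field check in item a) is a MINUS comparison between the transformed source and the target; the SRC_CUSTOMER/TGT_CUSTOMER_DIM names and the UPPER(TRIM(...)) rule are hypothetical stand-ins for your own mapping logic:

-- Transformed source rows that are missing or different in the target
SELECT CUST_ID, UPPER(TRIM(CUST_NAME)) AS CUST_NAME  -- expected transformation
FROM   SRC_CUSTOMER
MINUS
SELECT CUST_ID, CUST_NAME
FROM   TGT_CUSTOMER_DIM;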

NOTE:
For more information, please refer to INFORMATICA REALTIME UNIT TESTING in the same blog (how to integrate more than one source).

 
  1. UNIT TEST CASES FOR LOAN_MASTER

FUNCTIONALITY_ID : STG_SCHM_DTLS_001
FIELD_NAME       : LOAN_TYPE_ID
DETAIL           : LOAN_TYPE_ID must be NOT NULL; the first 5 characters must be the alphabetic prefix INSCH, the last 11 characters must be numeric, and the total length must be 16.
VALUE PASSED     : INSCH00000000002
EXPECTED RESULT  : ACCEPT RECORD
ACTUAL RESULT    : RECORD ACCEPTED
PASS/FAIL        : PASS
REMARK           : -

FUNCTIONALITY_ID : STG_SCHM_DTLS_002
FIELD_NAME       : LOAN_TYPE_ID
DETAIL           : Reject when NULL, when the first 5 characters are not INSCH, when the last 11 characters are non-numeric, or when the length <> 16.
VALUE PASSED     : INSCP001000000002
EXPECTED RESULT  : REJECT RECORD
ACTUAL RESULT    : RECORD REJECTED
PASS/FAIL        : PASS
REMARK           : Record inserted into the reject file with an ERROR_ID, and ERROR_DETAILS into ERROR_TABLE.

FUNCTIONALITY_ID : STG_SCHM_DTLS_003
FIELD_NAME       : LOAN_COMPANY_ID
DETAIL           : LOAN_COMPANY_ID must be NOT NULL; the first 4 characters must be the alphabetic prefix INCO, the last 11 characters must be numeric, and the total length must be 15.
VALUE PASSED     : INCO00000000003
EXPECTED RESULT  : ACCEPT RECORD
ACTUAL RESULT    : RECORD ACCEPTED
PASS/FAIL        : PASS
REMARK           : -

FUNCTIONALITY_ID : STG_SCHM_DTLS_004
FIELD_NAME       : LOAN_COMPANY_ID
DETAIL           : Reject when NULL, when the first 4 characters are not INCO, when the last 11 characters are non-numeric, or when the length <> 15.
VALUE PASSED     : INSO00000060003
EXPECTED RESULT  : REJECT RECORD
ACTUAL RESULT    : RECORD REJECTED
PASS/FAIL        : PASS
REMARK           : Record inserted into the reject file with an ERROR_ID, and ERROR_DETAILS into ERROR_TABLE.

FUNCTIONALITY_ID : STG_SCHM_DTLS_005
FIELD_NAME       : START_DATE
DETAIL           : START_DATE must be a valid date.
VALUE PASSED     : 12/9/1988
EXPECTED RESULT  : ACCEPT RECORD
ACTUAL RESULT    : RECORD ACCEPTED
PASS/FAIL        : PASS
REMARK           : -

FUNCTIONALITY_ID : STG_SCHM_DTLS_006
FIELD_NAME       : START_DATE
DETAIL           : START_DATE must not be loaded when it is not a valid date.
VALUE PASSED     : 33FeB/88
EXPECTED RESULT  : REJECT RECORD
ACTUAL RESULT    : RECORD REJECTED
PASS/FAIL        : PASS
REMARK           : Record inserted into the reject file with an ERROR_ID, and ERROR_DETAILS into ERROR_TABLE.

FUNCTIONALITY_ID : STG_SCHM_DTLS_007
FIELD_NAME       : SCHEME_DESC
DETAIL           : SCHEME_DESC must be alphabetic.
VALUE PASSED     : AUTOMOBILE
EXPECTED RESULT  : ACCEPT RECORD
ACTUAL RESULT    : RECORD ACCEPTED
PASS/FAIL        : PASS
REMARK           : -

FUNCTIONALITY_ID : STG_SCHM_DTLS_008
FIELD_NAME       : SCHEME_DESC
DETAIL           : Reject when SCHEME_DESC is not alphabetic.
VALUE PASSED     : MOTO124
EXPECTED RESULT  : REJECT RECORD
ACTUAL RESULT    : RECORD REJECTED
PASS/FAIL        : PASS
REMARK           : Record inserted into the reject file with an ERROR_ID, and ERROR_DETAILS into ERROR_TABLE.

FUNCTIONALITY_ID : STG_SCHM_DTLS_009
FIELD_NAME       : PREMIUM_PER_LACS
DETAIL           : PREMIUM_PER_LACS must be numeric.
VALUE PASSED     : 5000
EXPECTED RESULT  : ACCEPT RECORD
ACTUAL RESULT    : RECORD ACCEPTED
PASS/FAIL        : PASS
REMARK           : -

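For example, test cases STG_SCHM_DTLS_001/002 above could be cross-checked with a query such as the following sketch (the staging table name STG_SCHM_DTLS is inferred from the functionality IDs):

-- Rows violating the LOAN_TYPE_ID rule: NOT NULL, prefix INSCH,
-- followed by 11 numeric characters, total length 16
SELECT *
FROM   STG_SCHM_DTLS
WHERE  LOAN_TYPE_ID IS NULL
   OR  NOT REGEXP_LIKE(LOAN_TYPE_ID, '^INSCH[0-9]{11}$');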

INFORMATICA 9.1 NEW FEATURES

Session log file rollover: You can limit the size of session logs for real-time sessions. You can limit the size by time or by file size. You can also limit the number of log files for a session.
Licensing

Enforcement of licensing restrictions. PowerCenter will enforce the licensing restrictions on the number of CPUs and repositories.
Lookup Transformation:
Cache updates. You can update the dynamic lookup cache based on the results of an expression: when the expression is true, you can add a row to or update a row in the lookup cache.

Database deadlock resilience: In previous releases, when the Integration Service encountered a database deadlock during a lookup, the session failed. Effective in 9.0, the session no longer fails; when a deadlock occurs, the Integration Service retries the last statement in the lookup. You can configure the number of retry attempts and the time period between attempts.

Multiple rows return: You can configure the Lookup transformation to return all rows that match a lookup condition. A Lookup transformation is an active transformation when it can return more than one row for any given input row.

SQL overrides for uncached lookups:
In previous versions you could create a SQL override for cached lookups only. Now you can create a SQL override for an uncached lookup as well, and you can include lookup ports in the SQL query, as in the example below.
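A minimal example of the kind of query you might place in the Lookup SQL Override property for an uncached lookup (the CUST_DIM table and its columns are hypothetical):

-- Override for an uncached Lookup transformation
SELECT CUST_DIM.CUST_KEY  AS CUST_KEY,
       CUST_DIM.CUST_NAME AS CUST_NAME
FROM   CUST_DIM
WHERE  CUST_DIM.ACTIVE_FLAG = 'Y'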


Mapping Architect for Visio:
New mapping objects. You can include the following objects in a mapping template:
a) Pipeline Normalizer transformation
b) Custom transformation
c) PowerExchange source definition
d) PowerExchange target definition
You can also create a mapping template from a mapping that contains these objects.

Shortcuts. You can configure a transformation to use a shortcut. You can create a mapping template from a mapping that contains shortcuts to reusable transformations.

SQL Transformation:

Auto-commit for connections. You can enable auto-commit for each database connection. Each SQL statement in a query defines a transaction; a commit occurs when the SQL statement completes or the next statement is executed, whichever comes first.

Exactly-once processing. The Integration Service provides exactly-once delivery of real-time source messages to the SQL transformation. If there is an interruption in processing, the Integration Service can recover without requiring the message to be sent again. To perform exactly-once processing, the Integration Service stores a set of operations for a checkpoint in the PM_REC_STATE table.

Passive transformation. You can configure the SQL transformation to run in passive mode instead of active mode. When the SQL transformation runs in passive mode, it returns one output row for each input row.

XML Transformation:

XML Parser buffer validation. The XML Parser transformation can validate an XML document against a schema. The XML Parser transformation routes invalid XML to an error port. When the XML is not valid, the XML Parser transformation routes the XML and the error messages to a separate output group that you can connect to a target.
Starting from version 9, the PowerCenter Administration Console is called Informatica Administrator.

Informatica Administrator (PowerCenter Administration Console). Effective in version 9.0, the PowerCenter Administration Console is renamed to Informatica Administrator. The Informatica Administrator has a new interface. Some of the properties and configuration tasks from the PowerCenter Administration Console have been moved to different locations in Informatica Administrator. The Informatica Administrator is expanded to include new services and objects.

Analyst Service. Application service that runs Informatica Analyst in the Informatica domain. Create and enable an Analyst Service on the Domain tab of Informatica Administrator. When you enable the Analyst Service, the Service Manager starts Informatica Analyst. You can open Informatica Analyst from Informatica Administrator.

Data Integration Service. Application service that processes requests from Informatica Analyst and Informatica Developer to preview or run data profiles and mappings. It also generates data previews for SQL data services and runs SQL queries against the virtual views in an SQL data service. Create and enable a Data Integration Service on the Domain tab of Informatica Administrator.

Model Repository Service. Application service that manages the Model repository. The Model repository is a relational database that stores the metadata for projects created in Informatica Analyst and Informatica Developer. The Model repository also stores run-time and configuration information for applications deployed to a Data Integration Service. Create and enable a Model Repository Service on the Domain tab of Informatica Administrator.

PowerExchange Listener Service. Manages the PowerExchange Listener for bulk data movement and change data capture. The PowerCenter Integration Service connects to the PowerExchange Listener through the Listener Service.

PowerExchange Logger Service. Manages the PowerExchange Logger for Linux, UNIX, and Windows to capture change data and write it to the PowerExchange Logger log files. Change data can originate from DB2 recovery logs, Oracle redo logs, a Microsoft SQL Server distribution database, or data sources on an i5/OS or z/OS system.

Connection management. Database connections are centralized in the domain. You can create and view database connections in Informatica Administrator, Informatica Developer, or Informatica Analyst. Create, view, edit, and grant permissions on database connections in Informatica Administrator.

Deployment. You can deploy, enable, and configure deployment units in Informatica Administrator. Deploy deployment units to one or more Data Integration Services. Create deployment units in Informatica Developer.

Monitoring. You can monitor profile jobs, scorecard jobs, preview jobs, mapping jobs, and SQL data services for each Data Integration Service. View the status of each monitored object on the Monitoring tab of Informatica Administrator.

Novalidate in Oracle

Child entries in Oracle without a parent reference
=========================================

Yes, this is possible in Oracle as well as in PostgreSQL.

Advantages:
------
1. It does not lock the table, so there is no outage.
2. It does not validate the existing child entries against the parent, so there are no performance issues.
3. This option is very useful when you are handling very big tables.

Step 1:
----------
CREATE TABLE PARENT(T INT PRIMARY KEY);

Step 2:
----------
Insert sample data

SQL> SELECT * FROM PARENT;

T
----------
1
2
3

Step 3:
----------
CREATE TABLE CHILD(T INT);

Insert sample data, including the values 4 and 5, which have no matching parent row.

SQL> SELECT * FROM CHILD;

T
----------
1
2
3
4
5

Step 4:
----------
SQL> ALTER TABLE CHILD ADD FOREIGN KEY(T) REFERENCES PARENT(T) ENABLE NOVALIDATE;

Step 5:
----------
SQL> SELECT * FROM CHILD;

T
----------
1
2
3
4
5

SQL> INSERT INTO CHILD VALUES(10);
INSERT INTO CHILD VALUES(10)
*
ERROR at line 1:
ORA-02291: integrity constraint (SYS.SYS_C006394) violated - parent key not
found

With ENABLE NOVALIDATE, the existing orphan child rows (4 and 5) are left in place unchecked, but all new DML is validated against the constraint, which is why this insert fails.
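For reference, a sketch of the PostgreSQL counterpart, where the same effect is achieved by adding the constraint as NOT VALID:

-- PostgreSQL: NOT VALID skips existing rows but checks new DML
ALTER TABLE child
  ADD CONSTRAINT child_t_fk FOREIGN KEY (t) REFERENCES parent (t) NOT VALID;

-- Later, after the orphan rows are cleaned up:
ALTER TABLE child VALIDATE CONSTRAINT child_t_fk;

On the Oracle side, the equivalent later step is ALTER TABLE CHILD MODIFY CONSTRAINT <constraint_name> VALIDATE.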

How COMMIT Works in PL/SQL Functions/Triggers

COMMIT in Functions/Triggers
+++++++++++++++++++

A single DML statement can fire triggers and call functions, so several DML operations may be bundled into one transaction. Committing inside a function or trigger can therefore break transaction atomicity.

The following test case in Oracle demonstrates this.

Step 1:
---------

SQL> CREATE TABLE COMMIT_TRIG_TEST_TABLE1(T INT);

Table created.

SQL> CREATE TABLE COMMIT_TRIG_TEST_TABLE2(T INT,CHECK (T NOT IN(0)));

Table created.

Step 2:
---------
CREATE OR REPLACE FUNCTION INSERT_DATA(NUM IN NUMBER)
RETURN VARCHAR2 IS
BEGIN
  EXECUTE IMMEDIATE 'INSERT INTO COMMIT_TRIG_TEST_TABLE1 VALUES('||NUM||')';
  RETURN 'INSERT OK';
END;
/

Step 3:
---------
CREATE OR REPLACE TRIGGER COMMIT_TRIG
BEFORE INSERT ON COMMIT_TRIG_TEST_TABLE1
FOR EACH ROW
BEGIN
  INSERT INTO COMMIT_TRIG_TEST_TABLE2 VALUES(:NEW.T);
END;
/

Step 4:
--------
However, a function that performs DML cannot be called from within a SQL statement (unless it runs as an autonomous transaction, as in Step 6 below). So use the method below to insert the data.

SQL> VAR V VARCHAR2(10);
SQL> EXEC :V :=INSERT_DATA(10);

PL/SQL procedure successfully completed.

SQL> PRINT V;

V
------------------
INSERT OK

Observe the INSERT statement in the INSERT_DATA function.

This INSERT is associated with two DML operations in one transaction:

1. The insert itself.

2. The INSERT INTO COMMIT_TRIG_TEST_TABLE2 performed in the trigger body.

As per atomicity (the A in ACID), the transaction as a whole must either commit or roll back.

+INSERT ON TABLE1 (Success)+ }

==> TRANSACTION is a success.

+INSERT ON TABLE2 (Success)+ }


SQL> SELECT * FROM COMMIT_TRIG_TEST_TABLE1;

T
----------
10

SQL> SELECT * FROM COMMIT_TRIG_TEST_TABLE2;

T
----------
10


That's why EXEC :V := INSERT_DATA(10) is successful.

Step 5
--------

SQL> DELETE FROM COMMIT_TRIG_TEST_TABLE1;

SQL> DELETE FROM COMMIT_TRIG_TEST_TABLE2;

SQL> exec :v :=insert_data(0);
BEGIN :v :=insert_data(0); END;
*
ERROR at line 1:
ORA-02290: check constraint (SCOTT.SYS_C009672) violated
ORA-06512: at "SCOTT.COMMIT_TRIG", line 3
ORA-04088: error during execution of trigger 'SCOTT.COMMIT_TRIG'
ORA-06512: at "SCOTT.INSERT_DATA", line 4
ORA-06512: at line 1

Here the transactions are as below:

+INSERT ON TABLE1 (Success)+ }

==> TRANSACTION is a failure.

+INSERT ON TABLE2 (Failure)+ }

As per the atomicity rule, the transaction must either succeed or fail as a whole; here, however, it contains both a success and a failure.

That's why the entire transaction is rolled back, so the INSERT on TABLE1 is rolled back as well.

SQL> SELECT * FROM COMMIT_TRIG_TEST_TABLE1;

no rows selected

SQL> SELECT * FROM COMMIT_TRIG_TEST_TABLE2;

no rows selected

Even if you use autonomous transactions, each of them must still satisfy this atomicity on its own.


Now, how do we insert the value 0 into the first table?

Step 6
--------

CREATE OR REPLACE FUNCTION INSERT_DATA(NUM IN NUMBER)
RETURN VARCHAR2 IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO COMMIT_TRIG_TEST_TABLE1 VALUES(NUM);
  COMMIT;
  RETURN 'INSERT OK';
END;
/

The trigger must be made to always commit: a plain COMMIT inside a trigger raises ORA-04092, so the trigger is declared as an autonomous transaction, and even when its insert fails it commits in the exception handler so that its autonomous transaction completes.

CREATE OR REPLACE TRIGGER COMMIT_TRIG
BEFORE INSERT ON COMMIT_TRIG_TEST_TABLE1
FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO COMMIT_TRIG_TEST_TABLE2 VALUES(:NEW.T);
  COMMIT;
EXCEPTION WHEN OTHERS THEN
  COMMIT;
END;
/

SQL> select insert_data(0) from dual;

INSERT_DATA(0)
--------------------------------------------------------------------------------
INSERT OK


SQL> select * from commit_trig_test_table1;

T
----------
0

SQL> select * from commit_trig_test_table2;

no rows selected

The insert into COMMIT_TRIG_TEST_TABLE1 was committed by the function's autonomous transaction, while the trigger's autonomous insert into COMMIT_TRIG_TEST_TABLE2 failed on the check constraint and its exception handler committed nothing; that is why table 2 stays empty.

Saturday 12 November 2011

ORACLE PIPELINED CONCEPTS

Pipelined functions are used to return multiple rows from a function, so that the function can be queried like a table.

Ex:-

Step 1:
-----------
CREATE TABLE DEPT(DEPTNO INT, DNAME VARCHAR2(10), LOC VARCHAR2(10));

Step 2:
-----------
Insert sample data.

Step 3:
-----------
Create an object type for the row-type casting.
CREATE OR REPLACE TYPE dept_row
AS OBJECT
(
  deptno INT,
  dname  VARCHAR2(20),
  loc    VARCHAR2(20)
);
/

Step 4:
-----------
Create a Return Type for the pipelined function.
CREATE OR REPLACE TYPE dept_table_type
AS TABLE OF dept_row;
/

Step 5:
-----------
CREATE OR REPLACE FUNCTION dept_pipe_function RETURN dept_table_type PIPELINED IS
BEGIN
  FOR rec IN (SELECT * FROM dept)
  LOOP
    PIPE ROW (dept_row(rec.deptno, rec.dname, rec.loc));
  END LOOP;
  RETURN;
END;
/


Step 6:
----------
SQL> select * from table(dept_pipe_function);

    DEPTNO DNAME                LOC
---------- -------------------- --------------------
        10 ACCOUNTING           NEW YORK
        20 RESEARCH             DALLAS
        30 SALES                CHICAGO
        40 OPERATIONS           BOSTON