Friday, September 23, 2011

Use of nls_date_language parameter in Oracle

There are some problems where theoretical knowledge alone is not enough; you have to live through the situation. Here is an example.

You have an application whose users are spread all over the world. The application processes incoming files every day, and each day it handles a different set: on Monday it handles file A, on Tuesday file B, on Wednesday file C, and so on.

The issue we faced was that in India, Germany, and France, the file picked up by the application on a given day was not the same as the file picked up in the US. We checked the date at both ends and it was the same. So what was fishy? Read on...
Now the question is: how do you identify the day of the week? With a simple SQL query, of course:

 select to_char(sysdate,'D') from dual;

Looks pretty simple, and the result seems obvious. Or is it?

We normally count Monday as the first day of the week and Sunday as the last. So if today is 7-Jan-2009, a Wednesday, the query above should return 3, i.e. the third day of the week.

But it does not end there. Please go through the queries below.

First I checked the nls_territory parameter of my database, which came back as AMERICA.

SQL> connect
Enter user-name: sys as sysdba
Enter password:
Connected.
SQL> show parameter nls_territory;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
nls_territory                        string      AMERICA
SQL>
SQL> select to_char(sysdate,'D') from dual;

T
-
4


The query above gave me 4, so I verified the sysdate.

SQL>  select to_char(sysdate, 'dd-mon-yyyy HH:MI:SS AM') from dual;
TO_CHAR(SYSDATE,'DD-MON'
-----------------------
07-jan-2009 01:26:14 AM


SQL> select to_char(sysdate,'D') from dual;
T
-
4

SQL>

SQL> select to_char(sysdate, 'Day') from dual;
TO_CHAR(S
---------
Wednesday

SQL>

So it showed me that the date is 7-Jan-2009, a Wednesday, and the day of the week is 4.

Now I changed the nls_territory parameter value to FRANCE.

SQL> ALTER SESSION SET NLS_TERRITORY = France;
Session altered.

Then I verified the same values again:

SQL> select to_char(sysdate,'D') from dual;
T
-
3


SQL> select sysdate from dual;
SYSDATE
--------
07/01/09


SQL> select to_char(sysdate, 'dd-mon-yyyy HH:MI:SS') from dual;
TO_CHAR(SYSDATE,'DD-'
--------------------
07-jan-2009 01:22:06


SQL> select to_char(sysdate, 'Day') from dual;
TO_CHAR(S
---------
Wednesday


What does this mean?

For 7-Jan-2009, a Wednesday, with AMERICA as the territory the day of the week is 4, but with FRANCE as the territory it is 3. Quite weird, isn't it?


We can dig further using nls_date_language:


SQL> alter session set nls_date_language=GERMAN;
Session altered.

SQL> select to_char(sysdate, 'Day') from dual;
TO_CHAR(SY
----------
Mittwoch


SQL> alter session set nls_date_language=AMERICA;
ERROR:
ORA-12705: Cannot access NLS data files or invalid environment specified


SQL> alter session set nls_date_language=AMERICAN;
Session altered.

SQL> select to_char(sysdate, 'Day') from dual;
TO_CHAR(S
---------
Wednesday

SQL>


This shows that even if we use 'Day' instead of 'D', the results differ between the two settings, i.e. between the US and Germany.

Now try the following, with the session still set to GERMAN:

SQL> alter session set nls_date_language = GERMAN;
Session altered.

The solution to this problem is to always pin the NLS date language inside the to_char call itself, using its optional third argument:

SQL> select to_char(sysdate, 'Day', 'nls_date_language = GERMAN') from dual;
TO_CHAR(SY
----------
Mittwoch


SQL> select to_char(sysdate, 'Day', 'nls_date_language = AMERICAN') from dual;
TO_CHAR(S
---------
Wednesday

SQL>

Yes, this works.

So the rule of thumb is: wherever you are located, if you are working with dates across different time zones and territories, be explicit when forming the query and use the same nls_date_language in the query everywhere, as shown above.

As for why the week starts on Sunday in the US and on Monday in most of the rest of the world: that is simply a regional convention, and it is exactly what NLS_TERRITORY encodes.
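If you need a day-of-week number that does not depend on NLS settings at all, one trick (a sketch of my own, not part of the investigation above) is to measure the offset from the start of the ISO week. TRUNC(date, 'IW') always returns the Monday of that week, regardless of NLS_TERRITORY:

-- ISO day of week: 1 = Monday ... 7 = Sunday, in every territory
select trunc(sysdate) - trunc(sysdate, 'IW') + 1 as iso_day_of_week
from dual;

For Wednesday 7-Jan-2009 this returns 3 no matter how the session is configured.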



Logic behind Number to Date conversion in Excel....



Many times we have seen our numbers being displayed in date format in Excel; we simply change the cell format and read them as numbers, as required.


Today, while going through the code, I realized I needed to understand the logic behind it.


Our source files come in Excel format, and the date column (say, Purchase Date) arrives in various formats, e.g. 1-Jan-2011, 01-01-2011, 01-January-2011, 01/01/2011, 1-Jan-11 and 40544. All the formats are readable except the last one, 40544.


The question for me was: how can I convert this 40544 to a date? How will my code know that this is a serial number, and how should I convert it? Fortunately, the logic to decode it was available in another mapping.


How do they do it?
They first take the length of the string (it is a date, but it arrives on a string port), then validate it against all the formats mentioned above (except the number format). Finally, if nothing matched and the length is less than 6 - I am not sure why they check for less than 6; perhaps they wanted to stick to dates around the 21st century - they apply the logic below.


If we type 1 into an Excel cell as a number and convert the cell to date format, it appears as 1/1/1900.


That is, Excel treats the number 1 as 1-Jan-1900, so 2 becomes 2-Jan-1900. The logic, then, is to add that many days (less one) to 1-Jan-1900: a serial of 40544 means 40543 days after 1-Jan-1900. But subtracting 1 is not always enough, because Excel wrongly treats 1900 as a leap year and counts a 29-Feb-1900 that never existed. For any date after 28-Feb-1900, subtracting 2 makes the logic work.


Below query will give the results in oracle --


select to_char(to_date('1-Jan-1900','dd-mon-yyyy') + 40544 - 2,'dd-mon-yyyy')  from dual


The query above gives the result 01-jan-2011, which matches the Excel sheet.
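Note that Excel's phantom leap day means the minus-2 correction only applies to serials of 61 and above (1-Mar-1900 onward); for earlier serials only 1 should be subtracted, and serial 60 maps to the nonexistent 29-Feb-1900. A self-contained sketch of my own (not the mapping's actual code) covering both ranges:

select case
         when 40544 >= 61 then to_date('1-Jan-1900','dd-mon-yyyy') + 40544 - 2
         else to_date('1-Jan-1900','dd-mon-yyyy') + 40544 - 1
       end as excel_serial_as_date
from dual;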

In Informatica the same logic, including the minus-2 correction, is:

ADD_TO_DATE(TO_DATE('01/01/1900', 'dd/mm/yyyy'), 'D', TO_INTEGER(NAME) - 2)

ADD_TO_DATE takes 3 inputs:
1. The date you want to add to.
2. What you want to add: D/DD/DAY means days, M/MM means months and Y/YY means years (see the Informatica help for further information).
3. How much you want to add.

Wednesday, September 21, 2011

XML Source

Today, after a discussion with my PM, a question came to mind: how does this work exactly? Although the "Web Services Consumer" transformation cannot be created on my machine because of a license issue, I had a number of things to check for the XML transformation.

To start with, I have taken an XML source.

To create an XML source, you need an XML file. I took one from http://www.w3schools.com/xml/xml_tree.asp:

<?xml version="1.0" encoding="ISO-8859-1"?>
<note>
  <to>Tove</to>
  <from>Jani</from>
  <heading>Reminder</heading>
  <body>Don't forget me this weekend!</body>
</note>

I saved it on a local drive and imported the XML source definition. While importing, you get the options below.
1. You will come across a screen like this at the time of importing an XML definition. Keep everything at its default.

Sometimes the last option, "How would you like to generate names for the XML Columns", defaults to the first choice; select the second choice instead and tick all the check boxes mentioned.

After clicking OK, you will get a screen asking "How would you like the XML Views to be generated?", as shown below.

Choose either Entity Relationship or Hierarchy Relationship; the structure will be the same in this case. The structure may differ for the other options, or when the XML is of a more complex nature.

Once you import the definition successfully, it will look like this.
Here the structure got pivoted: note is the main node and became the primary key of the XML definition, while to, from, heading and body became ports of the transformation, i.e. columns of a table. So the outer tag ultimately became the row and the inner tags became the columns.
We will connect this XML source to a CSV file target.

Below is the structure of a mapping 

The structure is pretty simple; the only point of caution is the connection definition at the session level. You need to define the XML source and target file paths carefully - please see below.
As in the image, we have given values for 2 parameters:
Source File Directory = D:\InformaticaFiles\
Source File Name = myXML.xml
In the same way, you need to specify the directory and name of the target file.

Once you run the mapping, the target file will have one row inserted, as below.

Below is the output of the target file. Note: when generating the target, create a dummy CSV file with 5 columns, each of size 30.

=================================================================


Now we added 2 more pipelines, so there are now 3 pipelines in total. The first has the normalized XML view, the second the de-normalized view, and the last one was created with the "Do not generate XML View" option (we will discuss how to create an XML source that way later). If you look at all 3 source definitions, the first 2 have one extra port, XPK_X_Note, which is also a primary key; that port is missing from the 3rd definition.


When we defined the same target file for all 3 targets, only 1 record got inserted - possibly every pipeline overwrote the same record. So we defined 3 separate files, one per target; the first 2 files got a unique value inserted for the XPK_X_Note key, while the third file is missing that column.


The first port - for the first 2 sources, of course - generates a unique number on every run. In this case the second pipeline did not exist for the first few runs, so the second XML source's count lags slightly behind the first. What we had to find out next: if we disconnect from the repository, does that value persist?

Hmm. We disconnected from the repository, reconnected, and ran the workflow. Conclusion: the values are persistent, probably saved in the database against the workflow runs. I am not sure how to reset these values.



One more experiment I did: I changed the XML source file as below.


<?xml version="1.0"?>
<note>
    <to>Tove</to>
    <from>Jani</from>
    <heading>Reminder</heading>
    <body>Don't forget me this weekend!</body>
    <to>staksale</to>
    <from>magogate</from>
    <heading>Reminder</heading>
    <body>Meet tomorrow morning at 10</body>
    <to>Tove</to>
    <from>asjaiwa</from>
    <heading>magogate</heading>
    <body>Meet tomorrow morning at 10</body>
</note>



Now there are 3 records in total inside <note>, but when we run the workflow, it saves only the first record, the one from Jani.

This means that our XML source is defined to read only the structure below:



<note>
  <to>Tove</to>
  <from>Jani</from>
  <heading>Reminder</heading>
  <body>Don't forget me this weekend!</body>
</note>


If you want to insert a number of records, you need to pass that many XML files separately. That is the reason people use the XML Generator transformation: you just pass your input values to it, and it generates the XML dynamically and forwards it on.
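Another way out, if you are free to change the file layout, is to make <note> a genuinely repeating element under a new root and re-import the source definition from that file; the repeating element then maps to one row per occurrence. The <notes> wrapper below is my own invention, not something from the original files:

<?xml version="1.0"?>
<notes>
    <note>
        <to>Tove</to>
        <from>Jani</from>
        <heading>Reminder</heading>
        <body>Don't forget me this weekend!</body>
    </note>
    <note>
        <to>staksale</to>
        <from>magogate</from>
        <heading>Reminder</heading>
        <body>Meet tomorrow morning at 10</body>
    </note>
</notes>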

Now we changed the XML files as below


<?xml version="1.0"?>
<note>
    <to>staksale</to>
    <from>magogate</from>
    <heading>Reminder</heading>
    <body>Meet tomorrow morning at 10</body>
    <to>Tove</to>
    <from>Jani</from>
    <heading>Reminder</heading>
    <body>Don't forget me this weekend!</body>   
    <to>Tove</to>
    <from>asjaiwa</from>
    <heading>magogate</heading>
    <body>Meet tomorrow morning at 10</body>
</note>



This time the mapping picked up the first record, the one from staksale, and you can see it got inserted into the target file.


If you want to insert all 3 records, define 3 separate files, one per record, and give them as input.



Monday, September 19, 2011

Changing data types of columns...

Today I came across one good example.
You have a table, say ABC, with 4 columns of datatype varchar2(200).
The requirement is to convert the datatypes of these columns to NUMBER. The table already contains data.
How will you change the datatypes of these columns?
Note: you cannot create a backup table in production.
----------------
CREATE TABLE ABC(COL1 VARCHAR2(200), COL2 VARCHAR2(200), COL3 VARCHAR2(200), COL4 VARCHAR2(200));

SET DEFINE OFF;
Insert into HR.ABC
   (COL1, COL2, COL3, COL4)
 Values
   ('1', '1', '1', '1');
Insert into HR.ABC
   (COL1, COL2, COL3, COL4)
 Values
   ('2', '2', '2', '2');
Insert into HR.ABC
   (COL1, COL2, COL3, COL4)
 Values
   ('3', '3', '3', '3');
COMMIT;
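The obvious first attempt is a direct ALTER TABLE ... MODIFY, but Oracle rejects that once the column holds data. A quick sketch of what happens:

-- Fails because COL1 already contains rows:
-- ORA-01439: column to be modified must be empty to change datatype
ALTER TABLE ABC MODIFY (COL1 NUMBER);

Hence the workaround below: add new columns, copy the data across, drop the old columns, and rename.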


---------------

ALTER TABLE ABC ADD
(COL11 NUMBER, COL22 NUMBER, COL33 NUMBER, COL44 NUMBER);

---------------

UPDATE ABC SET COL44 = TO_NUMBER(COL4);

UPDATE ABC SET COL33 = TO_NUMBER(COL3);

UPDATE ABC SET COL22 = TO_NUMBER(COL2);

UPDATE ABC SET COL11 = TO_NUMBER(COL1);
----------------

ALTER  TABLE ABC DROP(COL1, COL2, COL3, COL4)
----------------

ALTER TABLE ABC RENAME column COL44 TO COL4

ALTER TABLE ABC RENAME column COL33 TO COL3

ALTER TABLE ABC RENAME column COL22 TO COL2

ALTER TABLE ABC RENAME column COL11 TO COL1


---------------

1. Here, you need to create dummy columns with the NUMBER datatype on ABC, because you cannot create a column with the same name again.
2. You converted the data from varchar to number using TO_NUMBER and updated the newly added columns.
3. You dropped the old columns.
4. You renamed the newly added columns back to the original names.
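To double-check the result, a quick look at the data dictionary (assuming the table is in your own schema) confirms the new datatypes:

select column_name, data_type
from user_tab_columns
where table_name = 'ABC';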
 

Saturday, September 17, 2011

Things to verify...

Lookup override AS clause and port map
Unconnected Lookup input
Session variables use
Indirect file read
What if a SELECT has more columns than the ports available at the Lookup and Source Qualifier?

XML of mapping, session, workflow
Mapping, session, workflow parameters and variables working
What all things can we parameterize?
Interview question on record counts - use of Aggregator and Joiner, second answer to the earlier question
Normalizer: want to transfer 2 columns as 2 rows, but all the other columns are the same
Performance issue related to birwarkar
In what order do we need to ORDER BY in a lookup override?
Auto option of memory...
Full outer join of the Joiner

Unix:
 grep "NK2" RDBJANERMM_BRK2_20110916_L1_20110917013839_risk.csv |  grep "USD" | cut -d"," -f3 | uniq

1) Command for listing only the files with some data =>
 find . -size +0c -ls | grep BFE2

2) Command for listing all the files, sorted by size =>
 ls -lrS | grep BFE2
3) To convert "," to ";" throughout a file =>
First open the file using => view temp.csv
 :%s/,/;/g
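If you would rather not open the file at all, a non-interactive alternative is sed (the file names here are just examples):

sed 's/,/;/g' temp.csv > temp_semicolon.csv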
4) To display the 12th column of a .csv file, for the lines of "risk.csv" that contain "EQUIP:ATFV" =>
 grep "EQUIP:ATFV" risk.csv | cut -d"," -f12
5) To view only directories =>
  ls -lrt | grep ^d
6) To delete rows 3 to 17 in the vi editor =>
 :3,17d
7) To delete a row in the vi editor =>
 press d twice with the cursor on the row to be deleted.

Use of a Sorter Transformation

Suppose you have a requirement where the source contains a resource id, the resource name, and the brand of mobile phone the resource uses.

On the target side you have 2 tables: resource_master and mobile_master.

In resource_master you keep resource_id, resource_name, and the mobileBrand_id of the phone the resource uses.

In mobile_master you keep the mobile brand name and its mobileBrandId, which resource_master references.

Mobile_master holds one unique entry per brand name and its id, so when you join mobile_master and resource_master you get each resource together with the name of the mobile they use.

Below are the db scripts which we will be using.



create table SourceTable(ResourceId number, ResourceName varchar2(200), MobileBrand varchar2(200));


Insert into HR.SOURCETABLE
   (RESOURCEID, RESOURCENAME, MOBILEBRAND)
 Values
   (1, 'staksale', 'I-Phone');
Insert into HR.SOURCETABLE
   (RESOURCEID, RESOURCENAME, MOBILEBRAND)
 Values
   (2, 'asjaiswa', 'nokia');
Insert into HR.SOURCETABLE
   (RESOURCEID, RESOURCENAME, MOBILEBRAND)
 Values
   (3, 'amjadhav', 'samsung');
Insert into HR.SOURCETABLE
   (RESOURCEID, RESOURCENAME, MOBILEBRAND)
 Values
   (3, 'sgakhar', 'nokia');
Insert into HR.SOURCETABLE
   (RESOURCEID, RESOURCENAME, MOBILEBRAND)
 Values
   (4, 'magogate', 'nokia');
Insert into HR.SOURCETABLE
   (RESOURCEID, RESOURCENAME, MOBILEBRAND)
 Values
   (5, 'pkhvrk', 'LG');
Insert into HR.SOURCETABLE
   (RESOURCEID, RESOURCENAME, MOBILEBRAND)
 Values
   (6, 'venkk', 'LG');
COMMIT;

create table MobileMaster(MobileBrand varchar2(200), MobileBrandId number);
create table ResourceMaster(ResourceId number, ResourceName varchar2(200), MobileBrandId number);

Shown alongside are the results we expect after running the mapping.

Below are the challenges:
1. Assigning a unique id to each mobile brand.
2. Passing the corresponding mobile brand id to both tables.




Below is the mapping logic we used to populate the records into the 2 separate target tables.

1. Here, after fetching the records from the source, we first pass them to a Sorter and sort them by mobile brand name.
2. Once the mobile brands are sorted, the next step is to assign a unique id to each brand name. For that we implemented the logic below.
To identify whether the brand name changed:
V_CHANGE = IIF(MOBILEBRAND = V_OLDMOBILE_BRAND, 0, 1)
V_OLDMOBILE_BRAND = MOBILEBRAND
To assign the unique id:
OLDVAL = IIF(V_CHANGE = 1, NEXTVAL, OLDVAL)
OUTVAL will be passed to both targets.

Now, the V_CHANGE variable will have the value 1 whenever the MOBILEBRAND value changes; otherwise it will always be 0.

Note: the V_CHANGE and V_OLDMOBILE_BRAND ports must appear in this order. If you move V_OLDMOBILE_BRAND above V_CHANGE, the result will be incorrect, because V_CHANGE would then compare against the current row's brand instead of the previous row's.

We need to pass a new value only when a new mobile brand name appears, which we know from the V_CHANGE variable. So if V_CHANGE is 1 we pass NEXTVAL from the sequence, otherwise the old value.

OLDVAL is kept as "IIF(V_CHANGE = 1, NEXTVAL, OLDVAL)": if a change is found in the brand name, a new sequence value is taken; otherwise the same old value is carried forward.

2. The second challenge is to pass the same value to the other target table. If we simply connect the brand name and brand id ports to the other target, it will contain repeated values.
So we put an Aggregator in between, aggregating on brand name. That way only 1 row per brand gets forwarded.
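For comparison, the same id assignment can be written in plain SQL with an analytic function. This is only an illustrative sketch against SourceTable, not what the mapping does:

select ResourceId,
       ResourceName,
       MobileBrand,
       dense_rank() over (order by MobileBrand) as MobileBrandId
from SourceTable;

DENSE_RANK over the sorted brand names plays the same role as the sorter-plus-sequence logic: equal brands share an id and each new brand gets the next one.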

Wednesday, September 7, 2011

How will you rename a table ?

I created a new table, say testTable:

create table testTable
(
    id number
);

Then I renamed it to RenamedTestTable:

rename testTable to RenamedTestTable

When you try to select data from the original table, it gives an error saying the table does not exist.

select *
from TestTable

When you select data from the renamed table, the command succeeds.

select *
from RenamedTestTable

Renaming works even if the table has data:

drop table RenamedTestTable

create table testTable
(
    id number
);


insert into testTable values(1);

rename testTable to RenamedTestTable


select *
from RenamedTestTable

Now the question is: what happens if the table has

1. An index defined on it
2. A synonym defined on it in another schema
3. Usage in a function or procedure
4. Usage in a view query

We will test these scenarios one by one.

1. If the table has an index defined on it...

drop table RenamedTestTable;
create table testTable
(
    id number
);
create unique index IdxTestTable
on testTable(id) ;
select *
from user_ind_columns
where index_name = upper('IdxTestTable');
insert into testTable values(1);
rename testTable to RenamedTestTable;
select *
from user_ind_columns
where index_name = upper('IdxTestTable');
select *
from RenamedTestTable;

So, if an index is defined on a table and you rename the table, nothing happens to the index; in the index metadata view, the table name entry simply changes.


2. If the table has a synonym defined...

I logged in as the sys user/schema and created a synonym as below.

select *
from hr.testTable

create synonym testTable for hr.testTable

select *
from testTable

select *
from all_synonyms
where synonym_name = upper('testTable')

When I renamed the table in the hr schema and fired the select statement as sys, I got the error "ORA-00980: synonym translation is no longer valid".

Since the synonym is created in the sys schema, nothing changed in the synonym entry itself; it still points at the old table name.
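The fix is simply to repoint the synonym; CREATE OR REPLACE SYNONYM makes it a one-liner (sketched with the names used above):

create or replace synonym testTable for hr.RenamedTestTable;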


3. If the table is used in a procedure/function...

I created table and procedure as below

Table Creation :
drop table RenamedTestTable

create table testTable
(
    id number
);

create unique index IdxTestTable
on testTable(id) ;

Procedure Creation :
create procedure TableTest
as

    vCount number;

begin

    select count(*) into vCount
    from testTable;
   
    dbms_output.put_line('Record Count Is ==> ' || vCount);

end;

Note: if you are using Toad, just select the whole code and press F5; the procedure will compile.
Also note that the procedure name (TableTest) and the table name (testTable) only look similar; they are different identifiers. Tables and procedures share a single namespace within a schema, so you could not create a table and a procedure with exactly the same name.

Running a procedure:
begin

TableTest;

end;

Turn dbms_output on in your tool; if you are using SQL*Plus, type "set serveroutput on".

You will get 0 as the result.

select *
from all_procedures
where  owner = 'HR'

If you check the proc in all_procedures, PROCEDURE_NAME is null and the proc's name is shown as OBJECT_NAME. That is because for a standalone procedure the name is the object name itself; PROCEDURE_NAME is populated for subprograms inside packages.

select *
from all_objects
where owner = 'HR'
and object_name like 'T%'
If you check in all_objects, it shows correctly: one row with object type TABLE and another with object type PROCEDURE, and the status of both the table and the procedure is VALID.

Now I will rename the table.
rename testTable to RenamedTestTable

Then I fired the query below:

select *
from all_objects
where owner = 'HR'
and object_name like 'T%'

The table vanished from the result set and the procedure's status became INVALID.

Just try to run the proc...
begin

TableTest;

end;

Note: to run the proc in Toad, just select the above code and press Ctrl+Enter.

Got an error as below
ORA-06550: line 3, column 1:
PLS-00905: object HR.TABLETEST is invalid
ORA-06550: line 3, column 1:
PL/SQL: Statement ignored

Here HR.TABLETEST refers to the procedure, not a table: the procedure is invalid because the testTable object it selects from no longer exists.
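Recreating the procedure against the new table name brings it back. A sketch, assuming you want the proc to follow the renamed table:

create or replace procedure TableTest
as
    vCount number;
begin
    -- updated to select from the renamed table
    select count(*) into vCount
    from RenamedTestTable;

    dbms_output.put_line('Record Count Is ==> ' || vCount);
end;
/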


4. If the table is used in a view...

drop table RenamedTestTable

create table testTable
(
    id number
);


create unique index IdxTestTable
on testTable(id)

create view testTableView
as
select * from testTable

select *
from all_views
where view_name = upper('testTableView')


select *
from all_objects
where object_name = upper('testTableView')

The view, as an object, has VALID status.

Now, I will rename the table

rename testTable to RenamedTestTable

Now the view has become invalid:
select *
from all_objects
where object_name = upper('testTableView')

I will try to select from the view now:
select *
from testTableView;
It gave the error [ORA-04063: view "HR.TESTTABLEVIEW" has errors].
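As with the procedure, redefining the view against the new name makes it valid again (again a sketch using the names above):

create or replace view testTableView
as
select * from RenamedTestTable;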
