Learning Objectives: In this module of Informatica Training, we will talk about the different Informatica Products available and provide an overview of the Informatica PowerCenter Product: its Architecture, Terminology, Tools GUI, Mappings, Transformations, Sessions, Workflows, and Workflow Monitor. We will also discuss the installation of Informatica in this module.
Topics:
  • Informatica & Informatica Product Suite
  • Informatica PowerCenter as ETL Tool
  • Informatica PowerCenter Architecture
  • Component-based development techniques
Learning Objectives: In this module of Informatica Certification Training, we will discuss different Data Integration Concepts. You will be introduced to the concepts – Data Profiling, Data Quality Management, ETL and ETL architecture, and Data Warehousing.
Topics:
  • Data Integration Concepts
  • Data Profile and Data Quality Management
  • ETL and ETL architecture
  • Brief on Data Warehousing
Learning Objectives: In this module of Informatica Certification Training, you will learn about different development units of PowerCenter and how they can be used to create a simple mapping and transformations.
Topics:
  • Visualize PowerCenter client tools
  • Data Flow
  • Create & execute mapping
  • Transformations & their usage
Learning Objectives: In this module of Informatica Certification Training, you will learn about the development units of Informatica PowerCenter (Workflow Manager & Monitor) and how they are used to create different tasks and workflows.
Topics:
  • PowerCenter Workflow Manager
  • Flow within a Workflow
  • Reusability & Scheduling in Workflow Manager
  • Components of Workflow Monitor
  • Workflow Task and job handling
Learning Objectives: Through this module of Informatica Certification Training, you will learn about Advanced Transformation techniques, which will equip you to deal with advanced PowerCenter transformations such as Java and XML. We will also discuss the various Error Handling, Transaction Processing & Reusability features available in Informatica.
Topics:
  • Advanced transformations (Java, SQL, Normalizer)
  • XML File Processing & Transaction Processing
  • Error handling features
  • Cleaning the data (Advanced Functions, Regular Expressions)
  • Reusability features
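Informatica's advanced string functions (e.g. REG_REPLACE) apply regular expressions to cleanse data. The same idea can be sketched in plain Python; the patterns and field formats below are illustrative, not taken from any real mapping:

```python
import re

def clean_phone(raw: str) -> str:
    """Strip everything except digits, much like an expression
    such as REG_REPLACE(PHONE, '[^0-9]', '') would."""
    return re.sub(r"[^0-9]", "", raw)

def collapse_whitespace(raw: str) -> str:
    """Normalize runs of whitespace to a single space and trim."""
    return re.sub(r"\s+", " ", raw).strip()

print(clean_phone("(555) 123-4567"))                  # 5551234567
print(collapse_whitespace("  too   many  spaces "))   # too many spaces
```

The same patterns drop straight into an Expression transformation's port expressions when using Informatica's regex functions.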
Learning Objectives: In this module of Informatica Training, we will discuss advanced ETL scenarios using Informatica. You will be introduced to the concepts of Error Handling, Partitioning, Pushdown Optimization, Incremental Aggregation, Constraint-Based Loading, Target Load Order Groups, CDC (Change Data Capture), etc. We will also discuss creating Data Warehouse-related processes such as SCD, Fact, and Dimension loading.
Topics:
  • Changed Data Capture, Incremental Aggregation, Constraint Based Loading etc.
  • Advanced techniques – Flat File, XML File
  • Loading Dimensional Tables (SCD-1, SCD-2, SCD-3)
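A Type 2 SCD keeps history by expiring the current row and inserting a new version whenever a tracked attribute changes. A minimal in-memory sketch of that logic, with illustrative column names and dates:

```python
from datetime import date

def apply_scd2(dimension, incoming, today=date(2024, 1, 1)):
    """dimension: list of dicts with keys id, city, eff_date, end_date.
    end_date is None for the current version of each id."""
    current = {row["id"]: row for row in dimension if row["end_date"] is None}
    for rec in incoming:
        cur = current.get(rec["id"])
        if cur is None:
            # brand-new member: insert its first version
            dimension.append({"id": rec["id"], "city": rec["city"],
                              "eff_date": today, "end_date": None})
        elif cur["city"] != rec["city"]:
            # attribute changed: expire the old row, insert a new version
            cur["end_date"] = today
            dimension.append({"id": rec["id"], "city": rec["city"],
                              "eff_date": today, "end_date": None})
        # unchanged rows are left alone
    return dimension

dim = [{"id": 1, "city": "Pune", "eff_date": date(2020, 1, 1), "end_date": None}]
apply_scd2(dim, [{"id": 1, "city": "Mumbai"}, {"id": 2, "city": "Delhi"}])
# dim now holds the expired Pune row plus current Mumbai and Delhi rows
```

In PowerCenter the same branching is typically built with a Lookup on the dimension, an Expression to compare attributes, and a Router feeding update and insert targets.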

Enterprise data warehousing refers to creating the organization's data at a single point of access. A single source store can give the server a global view of the data, and periodic analysis can be run on that same source. This gives better results, but the time required is high.

A database consists of a set of logically related data and is normally small in size compared to a data warehouse. A data warehouse, by contrast, holds assortments of all sorts of data, from which data is extracted only according to the customer's needs. A data mart, in turn, is a set of data designed to cater to the needs of a particular domain; for instance, an organization may keep separate chunks of data for its different departments, i.e. sales, finance, marketing, etc.

When all related relationships and nodes are covered by a sole organizational point, it is called a domain. Data management can be improved through this.

The repository server controls the complete repository, which includes tables, charts, various procedures, etc. Its main function is to ensure repository integrity and consistency. A powerhouse server, on the other hand, governs the execution of various processes among the factors of the server's database repository.

There can be any number of repositories in Informatica, but ultimately it depends on the number of ports.

Partitioning a session means splitting it into separate execution sequences within the session. Its main purpose is to improve the server's operation and efficiency. Transformations, including extractions and the other outputs of individual partitions, are carried out in parallel.
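The effect of partitioning — the same transformation logic applied to separate slices of the data in parallel — can be sketched with a Python thread pool; the transformation here is a stand-in, not PowerCenter code:

```python
from concurrent.futures import ThreadPoolExecutor

def transform(row):
    # stand-in for the pipeline's transformation logic
    return row * 10

rows = list(range(8))
# split the source rows into two "partitions" (round-robin style)
partitions = [rows[0::2], rows[1::2]]

with ThreadPoolExecutor(max_workers=2) as pool:
    results = pool.map(lambda part: [transform(r) for r in part], partitions)

merged = sorted(x for part in results for x in part)
print(merged)  # [0, 10, 20, 30, 40, 50, 60, 70]
```

Each partition runs the full pipeline logic independently, and the results are merged at the target, which is what lets the Integration Service use more CPU for the same session.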

Command tasks at the session level can be used to create indexes after the load process. Index-creation scripts can be aligned with the session's workflow or the post-session implementation sequence. Moreover, this type of index creation cannot be controlled at the transformation level after the load process.

A session is a set of instructions that must be executed to move data from a source to a target. A session can be run using the Session Manager or the pmcmd command. Batch execution can be used to combine session executions, either serially or in parallel; batches can contain different sessions running in parallel or in a serial manner.

One can group any number of sessions, but migration is easier when the number of sessions in a batch is smaller.

A mapping variable is a value that changes during the session's execution. Upon completion, the Informatica server stores the final value of the variable, and it is reused when the session restarts. Values that do not change during the session's execution are called mapping parameters. The mapping procedure explains mapping parameters and their usage; values are assigned to these parameters before the session starts.
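The difference can be sketched with a toy model in Python. The `$$MAX_ID` name mimics Informatica's `$$` variable naming, and a plain dict stands in for the repository that persists variable values between runs; none of this is real Informatica API:

```python
class Session:
    """Toy model: a mapping parameter stays fixed for the whole run,
    while a mapping variable's final value is persisted and reused
    the next time the session starts."""
    def __init__(self, repository):
        self.repository = repository  # stands in for the Informatica repository

    def run(self, rows, load_date):  # load_date acts like a mapping parameter
        max_id = self.repository.get("$$MAX_ID", 0)  # mapping variable
        processed = [r for r in rows if r["id"] > max_id]
        for r in processed:
            max_id = max(max_id, r["id"])  # SETMAXVARIABLE-style update
        self.repository["$$MAX_ID"] = max_id  # persisted at session end
        return processed

repo = {}
s = Session(repo)
first = s.run([{"id": 1}, {"id": 2}], "2024-01-01")
second = s.run([{"id": 2}, {"id": 3}], "2024-01-02")
# the second run skips id 2 because $$MAX_ID was persisted as 2
```

This persist-and-reuse behavior is exactly what makes mapping variables useful for incremental extraction, whereas a parameter would hold the same value every run until redefined.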

Learning Objectives: In this module of Informatica Training, you will understand the features Informatica provides to debug, troubleshoot, and handle errors, and learn the inner workings and responsibilities of the operational role. Various PowerCenter recovery options for tasks and workflows will be discussed, along with recommended best practices for the operations process.
Topics:
  • PowerCenter Error
  • Basic troubleshooting methodology
  • Debugger
  • Workflow and Session logs to diagnose errors
  • Connection Errors & Network errors
  • Recovery scenarios & mechanisms
  • Configure workflow & sessions for recovery
  • High Availability
  • PowerCenter Environment
Learning Objectives: This module of Informatica Certification Training highlights the performance aspects of Informatica PowerCenter components and their efficient use. We will also discuss the best practices suggested by Informatica for optimum performance of your ETL process.
 
Topics:
  • Performance Tuning Methodology
  • Mapping design tips & tricks
  • Caching & Memory Optimization
  • Partition & Pushdown Optimization
  • Design Principles & Best Practices
Learning Objectives: This module of Informatica Training will introduce you to the usage and components of PowerCenter repository manager. You will also learn to migrate and manage the repository effectively.
Topics:
  • Repository Manager tool (functionalities, create and delete, migrate components)
  • PowerCenter Repository Maintenance
Learning Objectives: This module of Informatica Certification Training will give you an overview of the PowerCenter Administration Console and teach you to recognize and explain integration and repository service properties. We will also discuss the command line utilities and use them to manage the domain and repository, and to start and control workflows.
Topics:
  • Features of PowerCenter 10
  • Overview of the PowerCenter Administration Console
  • Integration and repository service properties
  • Services in the Administration Console (services, handle locks)
  • Users and groups
Learning Objectives: This module of Informatica Certification Course deals with the command line part of Informatica PowerCenter. You will learn to automate tasks from the command line as well.
Topics:
  • Infacmd, infasetup, pmcmd, pmrep
  • Automate tasks via command-line programs
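A typical pmcmd invocation can be scripted, for example from Python. The service, domain, credentials, folder, and workflow names below are placeholders for a real environment, and on a real server the assembled command would be handed to `subprocess.run`:

```python
def start_workflow(service, domain, user, password, folder, workflow):
    """Build a pmcmd startworkflow command line.
    All connection values here are placeholders, not a real environment."""
    cmd = ["pmcmd", "startworkflow",
           "-sv", service, "-d", domain,
           "-u", user, "-p", password,
           "-f", folder, "-wait", workflow]
    # on a server with Informatica installed:
    #   subprocess.run(cmd, check=True)
    return cmd

cmd = start_workflow("IS_DEV", "Domain_Dev", "admin", "secret",
                     "SALES", "wf_load_sales")
print(" ".join(cmd))
```

Wrapping pmcmd this way lets an external scheduler or orchestration script start workflows, wait for completion (`-wait`), and react to the exit code.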
Learning Objectives: This module of Informatica Certification Training will give you the detailed coverage of the architectural aspects of Informatica as a whole and PowerCenter in particular.
Topics:
  • Informatica 10 Architecture
  • Application services
  • Buffer memory
  • Connectivity among the tools
  • Connection Pooling

Following are the features of a complex mapping:

  • Difficult requirements
  • A large number of transformations
  • Complex business logic

One can find out whether a session is correct or not, without connecting the session, with the help of the debugging option.

Yes, one can, because a reusable transformation does not contain any mapplet or mapping.

Aggregator transformations process data in chunks of instructions during each run, storing intermediate values in local buffer memory. If extra memory is required, the aggregator provides additional cache files for storing the transformation values.
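The role of the aggregator's cache can be sketched as an incremental group-by, where a dictionary stands in for the index and data caches; the field names are illustrative:

```python
def aggregate(rows, cache=None):
    """Incrementally sum 'amount' per 'dept'; the cache dict plays the
    role of the aggregator's cache carried between blocks of rows."""
    if cache is None:
        cache = {}
    for row in rows:
        cache[row["dept"]] = cache.get(row["dept"], 0) + row["amount"]
    return cache

cache = aggregate([{"dept": "A", "amount": 10}, {"dept": "B", "amount": 5}])
cache = aggregate([{"dept": "A", "amount": 7}], cache)  # next block reuses the cache
print(cache)  # {'A': 17, 'B': 5}
```

Because only the running totals are kept, memory stays proportional to the number of groups rather than the number of rows, which is the same reasoning behind incremental aggregation in PowerCenter.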

Lookup transformations are transformations that have access rights to an RDBMS-based data set. The server makes access faster by using lookup tables to look at explicit table data or the database. The final data is obtained by matching the lookup condition for all lookup ports delivered during the transformation.

The dimensions that are utilized to play diversified roles while remaining in the same database domain are called role-playing dimensions.

Repository reports are generated by the Metadata Reporter. Since it is a web application, there is no need for SQL or any other transformation.

The types of metadata include source definitions, target definitions, mappings, mapplets, and transformations.

When data moves from one code page to another, no data loss can occur provided both code pages have the same character sets: all the characters of the source page must be available in the target page. If all the characters of the source page are not present in the target page, the target is a subset, and data loss will definitely occur during transformation, because the two code pages are not compatible.
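The subset case is easy to demonstrate in Python by treating character encodings as code pages: Latin-1 contains every character used here, while ASCII does not:

```python
source = "café"  # 'é' exists in Latin-1 but not in ASCII

# compatible code pages: Latin-1 covers every character used, so no loss
roundtrip = source.encode("latin-1").decode("latin-1")
assert roundtrip == source

# incompatible code pages: ASCII lacks 'é', so the character is lost
lossy = source.encode("ascii", errors="replace").decode("ascii")
print(lossy)  # caf?
```

This is exactly the superset rule PowerCenter enforces between connected code pages: the target must contain at least the source's character set, or conversion becomes lossy.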

A connected lookup takes its inputs directly from other transformations in the pipeline. An unconnected lookup does not take inputs directly from other transformations, but it can be used in any transformation and can be invoked as a function using the :LKP expression. So an unconnected lookup can be called multiple times in a mapping.
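The unconnected case can be sketched as a plain function, mirroring how a :LKP expression invokes the lookup on demand; the table contents and port names below are made up:

```python
# lookup table keyed on the lookup condition's port (here: a customer id)
LOOKUP = {101: "Gold", 102: "Silver"}

def lkp_customer_tier(customer_id, default="Unknown"):
    """Plays the role of an unconnected lookup invoked via :LKP --
    callable from any expression, as many times as needed."""
    return LOOKUP.get(customer_id, default)

# the same lookup reused from two different "expressions" in one mapping
row = {"cust": 101, "referrer": 999}
row["tier"] = lkp_customer_tier(row["cust"])          # Gold
row["ref_tier"] = lkp_customer_tier(row["referrer"])  # Unknown
```

A connected lookup, by contrast, would sit in the pipeline itself and receive its input ports directly from the upstream transformation, returning its output ports downstream once per row.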