Dice Resume CV Vishnuteja Kuruguntla

Vishnu Teja is a Senior Informatica Developer with 10 years of IT DWBI experience, specializing in ETL processes, data integration, and data migration for major companies. He has extensive expertise in Informatica tools, database management, and cloud platforms, along with a strong background in Agile methodologies and production support. His professional experience includes roles at Cigna, Northern Trust, Thomson Reuters, and Hitachi Solutions, where he has successfully implemented various data management projects.

SR INFORMATICA DEVELOPER

Vishnu Teja
Phone: +1 (614)-398-3610
Email: [email protected]

PROFESSIONAL SUMMARY:

 I have 10 years of IT DWBI experience in requirement analysis, design, development, and administration of ETL, data profiling, data integration, and data migration with global majors like Principal Financial Group, Lumen Technologies, Wells Fargo, and Bank of America.

 Around 9 years of experience in conceptualizing and delivering data warehouse projects across the full Software Development Life Cycle (SDLC), including business requirement gathering and analysis, system study, application design, development, testing, implementation, system maintenance, support, and documentation.
 Enterprise Data Warehousing and Data Marts: experience and expertise in ETL processes (Informatica PowerCenter 10.x/9.x/8.x/7.x), Informatica Data Quality 10.1/9.6.1, and Informatica Developer 10.1/9.6.1.
 Experience and expertise in ETL processes (Informatica PowerCenter): Mappings, Mapplets, Transformations, Workflow Manager, and Workflow Monitor. Also experienced in extraction/transformation/loading of legacy data to the data warehouse using Informatica Developer 10.1.
 Experienced in implementing standardization and cleansing activities using IDQ transformations.
 Experienced in profiling data sources and creating scorecards to monitor the data quality.
 Extensive knowledge in RDBMS concepts and Relational and Dimensional Data modeling.
 Expertise in implementing HTTP, XML, and Web Services transformations along with the Salesforce (SFDC) methodology.
 Worked on Informatica Cloud Services, Informatica Intelligent Cloud Services, Salesforce.com, and Salesforce
transformations.
 Experience in Informatica Administration.
 Experience in designing batch mechanisms, audit balancing and counting frameworks, and partitioning jobs at the ETL level.
 Expertise in identifying source and target bottlenecks and implementing performance tuning techniques at the mapping, session, and database levels.
 Proficiency in using Informatica Power Center tool to design Data Integration, Data Conversions, Data Migration
from Multiple Source systems.
 Extensively worked on databases such as Oracle 12c/11g/10g/9i/8i, Netezza, Teradata, and DB2, including SQL scripting on large database systems.
 Good knowledge of writing PL/SQL scripts such as procedures, packages, and functions.
 Basic knowledge of Informatica BDM.
 Extensively involved in writing UNIX shell scripts for source file validations, file upload/download to/from remote servers, source/target file archival/purge requirements, and running ETL batch jobs from UNIX using the pmcmd command (a minimal sketch follows this list).
 Testing skills include performing Unit, Regression, Integration, and Volume testing as well as Development,
Execution, and Maintenance of Test plans, Test specifications and Test scenarios.
 Experience in providing application support engagements and on-call (24x7) support for production problem resolution, resolving production issues in an expedited manner.
 Excellent cross-functional communicator, working as interface between business/clients with an ability to see the
end-to-end project functions while staying on top of the details.
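
The following is a minimal Python sketch of the kind of batch launcher described in the shell-scripting bullet above, invoking the Informatica pmcmd command line; the Integration Service, domain, folder, and workflow names, as well as the credential environment variables, are hypothetical placeholders rather than details taken from this resume.

#!/usr/bin/env python3
"""Minimal sketch: launch an Informatica workflow with pmcmd (placeholder names)."""
import os
import subprocess
import sys

def run_workflow(folder: str, workflow: str) -> int:
    """Invoke pmcmd startworkflow and block until the workflow finishes (-wait)."""
    cmd = [
        "pmcmd", "startworkflow",
        "-sv", "INT_SVC_DEV",            # hypothetical Integration Service name
        "-d", "Domain_DEV",              # hypothetical domain name
        "-u", os.environ["INFA_USER"],   # credentials assumed to come from the environment
        "-p", os.environ["INFA_PASS"],
        "-f", folder,
        "-wait", workflow,
    ]
    result = subprocess.run(cmd)
    return result.returncode

if __name__ == "__main__":
    # A non-zero exit code signals failure to the calling batch scheduler.
    sys.exit(run_workflow("FIN_DW", "wf_load_daily_claims"))

In a real batch setup the scheduler (for example AutoSys) would call such a wrapper and treat a non-zero exit code as a job failure.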

TECHNICAL SKILLS:

O/S: Windows NT/XP/98/2000, Unix, Linux
ETL & Data Quality Tools: Informatica PowerCenter 10.4/10.2.1/9.6/9.0.1/8.6/8.1, Informatica Data Quality 10.1/9.6.1, Informatica Developer 10.1/9.6.1, Informatica Analyst 10.1/9.6.1, Informatica PowerExchange 8.6, Informatica Cloud, ICRT, Secure Agent, PowerCenter Web Services, Informatica 9.6.1/9.0.1 Administration
Cloud Platforms: Informatica Cloud Secure Agent, Salesforce.com, Web Services, REST API
Databases, Languages & Tools: Oracle 12c/11g/10g/9i/8i, SQL, PL/SQL, SQL Server 2008 R2/2008/2012, MS Access, Netezza, DB2, Teradata, MySQL, Hive
Tools and Utilities: IBM Data Studio, SQL Server, Toad, SQL Developer, Teradata SQL Assistant, Putty, PVCS, ClearCase, HP Quality Center 11, GitHub, Postman, Workbench, AQT, MySQL, Hadoop
Methodologies: Star Schema, Snowflake Schema
Reporting Tools: Business Objects XI R2, MicroStrategy, WebFocus, Cognos

PROFESSIONAL EXPERIENCE:

Cigna | Bloomfield, Connecticut Sept 2022 – Present


Senior Informatica Developer

Responsibilities:
 Part of the Scrum team implementing the Customer Program Teams (CPT) application as an Agile delivery with 2-week sprints, which involves planning, tasking, executing, and successfully delivering a component with business value.
 Designed the ETL process, load strategy, and requirements specification after gathering the requirements from the end users.
 Developed several complex mappings using a variety of PowerCenter transformations, Mapping Parameters, Mapping Variables, Mapplets, and Parameter files in Mapping Designer using Informatica PowerCenter.
 Implemented Change Data Capture (CDC) using Power Exchange.
 Involved in creating the High-Level Design (HLD) and Low-Level Design (LLD) documents based on the Business Requirements (BR)/Functional Requirements (FR) and had them reviewed by customers along with detailed project timelines.
 Responsible for Impact Analysis, upstream/downstream impacts.
 Extensively used ETL and Informatica to load data from Hadoop, DB2 and flat files into the target Oracle 12c
database.
 Used most of the transformations such as the Source Qualifier, Expression, Aggregator, Filter, Connected and
Unconnected Lookups, Joiner, Update strategy, Transaction Control, XML Parser, HTTP and Stored procedure.
 Created mappings involving Slowly Changing Dimensions (SCD) Type 1 and Type 2 to implement business logic and capture deleted records in the source systems.
 Implemented REST API calls using HTTP (POST and GET methods) to push PJM data to DR HUB, reducing manual business intervention (a minimal sketch follows this list).
 Involved in creating Sqoop jobs to move data between Oracle database tables and Hadoop.
 Tuned the performance of mappings by following Informatica best practices and applied several methods to get best
performance by decreasing the run time of workflows.
 Involved in data design and modeling, system study, design, and development.
 Migration of code between the Environments and maintaining the code backups.
 Experience in production support, ETL executions and resolving prod issues.
 Worked on Business support request (BSR) raised by business users for critical/high priority issues.
 Worked on IICS migration and upgrade.
 Involved in doing error handling, debugging, and troubleshooting Sessions using the Session logs, Debugger and
Workflow Monitor.
 Prepared the data flow of all the Informatica objects created to support testing at various levels: unit, performance, and integration.
 Identify and analyze data discrepancies and data quality issues and work to ensure data consistency and integrity.
 Performed audits on ETL and validated source data vs. target table loads.
 Maintaining the Release related documents by configuration management techniques.
 Ran the workflows on a daily, weekly, monthly basis using AutoSys Batch Scheduling tool.
 Performed performance tuning on sources, targets, and mappings, as well as SQL (optimization) tuning.
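
Below is a minimal Python sketch of the REST push described in the DR HUB bullet above, using the requests library; the endpoint URL, authorization token, and payload fields are hypothetical placeholders and not taken from the actual integration.

"""Minimal sketch: push/read PJM records over REST (placeholder endpoint and fields)."""
import requests

BASE_URL = "https://drhub.example.com/api/v1"   # hypothetical DR HUB endpoint
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def push_pjm_record(record: dict) -> dict:
    """POST one PJM record to DR HUB and return the parsed response."""
    resp = requests.post(f"{BASE_URL}/pjm/records", json=record, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

def get_pjm_record(record_id: str) -> dict:
    """GET a previously pushed record back for validation."""
    resp = requests.get(f"{BASE_URL}/pjm/records/{record_id}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()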

Environment

Informatica Power Center 10.4, IICS, Hadoop, Oracle 12c, DB2, AutoSys, GitHub, Quality Center, Cognos, Shell Scripting, Jenkins, AQT, Sqoop, Hive, ServiceNow.

Northern Trust | Chicago, Illinois Oct 2020 – Aug 2022


Data Engineer

Responsibilities:
 Part of the Scrum team implementing the Customer Data Management (CDM) application as an Agile delivery with 2-week sprints, which involves planning, tasking, executing, and successfully delivering a component with business value.
 Interfaced with the business to understand the business logic and developed the implementation approach with the
business and the technical leads.
 Customer information (Accounts, Contacts, Emails, Phones, Addresses) was migrated from in-house applications to
Salesforce Sales Cloud and will be the system of record once all Sales and Contracts-related applications become
operational on Salesforce.
 Managed the Salesforce orgs to create Objects and Users and grant permissions on the objects.
 Implemented the Mesh architecture (MASA) in integrating the data between applications.
 Used Informatica as the primary ETL tool to create several mappings that read data from the Salesforce application and load it into the staging tables.
 Used various Informatica transformations like Expression, Sorter, Union, Joiner, Lookup, HTTP, Java, Aggregator,
Application Source Qualifier, Normalizer, Salesforce Lookup, SQL, Filter, Router, Sequence, XML Parser to
incorporate the logic as part of the mappings.
 Business exceptions were raised as part of the Informatica mappings and logged into Informatica-based error tables for validation and reprocessing.
 Sessions and other tasks (Assignment, Control, Email) were created as part of the workflow.
 Made use of the real-time option with the Salesforce connector to read the data from Salesforce org to load the
data into staging environment, which reduced the processing time for an agreement from 10 mins to around 2 mins.
 Implemented Web Service calls using HTTP (POST and GET methods) to push data to EKR (Enterprise Key Registry) to maintain the linkage between IDs across different application systems.
 Widely used Informatica Parameters and Variables at the Mapping, Session, and Workflow level to maintain
commonality and simplify the maintenance of the applications.
 Coordinated with the Informatica administrators to migrate the code across environments.
 Worked with the testers within the scrum team to resolve any defects that arose as part of their integration and
regression testing.
 Active Batch jobs and Active Batch global variables were created and used extensively to automate the job execution process, running the jobs seamlessly end to end.
 Emails were sent to the selected group notifying them of successful/failed integration jobs.
 Extensively used Informatica Client Tools - Informatica Designer, Workflow Manager, Workflow Monitor and
Repository Manager.
 Involved in the development of mappings; sophisticated transformation rules were implemented using Informatica features like Aggregator, Filter, Expression, HTTP, Lookup, Update Strategy, and Source Qualifier.
 Involved in creating the JSON file format used to load the key registry entries into the MySQL database (see the sketch after this list).
 Created test plans, test data for extraction and transformation processes and resolved data issues following the data
standards.
 Responsible for utilizing technological analysis and design principles to formulate detailed application plans and
processes to implement clients' requests for new or modified functionalities.
 Prepared low-level technical design document and participated in build/review of the ETL jobs.
 Optimized the performance of existing functionality.
 Monitored performance and functionality throughout the implementation process by testing applications to ensure optimum user benefit, and designed and configured application modifications and enhancements as necessary.
 Provided support during the system test, Product Integration Testing and UAT.
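
A minimal Python sketch of the JSON-to-MySQL load mentioned above, using the mysql-connector-python driver; the file layout, connection details, table, and column names are hypothetical placeholders.

"""Minimal sketch: load key-registry entries from a JSON file into MySQL (placeholder schema)."""
import json
import mysql.connector  # mysql-connector-python

def load_key_registry(path: str) -> None:
    with open(path, encoding="utf-8") as fh:
        # Assumed layout: a list of {"source_id": ..., "ekr_id": ..., "system": ...} objects.
        entries = json.load(fh)

    conn = mysql.connector.connect(host="localhost", user="etl_user",
                                   password="***", database="key_registry")
    try:
        cur = conn.cursor()
        cur.executemany(
            "INSERT INTO ekr_entries (source_id, ekr_id, system_name) VALUES (%s, %s, %s)",
            [(e["source_id"], e["ekr_id"], e["system"]) for e in entries],
        )
        conn.commit()
    finally:
        conn.close()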

Environment

Informatica PowerCenter 10.2.2, IICS, Oracle 12c, Salesforce, Unix, Active Batch, MySQL, JIRA, Confluence, Postman, SQL Developer.

Thomson Reuters | Eagan, MN April 2018 – Aug 2020


Data Consultant

Responsibilities:
 Involved in gathering functional requirements and converting them into technical specifications.
 Experience in design and configuration of landing tables, staging tables, base objects, hierarchies, foreign-key
relationships, lookups, query groups, queries/custom queries, and packages.
 Involved in creating Source-to-Target mapping documents.
 Extensively used Informatica Client Tools - Informatica Designer, Workflow Manager, Workflow Monitor and
Repository Manager.
 Involved in the development of mappings; sophisticated transformation rules were implemented using Informatica features like Aggregator, Filter, Expression, Sequence Generator, Lookup, and Source Qualifier.
 Created test plans, test data for extraction and transformation processes and resolved data issues following the data
standards.
 Created detailed mapping documents and Detailed Technical design documents for Source systems to
Warehouse/Mart.
 Responsible for utilizing technological analysis and design principles to formulate detailed application plans and
processes to implement clients' requests for new or modified functionalities.
 Prepared low-level technical design documents and participated in the build/review of BTEQ scripts, FastExport, MultiLoad, and FastLoad scripts.
 Used the BTEQ, FastLoad, and MultiLoad Teradata utilities to export and load data to/from flat files (a minimal sketch follows this list).
 Optimized the performance of existing functionality.
 Monitored performance and functionality throughout the implementation process by testing applications to ensure optimum user benefit, and designed and configured application modifications and enhancements as necessary.
 Developed the mappings, applied rules and transformation logics as per the source and target system requirements.
 Provided support during the system test, Product Integration Testing and UAT.
 Designed and developed Informatica ETL Interfaces to load data incrementally from Teradata database and flat files
into staging schema.
 Automated the ETL process through ESP scheduling.
 Created and monitored sessions using Informatica Workflow manager and workflow monitor.
 Extensively worked on UNIX Shell scripts to launch jobs and to execute Job Control table process.
 Coordinated with DBA team in code deployments.
 Involved in Informatica code migration using GitHub and UCD tools.
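
The following is a minimal Python sketch of driving a Teradata BTEQ export from a wrapper script, in the spirit of the utility scripts mentioned above; the logon string, source table, and output path are hypothetical placeholders, and a similar wrapper pattern would apply to FastLoad and MultiLoad control scripts.

"""Minimal sketch: run a Teradata BTEQ export via subprocess (placeholder logon and query)."""
import subprocess

BTEQ_SCRIPT = """
.LOGON tdprod/etl_user,***;
.EXPORT REPORT FILE = /data/export/daily_extract.txt;
SELECT * FROM staging_db.daily_extract;
.EXPORT RESET;
.LOGOFF;
.QUIT;
"""

def run_bteq_export() -> int:
    # bteq reads its commands from stdin; a non-zero return code indicates failure.
    result = subprocess.run(["bteq"], input=BTEQ_SCRIPT, text=True)
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(run_bteq_export())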

Environment

Informatica PowerCenter 10.2/9.6.1, Oracle 12c, Teradata, Unix, Perl, ESP, GitHub, MS Access, HP Quality Center 11.

Hitachi Solutions India Pvt. Ltd., India July 2015 – February 2018
Data Engineer

Responsibilities:
 Involved in designing different components of the system, such as the big-data event processing framework (Spark), the distributed messaging system (Kafka), and the SQL database (PostgreSQL).
 Implemented Spark Streaming and Spark SQL using Data Frames.
 Developed Spark scripts and UDFs using both the Spark DSL and Spark SQL queries for data aggregation and querying, writing data back into the RDBMS through Sqoop (a minimal sketch follows this list).
 Wrote Spark applications using Scala to interact with the PostgreSQL database using Spark SQLContext and accessed Hive tables using HiveContext.
 Created functions and assigned roles in AWS Lambda to run Python scripts, and built AWS Lambda functions using Java, Maven, and Spring MVC to perform event-driven processing.
 Automated the existing scripts for performance calculations using scheduling tools like Airflow.
 Performed ETL testing activities such as running the jobs, extracting the data from the database with the necessary queries, transforming it, and uploading it into the data warehouse servers.
 Developed and deployed data pipeline in cloud such as AWS.
 Built machine learning models to showcase big data capabilities using PySpark and MLlib.
 Designed tables and columns in Redshift for data distribution across data nodes in the cluster, keeping columnar database design considerations in mind.
 Created, modified, and executed DDL on AWS Redshift and Snowflake tables to load data.
 Created programs using NiFi workflows for data ingestion into the Hadoop data lake from MySQL, Postgres, and Couchbase.
 Generated report on predictive analytics using Python and Tableau including visualizing model performance and
prediction results.
 Performed Data Analysis, Data Migration, Data Cleansing, Data Modeling, Transformation, Integration, Data Import,
and Data Export through Python.
 Integrated Apache Storm with Kafka to perform web analytics.
 Managed large datasets using pandas DataFrames and MySQL.
 Monitored resources and applications using AWS CloudWatch, including creating alarms to monitor metrics for EBS, EC2, ELB, RDS, S3, EMR, IAM, Athena, Glue, and SNS, and configured notifications for the alarms generated based on defined events.
 Used Jenkins for CI/CD, Docker as a container tool and Git as a version control tool.
 Utilized Agile and Scrum methodology for team and project management.
 Involved in loading data from Linux file system to HDFS.
 Monitored system health and logs and responded to any warning or failure conditions.
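
Below is a minimal PySpark sketch of the kind of DataFrame aggregation described in the Spark bullets above; it reads from and writes back to PostgreSQL over JDBC rather than through Sqoop, purely for illustration, and the JDBC URL, credentials, and table/column names are hypothetical placeholders.

"""Minimal sketch: Spark SQL/DataFrame aggregation over a JDBC source (placeholder names)."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("event-aggregation").getOrCreate()

jdbc_opts = {
    "url": "jdbc:postgresql://db-host:5432/events",   # hypothetical PostgreSQL instance
    "user": "etl_user",
    "password": "***",
    "driver": "org.postgresql.Driver",
}

# Read raw events, aggregate per device per day, and write the result back.
events = spark.read.format("jdbc").options(dbtable="raw_events", **jdbc_opts).load()

daily = (events
         .groupBy("device_id", F.to_date("event_ts").alias("event_date"))
         .agg(F.count("*").alias("event_count"),
              F.sum("payload_bytes").alias("total_bytes")))

(daily.write.format("jdbc")
      .options(dbtable="daily_device_stats", **jdbc_opts)
      .mode("append")
      .save())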
