
PUNITHAVALLI GUNALAN

Email: [email protected]
Phone: 7200623140
Location: Chennai
LinkedIn: https://www.linkedin.com/in/punitha-gunalan-787738305/

Objective
Highly skilled Azure Data Engineer with 4 years of experience managing cloud-based infrastructure. Accustomed to working closely with system architects, software architects, and design analysts to translate business and industry requirements into comprehensive data models. Proficient across the modeling, design, and implementation stages.

Technical Expertise
Expertise in designing, implementing, and managing complex data storage and processing solutions.

Experience in the deployment, scaling, and management of data solutions, leveraging the power of the cloud.

Experience in data modeling and data migration, such as SQL database to Azure Data Lake Analytics, Azure SQL Database to Azure Synapse Analytics, and AWS S3 to ADLS Gen2.

Controlling and granting database access, and managing migrations.

Ability to design and construct end-to-end data pipelines, efficiently moving and processing data from diverse sources to storage and analytical destinations, enabling actionable insights.

Developed pipelines in ADF that extract, transform, and load data from sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse.

Experience in setting up and managing CI/CD pipelines.

Created Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation from multiple file formats.

Data ingestion techniques such as bulk and incremental loading, as well as experience with data transformation using Azure Data Factory.

Proficiency in programming languages such as Python and SQL, enabling efficient data processing and analysis.

Hands-on experience in data warehousing techniques such as data cleansing, surrogate key assignment, and slowly changing dimensions (SCD Type 1 and SCD Type 2).

Troubleshoot and resolve data processing and storage issues.

Experience managing data engineering projects, including requirements gathering, design, development, testing, and deployment.

Skills
Cloud Platform - Microsoft Azure (ADF, Blob Storage, Data Lake, Databricks)
RDBMS - SQL Server, Azure SQL
Data Integration - Azure Data Factory
Languages - SQL, Python
Data Warehouse - Azure Synapse
Bug Tracking Tool - JIRA
ETL Development, Data Modeling, Data Pipeline Design, Data Migration


Projects
Developed an end-to-end data pipeline to ingest, transform, analyze, and visualize historical data from on-premises systems in the cloud. Initially, data is ingested into an Azure Data Lake Gen2 storage account using Azure Data Factory pipelines. Following ingestion, the data is processed and curated using Databricks notebooks. Azure Synapse Analytics is then used to create a SQL database from the curated data.
Technologies Used:
Azure Storage, Azure Data Factory, Azure Synapse, Azure Databricks
Roles and Responsibilities:
Created pipelines using Azure Data Factory to ingest data from different sources into the lake.

Validated the data from various sources and processed it further using Azure Data Factory and Azure Data Flow.

Wrote the code to merge the daily delta, enabling incremental loading of data.

Stored the data in a SQL database and consumed it to generate dashboards.

Designed and maintained data models in conjunction with the data engineering team to ensure consistency and integrity of the data.

Created and maintained documentation for database schemas, data models, and SQL code.

Handled errors and debugged issues based on script logs and session logs; used stored procedures for extracting data.

Scheduled pipelines to run at the required times; used parameters and variables to load values into pipelines according to requirements.

Developed Logic Apps to trigger email notifications via a web activity whenever a pipeline failed.

Created, maintained, and improved databases, schema objects, SQL queries, stored procedures, indexes, functions, and views for data migration and ad-hoc reporting.

Involved in performance tuning of pipelines in Azure Data Factory.

Experience
Pulesoft Technologies
Mar 2020 - Present
Data Engineer

Skifter Technologies
Sep 2018 - Jan 2020
Software Tester

Education
JJCET
Electronics and Communication Engineering
