Career Objective:
To give my best in my professional pursuit, for the overall benefit and growth of the company I serve, by taking on challenges, demonstrating my caliber, and building my experience.
Professional Summary:
Around 5.5 years of total experience in application design and development.
Experience in creating Azure Key Vaults and providing security for secrets, keys,
passwords, databases, and Excel files.
Experience in ETL pipeline implementation using Azure services such as ADF, PySpark, and Databricks.
Implemented architectures using Azure data platform capabilities such as Azure Data Lake, Azure Data Factory, Azure SQL, and Azure SQL DW.
Experience in developing Spark applications using Spark SQL/PySpark in Databricks, performing ETL from multiple file formats (Parquet, CSV) to analyze and transform data into customer usage patterns, as sketched below.
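A minimal PySpark sketch of this kind of ETL; the ADLS paths, table names, and columns are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("usage-etl").getOrCreate()

    # Read raw data from multiple file formats (paths are hypothetical).
    events = spark.read.parquet("abfss://raw@storageacct.dfs.core.windows.net/events/")
    customers = (spark.read
                 .option("header", "true")
                 .option("inferSchema", "true")
                 .csv("abfss://raw@storageacct.dfs.core.windows.net/customers.csv"))

    # Join and aggregate to derive customer usage patterns.
    usage = (events.join(customers, "customer_id")
             .groupBy("customer_id", "plan")
             .agg(F.count("*").alias("event_count"),
                  F.sum("duration_sec").alias("total_duration_sec")))

    # Write the curated output back to the lake.
    usage.write.mode("overwrite").parquet(
        "abfss://curated@storageacct.dfs.core.windows.net/usage_patterns/")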
Experience in migrating on-premises ETL processes to the cloud.
Working experience building ADF pipelines with Linked Services and Datasets to extract and load data from different sources such as Azure SQL, ADLS, Blob Storage, and Azure SQL Data Warehouse.
Working with Azure Data Factory data transformations.
Working with Azure Data Factory control flow activities such as ForEach, Lookup, Until, Web, Wait, and If Condition.
Good understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, the driver node, worker nodes, stages, executors, and tasks (illustrated in the sketch below).
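One way to see the driver/executor split in practice: transformations are lazy, an action makes the driver build the DAG and schedule tasks on executors, and each shuffle boundary starts a new stage. A small illustrative sketch:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("stages-demo").getOrCreate()

    df = spark.range(1_000_000)                                 # narrow: no shuffle
    grouped = df.groupBy((df.id % 10).alias("bucket")).count()  # wide: shuffle boundary

    # Nothing has executed so far; .collect() is the action that makes the
    # driver split the job into stages at the shuffle and ship tasks to the
    # executors on the worker nodes.
    print(grouped.collect())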
Scheduling notebooks using ADF V2.
Monitoring / Troubleshooting Spark jobs in production.
Experience in writing stored procedures in databases (a sketch follows below).
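A minimal sketch of creating and calling a stored procedure from Python, assuming pyodbc against Azure SQL; the connection string, tables, and dbo.usp_refresh_usage procedure are hypothetical:

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=myserver.database.windows.net;DATABASE=mydb;"
        "UID=etl_user;PWD=<secret>")  # placeholders, not real credentials
    cur = conn.cursor()

    # T-SQL body of the procedure (illustrative only).
    cur.execute("""
    CREATE OR ALTER PROCEDURE dbo.usp_refresh_usage @run_date DATE AS
    BEGIN
        DELETE FROM dbo.usage_daily WHERE run_date = @run_date;
        INSERT INTO dbo.usage_daily (run_date, customer_id, events)
        SELECT @run_date, customer_id, COUNT(*)
        FROM dbo.events WHERE CAST(event_ts AS DATE) = @run_date
        GROUP BY customer_id;
    END
    """)
    conn.commit()

    # Execute the procedure for a given day.
    cur.execute("EXEC dbo.usp_refresh_usage @run_date = ?", "2023-01-31")
    conn.commit()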
Extensive knowledge of joins.
Worked with technologies including Azure Data Factory, Azure Data Lake, Azure Databricks, ADLS, and SQL Server.
Good experience in ETL, database design, data warehousing, data modelling, and development.
Hands-on experience with ADF activities such as Copy, Stored Procedure, Lookup, and Data Flow.
Knowledge of Azure Data Factory (ADF): creating pipelines, datasets, and linked services, and installing and configuring the integration runtime.
In Databricks, good knowledge of creating notebooks, clusters, and jobs, as well as PySpark and Spark SQL with dbutils commands and widgets (see the sketch below).
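A small notebook sketch of widgets and dbutils, assuming it runs inside a Databricks notebook where dbutils, spark, and display are predefined; the mount path, table, and widget names are hypothetical:

    # Widgets let ADF (or a user) pass parameters into the notebook.
    dbutils.widgets.text("load_date", "2023-01-31")
    dbutils.widgets.dropdown("env", "dev", ["dev", "test", "prod"])

    load_date = dbutils.widgets.get("load_date")
    env = dbutils.widgets.get("env")

    # dbutils.fs provides file-system utilities over DBFS/ADLS mounts.
    display(dbutils.fs.ls(f"/mnt/{env}/raw/"))

    # Spark SQL can use the widget values directly.
    df = spark.sql(f"SELECT * FROM raw.events WHERE load_date = '{load_date}'")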
Handled customer issues through collaboration, resolution, or escalation to provide a great end-to-end support experience.
TECHNICAL SKILLS:
Languages: SQL
Cloud Technologies: Azure
Storage: Blob Storage, Data Lake Storage
Databases: Oracle, SQL DB, SQL DW
Data Processing (ETL): SQL, Azure Data Factory, Databricks, PySpark
Other Cloud Services: Logic Apps, Azure Key Vault
Project: 1
• Created data ingestion into the Snowflake data warehouse, loading usable-layer tables from different sources such as Oracle and SFTP-based locations using stored procedures (a sketch follows below).
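A hedged sketch of this ingestion path using the snowflake-connector-python package; the account, stage, table, and procedure names are hypothetical:

    import snowflake.connector

    conn = snowflake.connector.connect(
        account="myaccount",        # hypothetical account identifier
        user="etl_user",
        password="<secret>",
        warehouse="ETL_WH",
        database="EDW",
        schema="USABLE",
    )
    cur = conn.cursor()

    # Files landed from Oracle/SFTP are staged, then loaded and transformed
    # by a stored procedure into the usable-layer table.
    cur.execute("COPY INTO USABLE.STG_ORDERS FROM @LANDING_STAGE/orders/ "
                "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
    cur.execute("CALL USABLE.SP_LOAD_ORDERS()")   # hypothetical procedure
    conn.close()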
• Maintained metadata in tables to drive the migration from Hive to Snowflake into the usable data layer.
• Wrote complex queries and used SQL functions to transform data as per requirements.
• For transformations, all SQL queries were placed in Blob storage and executed based on their dependency level.
• Created Linked Services, Datasets, and Pipelines for different data sources such as File System and Data Lake Gen2.
• Implemented alerts in ADF pipelines to trigger notifications when a pipeline fails.
• Extracted, transformed, and loaded data from source systems into Azure data storage.
• Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL) and processed the data.
• Responsible for estimating cluster size and for monitoring and troubleshooting the Databricks Spark cluster. Understood business logic and performed ETL on unstructured data using Data Factory pipelines.
• Experience in creating a secret scope backed by the Azure Key Vault service from Databricks (see the sketch below).
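Reading secrets through a Key Vault-backed scope from a Databricks notebook, a minimal sketch; the scope, key, server, and table names are hypothetical, and the scope itself is typically created via the Databricks UI or CLI:

    # Inside a Databricks notebook, where dbutils and spark are predefined.
    # "kv-scope" is assumed to be a Key Vault-backed secret scope.
    jdbc_password = dbutils.secrets.get(scope="kv-scope", key="sql-password")

    jdbc_url = (
        "jdbc:sqlserver://myserver.database.windows.net:1433;"  # hypothetical
        "database=mydb"
    )

    # Use the secret to read from Azure SQL without hard-coding credentials.
    df = (spark.read.format("jdbc")
          .option("url", jdbc_url)
          .option("dbtable", "dbo.customers")
          .option("user", "etl_user")
          .option("password", jdbc_password)
          .load())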
• Experience in creating Spark transformations such as merge functionality and window functions for customers and internal teams (sketched below).
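A sketch of both patterns in PySpark, assuming Delta Lake is available and using hypothetical paths and columns: a window function to keep the latest record per key, then a merge to upsert the result into a target table.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window
    from delta.tables import DeltaTable

    spark = SparkSession.builder.appName("merge-window").getOrCreate()

    updates = spark.read.parquet("/mnt/raw/customer_updates/")  # hypothetical

    # Window function: keep only the latest record per customer.
    w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
    latest = (updates.withColumn("rn", F.row_number().over(w))
              .filter("rn = 1").drop("rn"))

    # Merge (upsert) into the target Delta table.
    target = DeltaTable.forPath(spark, "/mnt/curated/customers")  # hypothetical
    (target.alias("t")
     .merge(latest.alias("s"), "t.customer_id = s.customer_id")
     .whenMatchedUpdateAll()
     .whenNotMatchedInsertAll()
     .execute())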
Project: 2
Implemented alerts in ADF pipelines to trigger notifications when a pipeline fails.
Extracted, transformed, and loaded data from source systems into Azure data storage.
Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL) and processed the data in Azure Databricks.
Developed ADF objects such as pipelines, parameters, variables, and datasets, and configured the integration runtime.
Extracted, transformed, and loaded data from source systems into Azure data storage services using a combination of Azure Data Factory, Spark SQL, and Azure Data Lake Analytics.
Project: 3
Project Name: MSP
Client: TuneTalk
• Configured Azure cloud services including Azure Blob and Azure SQL DB.
• Scheduled pipelines on tumbling window triggers to automate ADF jobs (a sketch follows below).
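One way to define such a trigger programmatically, a sketch assuming the azure-mgmt-datafactory SDK and hypothetical subscription, resource group, factory, and pipeline names; the same trigger can also be created in the ADF portal:

    from datetime import datetime
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        TriggerResource, TumblingWindowTrigger,
        TriggerPipelineReference, PipelineReference,
    )

    client = DataFactoryManagementClient(DefaultAzureCredential(),
                                         "<subscription-id>")  # hypothetical

    # Fire once per hourly window, passing the window start to the pipeline.
    trigger = TumblingWindowTrigger(
        pipeline=TriggerPipelineReference(
            pipeline_reference=PipelineReference(reference_name="IngestPipeline"),
            parameters={"windowStart": "@trigger().outputs.windowStartTime"},
        ),
        frequency="Hour",
        interval=1,
        start_time=datetime(2023, 1, 1),
        max_concurrency=1,
    )

    client.triggers.create_or_update(
        "my-rg", "my-adf", "HourlyTumblingTrigger",   # hypothetical names
        TriggerResource(properties=trigger),
    )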
• Attended daily stand-up calls, sprint planning, and grooming meetings.
Education Summary:
DECLARATION:
I hereby declare that all the information provided above is true to the best of my knowledge and belief.