
Bachu Maheswara Reddy

Mail: [email protected] | +91 7901001426


_____________________________________________________________________________________

Career Objective:

To contribute my best in my professional pursuits toward the overall benefit and growth of the
company I serve, taking on challenges that let me demonstrate my abilities and broaden my experience.

Professional Summary:

 Around 5.5 years of total experience in application design and development.
 Experience in creating Azure Key Vaults, providing security for secrets, keys, passwords,
databases, and Excel files.
 Experience in ETL pipeline implementation using Azure services such as ADF, PySpark, and
Databricks.
 Implemented architectures using Azure Data platform capabilities such as Azure Data Lake,
Azure Data Factory, Azure SQL, and Azure SQL DW.
 Experience in developing Spark applications using Spark SQL/PySpark in Databricks for ETL
across multiple file formats (Parquet, CSV), analyzing and transforming the data to derive
customer usage patterns (see the PySpark sketch after this list).
 Experience migrating on-premises ETL processes to the cloud.
 Working experience building pipelines in ADF using Linked Services, Datasets, and Pipelines to
extract and load data from different sources such as Azure SQL, ADLS, Blob Storage, and Azure
SQL Data Warehouse.
 Working with Azure Data Factory data transformations.
 Working with Azure Data Factory control-flow activities such as ForEach, Lookup, Until, Web,
Wait, and If Condition.
 Good understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, driver
node, worker nodes, stages, executors, and tasks.
 Scheduling notebooks using ADF V2.
 Monitoring and troubleshooting Spark jobs in production.
 Experience in writing stored procedures in databases.
 Extensive knowledge of joins.
 Technologies: Azure Data Factory, Azure Data Lake, Azure Databricks, ADLS, and SQL Server.
 Good experience in ETL, database design, data warehousing, data modelling, and development.
 Hands-on experience with ADF activities such as Copy, Stored Procedure, Lookup, and Data
Flow.
 In ADF, good knowledge of creating pipelines, datasets, and linked services, and of installing
and configuring the integration runtime.
 In Databricks, good knowledge of creating notebooks, clusters, and jobs, as well as PySpark and
Spark SQL with dbutils commands and widgets.
 Handled customer issues through collaboration, resolution, or escalation to provide a great
end-to-end support experience.
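
A minimal PySpark sketch of the Databricks ETL pattern described above (ingesting Parquet and
CSV, transforming, and writing a curated output). All paths, column names, and the widget name
are illustrative placeholders; spark and dbutils are the objects Databricks predefines in a notebook.

    # Databricks notebook sketch: read raw files, transform, write curated output.
    from pyspark.sql import functions as F

    # A notebook widget lets ADF pass the load date in when it schedules the run.
    dbutils.widgets.text("load_date", "2023-01-01")
    load_date = dbutils.widgets.get("load_date")

    # Ingest from multiple file formats (Parquet and CSV).
    orders = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/orders/")
    customers = (spark.read
                 .option("header", "true")
                 .option("inferSchema", "true")
                 .csv("abfss://raw@mydatalake.dfs.core.windows.net/customers/"))

    # Transform: join and aggregate usage per customer for the given load date.
    usage = (orders.filter(F.col("order_date") == load_date)
             .join(customers, "customer_id")
             .groupBy("customer_id", "segment")
             .agg(F.count("*").alias("order_count"),
                  F.sum("amount").alias("total_amount")))

    # Land the curated result back in the lake as Parquet.
    usage.write.mode("overwrite").parquet(
        "abfss://curated@mydatalake.dfs.core.windows.net/customer_usage/")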

TECHNICAL SKILLS:

Languages             : SQL
Cloud Technologies    : Azure
Storage               : Blob Storage, Data Lake Storage
Databases             : Oracle SQL DB, SQL DW
Data Processing (ETL) : SQL, Azure Data Factory, Databricks, PySpark
Other Cloud Services  : Logic Apps, Azure Key Vault

Project: 1

Project Name : Snowflake Migration Project

Client : Infosys and JCI

Duration : July 2023 – Till date.

Role : Azure Data Factory Developer

Responsibilities as ADF Developer:

• Created data ingestions into the Snowflake data warehouse's usable-layer tables from
different sources such as Oracle and SFTP-based locations, using stored procedures.

• Maintained table metadata for the migration from Hive to Snowflake's usable data layer.

• Wrote complex queries and used SQL functions to transform data as per requirements.

• Placed all transformation SQL queries in Blob Storage so they could be executed in order of
dependency level.

• Created views to meet user requirements.

• Cloned production data for code modifications and testing.

• Created pipelines in ADF using Linked Services and Datasets to extract, transform, and load
data from different sources such as Azure SQL and Blob Storage.

• Created Linked Services, Datasets, and pipelines for different data sources such as File System
and Data Lake Gen2.

• Implemented alerts in ADF pipelines to trigger notifications when a pipeline fails.

• Integrated linked services with Key Vault to read secrets.

• Extracted, transformed, and loaded data from source systems to Azure data storage.

• Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL) and
processed the data.

• Responsible for estimating cluster size and for monitoring and troubleshooting the Spark
Databricks cluster. Understood business logic and performed ETL on unstructured data using
Data Factory pipelines.

• Experience creating a secret scope for the Azure Key Vault service from Databricks (see the
sketch after this list).

• Experience creating Spark transformations such as merge functionality and window functions
for customers and internal teams.
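
A minimal Databricks sketch of the Key Vault secret scope and the merge/window-function
transformations mentioned above. The scope name keyvault-scope, the JDBC details, and all table
and column names are assumptions for illustration; dbutils and spark are predefined in a
Databricks notebook.

    from pyspark.sql import functions as F
    from pyspark.sql.window import Window
    from delta.tables import DeltaTable

    # Read a connection secret through a Key Vault-backed secret scope.
    sql_password = dbutils.secrets.get(scope="keyvault-scope", key="sql-password")

    # Pull the day's updates from Azure SQL over JDBC using that secret.
    updates = (spark.read.format("jdbc")
               .option("url", "jdbc:sqlserver://myserver.database.windows.net;database=mydb")
               .option("user", "etl_user")
               .option("password", sql_password)
               .option("dbtable", "dbo.customer_updates")
               .load())

    # Window function: keep only the latest record per customer.
    w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
    latest = (updates.withColumn("rn", F.row_number().over(w))
              .filter(F.col("rn") == 1)
              .drop("rn"))

    # Merge (upsert) the latest records into the target Delta table.
    target = DeltaTable.forName(spark, "curated.customers")
    (target.alias("t")
     .merge(latest.alias("s"), "t.customer_id = s.customer_id")
     .whenMatchedUpdateAll()
     .whenNotMatchedInsertAll()
     .execute())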

Project: 2

Project Name : CDP

Client : City Bank

Duration : May 2022 – June 2023.

Role : Azure Data Factory + Azure Databricks Developer

Responsibilities as ADF Developer:

 Created pipelines in ADF using Linked Services and Datasets to extract, transform, and load
data from different sources such as Azure SQL and Blob Storage.

 Implemented alerts in ADF pipelines to trigger notifications when a pipeline fails.

 Integrated linked services with Key Vault to read secrets.

 Extracted, transformed, and loaded data from source systems to Azure data storage.

 Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL) and
processed the data in Azure Databricks.

 Developed ADF objects such as pipelines, parameters, variables, and datasets, and configured
the integration runtime (a sketch using the Azure Python SDK follows this list).

 Extracted, transformed, and loaded data from source systems to Azure data storage services
using a combination of Azure Data Factory, Spark SQL, and Azure Data Lake Analytics.
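
A short sketch of authoring an ADF pipeline from Python with the azure-mgmt-datafactory SDK, as
referenced in the list above. The subscription, resource group, factory, and dataset names are
placeholders, and exact model signatures can vary slightly between SDK versions.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        CopyActivity, DatasetReference, BlobSource, AzureSqlSink, PipelineResource)

    adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # A Copy activity that moves data from a Blob dataset into an Azure SQL dataset.
    copy_activity = CopyActivity(
        name="CopyBlobToSql",
        inputs=[DatasetReference(reference_name="BlobInputDataset")],
        outputs=[DatasetReference(reference_name="SqlOutputDataset")],
        source=BlobSource(),
        sink=AzureSqlSink())

    # Publish the pipeline to the factory.
    pipeline = PipelineResource(activities=[copy_activity], parameters={})
    adf_client.pipelines.create_or_update(
        "my-resource-group", "my-data-factory", "CopyPipeline", pipeline)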

Project: 3

Project Name : MSP

Client : TuneTalk

Duration : April 2018 – Jan 2022.

Role : SQL + Azure Data Factory Developer

Responsibilities as ADF Developer:

• Interacted with the client to gather requirements.

• Developed pipelines, linked services, and datasets in Azure Data Factory V2.

• Created various ADF pipelines to implement business scenarios.

• Configured Azure cloud services including Azure Blob Storage and Azure SQL DB.

• Scheduled pipelines with tumbling-window triggers to automate jobs in ADF (see the trigger
sketch after this list).

• Good experience in creating pipeline parameters.

• Attended daily stand-up calls, sprint planning, and grooming meetings.
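
A minimal sketch of defining a tumbling-window trigger with the azure-mgmt-datafactory SDK, as
referenced above. All names are placeholders, and exact model signatures can vary slightly
between SDK versions.

    from datetime import datetime
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        TumblingWindowTrigger, TriggerPipelineReference, PipelineReference, TriggerResource)

    adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Run the pipeline once per hourly window, one window at a time.
    trigger = TumblingWindowTrigger(
        pipeline=TriggerPipelineReference(
            pipeline_reference=PipelineReference(reference_name="IngestPipeline"),
            parameters={}),
        frequency="Hour",
        interval=1,
        start_time=datetime(2021, 1, 1),
        delay="00:05:00",        # wait five minutes after each window closes
        max_concurrency=1)

    adf_client.triggers.create_or_update(
        "my-resource-group", "my-data-factory", "HourlyTumblingTrigger",
        TriggerResource(properties=trigger))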

Education Summary:

Completed an MBA (Finance) from Yogi Vemana University, Kadapa.

DECLARATION:

I hereby declare that all the information provided above is true to the best of my knowledge and
belief.

(BACHU MAHESWARA REDDY)
