
Divya Namdev
Pune, India | 9669979954 | [email protected]
https://www.linkedin.com/in/divya-namdev-129480172/

Professional Summary
• 8+ years of experience in the data engineering industry, including 5+ years with PySpark and Azure.
• Experience with Spark SQL, DataFrames, RDDs, Hive tables, and Snowflake.
• Experience implementing end-to-end data platform solutions covering Data Transformations, Data Quality, Observability, Governance, and Restartability of scalable data pipelines, using tools in the Azure ecosystem such as ADF pipelines, Databricks, Azure SQL, and Azure DevOps.
• Built various utilities, including custom metadata-driven notebook development for most use cases, that reduced development effort by 70%.
• Built optimized and scalable code.
• Deployed code to the PROD environment using CI/CD pipelines in Azure DevOps.
• Created utils files with functions that can be reused across multiple projects.
• Completed the Azure AZ-900 certification.
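The metadata-driven notebook approach mentioned above can be illustrated with a minimal sketch. The metadata schema, table names, and transformation names here are hypothetical illustrations, not the actual utility: the idea is simply that one generic runner reads a per-use-case metadata record and dispatches the transformation it names.

```python
# Minimal sketch of a metadata-driven step: a single generic runner reads a
# metadata record and applies the transformation it names, so one notebook
# can serve many use cases. Schema and names below are hypothetical.

TRANSFORMS = {
    "uppercase": lambda s: s.upper(),
    "strip": lambda s: s.strip(),
}

def run_step(value: str, meta: dict) -> str:
    """Apply the transformation named in the metadata record to one value."""
    return TRANSFORMS[meta["transform"]](value)

# One metadata record per use case; the runner code never changes.
orders_meta = {"source": "raw.orders", "target": "curated.orders", "transform": "uppercase"}
print(run_step("beer_sku_42", orders_meta))
```

In practice the same dispatch pattern scales to registering whole PySpark transformation functions keyed by metadata stored in a control table.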

Skills
• Primary: Product Development, Team Management, Leadership, Agile Methodology, Data Analysis
• Languages: Spark, PySpark, SQL
• Technical: Azure Databricks, Azure Data Lake, Azure Data Factory, Azure SQL Data Warehouse, Azure SQL DB, SQL Queries, Database Definition, Schema Design, Azure Blob Storage, Azure DevOps, Log Analytics, Azure LogicApps, CI/CD

Experience
September 2021 – Present
Senior Data Engineer/Fractal Analytics, Pune
Project- AUS MW SRM | US MW DCOM | Category Analytics | Brewdat Anaplan

• Worked as Team Lead of a 10-member team for Anheuser-Busch InBev (ABInBev), the world's largest beverage company, as the client. The team successfully built a generic Delta Lake utility package in Databricks (PySpark), ADF, and Azure Data Lake, based on Medallion Architecture and reusability principles and catering to multiple use cases in the Logistics functional area. This reduced the client's development effort drastically, by 80%, and the optimized utility also cut the client's compute cost by 60%.
• The generic package was very successful in onboarding various teams in Logistics and was appreciated by the ABInBev leadership team. Beyond reusability, it included Data Ingestion, Transformation, custom logging, Pipeline Monitoring, Data Quality, and Unity Catalog modules, using skills such as PySpark, Databricks, ADF, SQL queries, database definition, and schema design.
• Developed end-to-end ADF pipelines that pull data from source SQL tables, clean and transform it, push it into final SQL tables, and refresh a Power BI dashboard; the pipelines also send an email if any activity fails.
• Documented the complete project, including the architecture document, data model documentation, and end-to-end code-flow documentation.
• Connected with the client on alternate days to resolve issues and understand further requirements.
• Created pre- and post-validation scripts, data cleaning and data transformation steps, loaded data into tables, and created unit test scripts using great_expectations.
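The pre- and post-validation scripts mentioned above can be sketched as a simple row-count reconciliation. This is a generic illustration with hypothetical counts and an invented helper name, not the project's actual checks:

```python
# Sketch of a pre/post load validation: compare the target table's row count
# before and after an incremental load against the expected delta.
# All counts and the expected delta are hypothetical illustration values.

def validate_load(pre_count: int, post_count: int, expected_delta: int) -> dict:
    """Reconcile row counts around a load and report success or failure."""
    actual_delta = post_count - pre_count
    return {
        "actual_delta": actual_delta,
        "expected_delta": expected_delta,
        "success": actual_delta == expected_delta,
    }

# e.g. an incremental load expected to add 1,000 rows
result = validate_load(pre_count=50_000, post_count=51_000, expected_delta=1_000)
print(result["success"])
```

In a real pipeline the same check would run as a notebook step, with the counts taken from the source extract and the target table, and a failure routed to the pipeline's alerting activity.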

October 2019 – September 2021


Application Development Analyst/ Accenture, Pune
Project- Walmart Inc. | Spark developer

• Created high-level and low-level design documents after gathering the functional and technical requirements.
• Collaborated with other programmers to design and implement features.
• Performed unit testing after development and created the unit test document.
• Developed and maintained the IDF tool code base to transform Walmart incremental and historical data and generate business reports from the transformed data.
• Worked with Spark SQL, DataFrames, RDDs, and Hive tables.
• Worked on Azure ADF to build pipelines for data movement from different source systems.
• Created ADF pipelines to copy data from source to target and process files end-to-end into the SQL data warehouse.
• Good understanding of the Git version control system.

Technologies Involved: Azure Databricks, PySpark, Azure Data Lake Store, Azure SQL Data Warehouse, Azure Data Factory, Azure SQL DB, Azure LogicApps

October 2016 – October 2019


System Engineer/ Tata Consultancy Services Limited, Pune
Project- BARCLAYS | Hadoop Developer

• Worked on a Hadoop cluster of 115 nodes running CDH 5.10.1 (Parcels), processing around 1.5 PB of data.
• Used the Spark API to perform analytics on data in Hive.
• Used SQL queries for data analysis and testing.
• Used Impala to query HDFS for better performance.
• Supported code/design analysis, strategy development, and project planning.
• Worked with different file formats such as Text, Sequence, ORC, and Parquet.
• Created Hive tables, loaded them with data, and wrote Hive queries that run internally as MapReduce jobs.
• Loaded query results into Hive partitioned tables using partitioning and bucketing strategies.
• Collaborated with the Infrastructure, Network, Database, Application, and BI teams to ensure data quality and availability.

Education
2012 – 2016
Samrat Ashok Technological Institute, Vidisha (SATI-Vidisha) — B.E. (Electronics and Communication Engineering)
CGPA: 8.6

Awards & Certifications


• Received the Kaizen award at Fractal Analytics for excellent work on the project.
• Worked on a Microsoft audit document and received appreciation from the Microsoft team as part of Fractal initiatives.
• Recognized for outstanding contribution to the organization with multiple On the Spot (Team) Awards and Best Team Awards.
• Appreciated by managers many times for consistently completing tasks on time.
• Received client appreciation with an NPS score of 10 on most projects.
• Prepared a PySpark document for Fractal internal trainings.
• Received multiple appreciations from the Walmart client on production go-lives.
