Pooja Hatgine
+91 8956880490 | [email protected] | Pune, Maharashtra
PROFILE
Azure Data Engineer with more than 3 years of hands-on experience designing and implementing data
solutions. Skilled in Azure services including Databricks, Data Factory, Data Lake, SQL Database, and Key Vault. Proficient in
developing ETL pipelines for efficient data extraction, transformation, and loading into Azure Data Lake Storage. Focused on
applying Azure technologies to solve data challenges and improve business operations.
TECHNICAL SKILLS
Programming Languages: Python, SQL, PySpark.
Big Data Technologies: PySpark, Delta Lake, Azure Databricks.
Cloud Platforms: Microsoft Azure (Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake
Storage, Azure Synapse Analytics).
Database Management: MSSQL Server, PostgreSQL.
ETL & Data Integration: Azure Data Factory, Integration Runtime, JDBC.
Tools: Git, GitHub, Jira.
EXPERIENCE
AxelBuzz Tech Solutions LLP August 2023 - Present
Azure Data Engineer
- Project: Data integration and processing from an on-premises SQL database to the Azure cloud.
- Data Extraction: Extracted data from the on-premises SQL database and migrated it to ADLS by creating a self-hosted
integration runtime and using Azure Data Factory's Copy Data activity, moving it seamlessly to the cloud environment.
- Storage: Uploaded and securely stored the data in Azure Data Lake Storage, ensuring scalable and reliable data access.
- Clean and Process: Utilized Azure Databricks to transform, optimize, and clean the data, enhancing its quality and
readiness for detailed analysis.
- Manage Data: Structured and managed the refined data in Azure SQL Database, optimizing it for efficient querying and
strengthening security measures.
- Outcome: Enhanced the effectiveness of Azure cloud services for comprehensive data management and data modelling,
providing actionable insights that drove strategic, informed decision-making.
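The Databricks cleansing step above can be sketched roughly as follows (plain Python shown for brevity; in Databricks the same logic would be expressed as PySpark DataFrame operations, and the record and field names such as `customer_id` are hypothetical):

```python
# Minimal sketch of row-level cleansing: trim text fields, drop records
# missing a key, and de-duplicate on that key.
def clean_records(rows, key="customer_id"):
    seen = set()
    cleaned = []
    for row in rows:
        # Normalise string values; empty strings count as missing.
        row = {k: (v.strip() if isinstance(v, str) else v) for k, v in row.items()}
        if not row.get(key):   # drop rows with no key value
            continue
        if row[key] in seen:   # de-duplicate on the key
            continue
        seen.add(row[key])
        cleaned.append(row)
    return cleaned

raw = [
    {"customer_id": "C1", "city": " Pune "},
    {"customer_id": "C1", "city": "Pune"},    # duplicate key
    {"customer_id": "",   "city": "Mumbai"},  # missing key
]
print(clean_records(raw))  # → [{'customer_id': 'C1', 'city': 'Pune'}]
```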
Atos Syntel February 2022 – August 2023
Azure Data Engineer
- Project: Data pipeline project focused on processing claim data.
- Data Migration: Migrated large-scale on-premises data to Azure Data Lake Storage (ADLS Gen2) using Azure Data Factory
(ADF), ensuring secure, efficient, and structured data ingestion.
- Workflow Automation: Implemented ADF triggers for scheduled and event-based executions, enabling near real-time data
processing and improved operational efficiency.
- Data Transformation: Utilized Azure Databricks for robust data cleansing, enrichment, and transformation, applying complex
business logic to generate analytics-ready datasets.
- Data Integration: Loaded the transformed data from ADLS into Azure SQL Database to support downstream analytics and
reporting.
- Data Movement: Managed seamless and reliable data movement between Azure Data Lake and Azure SQL Database, ensuring
schema alignment and high data integrity.
- Pipeline Optimization: Fine-tuned ETL workflows for performance, fault-tolerance, and scalability by implementing modular
design, dynamic parameterization, and retry policies.
- Business Impact: Enhanced data accuracy, availability, and timeliness, empowering stakeholders with improved insights and
faster, data-driven decision-making.
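The retry policies mentioned in the pipeline-optimization bullet follow a generic pattern that can be sketched as below (an illustrative helper, not the actual pipeline code; `max_retries` and the backoff schedule are assumptions):

```python
import time

# Illustrative retry wrapper for an idempotent pipeline activity:
# retry transient failures with exponential backoff, similar in spirit
# to an ADF activity's retry policy.
def run_with_retry(activity, max_retries=3, base_delay=1.0):
    for attempt in range(max_retries + 1):
        try:
            return activity()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted: surface the failure
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff

# Hypothetical flaky activity that succeeds on the third call.
calls = {"n": 0}
def copy_activity():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "copied"

print(run_with_retry(copy_activity, base_delay=0.01))  # → copied
```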
EDUCATION
Sanjay Bhokare Group of Institute, Miraj
Bachelor of Engineering (E&TC) Aug 2016 - Nov 2020