JD Azure DE MFT 4+ yrs

The document outlines a job description for an Azure Data Engineer with 4-8 years of experience, focusing on skills in Azure Cloud Technologies, Azure Data Factory, Azure Databricks, and PySpark. Responsibilities include designing and optimizing data workflows, developing CI/CD pipelines, and ensuring data quality across various data sources. Candidates should have a relevant degree and strong analytical skills, with experience in big data processing and ETL workflows.

Uploaded by sambal

Role: Azure Data Engineer

Experience: 4-8 Years

Location: Remote

Mandatory Skills: Azure Cloud Technologies, Azure Data Factory, Azure Databricks (Advanced Knowledge),
PySpark, CI/CD Pipelines (Jenkins, GitLab CI/CD, or Azure DevOps), Data Ingestion, SQL

We are seeking a skilled Data Engineer with expertise in Azure cloud technologies, data pipelines, and big
data processing. The ideal candidate will be responsible for designing, developing, and optimizing scalable
data solutions.

Responsibilities

1. Azure Databricks and Azure Data Factory Expertise:
 Demonstrate proficiency in designing, implementing, and optimizing data workflows using Azure Databricks and Azure Data Factory.
 Provide expertise in configuring and managing data pipelines within the Azure cloud environment.
2. PySpark Proficiency:
 Possess a strong command of PySpark for data processing and analysis.
 Develop and optimize PySpark code to ensure efficient and scalable data transformations.
3. Big Data & CI/CD Experience:
 Troubleshoot and optimize data processing tasks on large datasets.
 Design and implement automated CI/CD pipelines for data workflows, using tools such as Jenkins, GitLab CI/CD, or Azure DevOps to automate the building, testing, and deployment of data pipelines.
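As an illustrative sketch of the CI/CD responsibility above (the file layout, stage names, and script steps are hypothetical assumptions, not a prescribed setup), an Azure DevOps pipeline for a data workflow might look like:

```yaml
# azure-pipelines.yml — hypothetical example of automating build, test, and deploy
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: Build
    jobs:
      - job: TestAndPackage
        steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: '3.10'
          - script: pip install -r requirements.txt && pytest tests/
            displayName: 'Run unit and integration tests'
  - stage: Deploy
    dependsOn: Build
    jobs:
      - job: DeployPipelines
        steps:
          - script: echo "Publish notebooks / ADF ARM templates here"
            displayName: 'Deploy data pipeline artifacts'
```

Jenkins and GitLab CI/CD express the same build-test-deploy flow with their own pipeline syntax; the design point is that deployment only runs after the test stage passes.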
4. Data Pipeline Development & Deployment:
 Design, implement, and maintain end-to-end data pipelines for various data sources and destinations.
 Build test coverage at every level: unit tests for individual components, integration tests to ensure that different components work together correctly, and end-to-end tests to verify the entire pipeline's functionality.
 Familiarity with GitHub repositories for code deployment.
 Ensure data quality, integrity, and reliability throughout the entire data pipeline.
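To make the testing bullet concrete, here is a minimal sketch of a unit test for one pipeline component (the function and field names are invented for illustration; a real pipeline would test its own transformations the same way):

```python
# Hypothetical transformation step: normalize raw order records before loading.
def normalize_order(record):
    """Trim strings, coerce amount to float, and reject invalid rows."""
    cleaned = {
        "order_id": str(record["order_id"]).strip(),
        "amount": float(record["amount"]),
        "currency": str(record.get("currency", "USD")).strip().upper(),
    }
    if cleaned["amount"] < 0:
        raise ValueError(f"negative amount for order {cleaned['order_id']}")
    return cleaned

# Unit tests for the individual component (would normally live under tests/).
def test_normalize_order_trims_and_coerces():
    row = {"order_id": " 42 ", "amount": "19.99", "currency": " usd "}
    out = normalize_order(row)
    assert out == {"order_id": "42", "amount": 19.99, "currency": "USD"}

def test_normalize_order_rejects_negative_amounts():
    try:
        normalize_order({"order_id": "1", "amount": "-5"})
        assert False, "expected ValueError"
    except ValueError:
        pass

test_normalize_order_trims_and_coerces()
test_normalize_order_rejects_negative_amounts()
```

Integration and end-to-end tests then exercise the same functions against realistic source and destination systems rather than in-memory rows.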
5. Extraction, Ingestion, and Consumption Frameworks:
 Develop frameworks for efficient data extraction, ingestion, and consumption.
 Implement best practices for data integration and ensure seamless data flow across the organization.
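One common way to structure such an extraction/ingestion framework (the class and method names here are illustrative assumptions, not a required design) is a small source-abstraction layer, so the same ingestion driver works for any data source:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, Iterable

class Source(ABC):
    """Abstract extraction interface: each concrete source yields dict rows."""
    @abstractmethod
    def extract(self) -> Iterable[Dict[str, Any]]:
        ...

class ListSource(Source):
    """In-memory source, handy for tests; real sources might wrap JDBC, ADLS, or REST APIs."""
    def __init__(self, rows):
        self.rows = rows

    def extract(self):
        yield from self.rows

def ingest(source: Source, transform=lambda r: r):
    """Ingestion driver: extract rows from any Source, apply a transform, collect results."""
    return [transform(row) for row in source.extract()]

# Usage: the driver is unchanged no matter which Source implementation is plugged in.
rows = ingest(ListSource([{"id": 1}, {"id": 2}]), transform=lambda r: {**r, "ok": True})
```

Adding a new data source then means implementing `extract()` once, rather than rewriting the ingestion and consumption logic downstream.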
6. Collaboration and Communication:
 Collaborate with cross-functional teams to understand data requirements and deliver scalable solutions.
 Communicate effectively with stakeholders to gather and clarify data-related requirements.

Requirements

1. Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
2. 4+ years of relevant hands-on experience in data engineering with Azure cloud services and advanced
Databricks.
3. Strong analytical and problem-solving skills in handling large-scale data pipelines.
4. Experience in big data processing and working with structured & unstructured datasets.
5. Expertise in designing and implementing data pipelines for ETL workflows.
6. Strong proficiency in writing optimized queries and working with relational databases.
7. Experience in developing data transformation scripts and managing big data processing using PySpark.

If you are interested, please contact "Saloni Jain" on email: [email protected] | Mob/WhatsApp: +91-
8447470990 | www.moveforward.ai
