Srinivas Neelam +1 (945)-260-9659 srinivas.data1234@gmail.com

Professional Summary:
· 12+ years of strong experience as an innovative AWS Data Engineer, including designing, developing, and implementing data models for enterprise-level applications and systems across AWS and Azure clouds.
· Proven track record in Amazon cloud services, Big Data/Hadoop Applications, and product
development, consistently achieving project success.
· Monitored AWS resources such as S3, EC2, Redshift, Glue jobs, ASG, and NLB using Datadog.
· Extensive hands-on experience with AWS (EC2, Redshift, EMR, Elasticsearch), Hadoop, Python, Spark, Azure SQL Database, MapReduce, Hive, SQL, and PySpark for tackling complex big data challenges. Data architect with expertise in designing scalable architectures for data pipelines across applications.
· Developed a Python application to batch-load data from S3 to Redshift, flattening nested JSON with Glue jobs (see the sketch after this summary).
· Strong foundation in data analysis using HiveQL, Pig scripts, Spark SQL, and Spark Streaming, with expertise in Sqoop for seamless data integration.
· Experience in migrating data from on-premises databases to AWS S3.
· Developed ETL applications using MapReduce, Spark with Scala, Hive, and PySpark in Azure Databricks notebooks to handle large data volumes.
· Hands-on experience with AWS services such as Glue, S3, Lambda, IAM, and CloudWatch.
· Experience in building the Orchestration on Azure Data Factory for scheduling purposes.
· Creative skills in developing elegant solutions to challenges related to pipeline engineering.
· Skilled in crafting RDBMS elements, including Tables, Views, Data Types, Indexes, Stored
Procedures, Cursors, Triggers, and Transactions.
· Imported data from AWS S3 into Spark RDDs and performed transformations and actions on them.
· Ability to work effectively in cross-functional team environments, with excellent communication and interpersonal skills.
· Involved in converting Hive/SQL queries into Spark transformations using Spark DataFrames and Scala.
· Developed custom Kafka producers and consumers for publishing to and subscribing from Kafka topics.
· Experienced in orchestrating data transfers to Redshift from diverse sources, including Oracle
databases, APIs, and SharePoint sites, by creating and managing data pipelines.
· Good working experience with Spark (Spark Streaming, Spark SQL), Scala, and Kafka.
· Worked on reading multiple data formats on HDFS using Scala.
· Good understanding of NoSQL databases and hands-on experience writing applications on NoSQL databases such as Cassandra and MongoDB.
· Experienced with AWS services for smoothly managing applications in the cloud and creating or modifying instances.
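
The following is a minimal, illustrative sketch of the S3-to-Redshift batch load with JSON flattening described above, written as an AWS Glue PySpark job. The bucket names, Glue catalog connection, and target table are hypothetical placeholders, not values from the actual engagements.

import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.transforms import Relationalize
from awsglue.utils import getResolvedOptions
from awsglue.job import Job

# Hypothetical Glue job: read nested JSON from S3, flatten it, and batch-load into Redshift.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw JSON files from S3 into a DynamicFrame (bucket path is a placeholder).
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-raw-bucket/events/"]},
    format="json",
)

# Relationalize flattens nested structs and arrays into relational-style frames.
flattened = Relationalize.apply(
    frame=raw, staging_path="s3://example-temp-bucket/staging/", name="root"
).select("root")

# Write the flattened frame to Redshift through a Glue catalog connection (names are placeholders).
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=flattened,
    catalog_connection="redshift-connection",
    connection_options={"dbtable": "public.events_flat", "database": "analytics"},
    redshift_tmp_dir="s3://example-temp-bucket/redshift/",
)

job.commit()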

Technical Skills:
· AWS Services: Amazon S3, Amazon Redshift, AWS Glue, AWS Lambda, Amazon EMR, Amazon Kinesis, AWS Data Pipeline, Amazon DynamoDB, Amazon RDS, etc.
· Azure Cloud Platform: ADF, ADLS Gen2, Azure Databricks, Azure SQL Database, Azure Synapse Analytics (SQL Data Warehouse).
· Programming Languages: Python, PySpark, SQL, T-SQL, Oracle PL/SQL, Linux shell scripts.
· RDBMS: Oracle 12c and 19c, MySQL, SQL Server 2019, Azure SQL, Teradata.
· NoSQL: HBase, Cassandra, MongoDB.
· Methodologies: Agile, Kanban.
· Tools Used: Eclipse, Putty, Cygwin, MS Office.
· BI Tools: PowerBI, Tableau.
Certifications:
· AWS Certified Developer Associate.


· Microsoft Certified Azure Data Engineer Associate.
· Oracle Database SQL Certified Associate.

Project Experience:
Client: Herbalife Nutrition, Winston-Salem, NC Apr 2020 – Present
AWS Data Engineer
· Experienced in orchestrating data transfers to Redshift from diverse sources, including Oracle
databases, APIs, and SharePoint sites, by creating and managing data pipelines.
· Developed a Python application to batch-load data from S3 to Redshift, flattening nested JSON with Glue jobs.
· Developed and maintained data pipelines to move and transform data across systems and platforms.
· Involved in file movements between HDFS and AWS S3 and worked extensively with S3 buckets in AWS.
· Created and provisioned different Databricks clusters for batch and continuous streaming data processing and installed the required libraries on the clusters.
· Worked on data cleaning and reshaping, and generated segmented subsets using NumPy and Pandas in Python.
· Worked on AWS services such as Glue, S3, Lambda, IAM, and CloudWatch.
· Performed cleansing and harmonization of large datasets and analyzed them using SQL queries in AWS Athena (see the query sketch following this project).
· Applied advanced Spark techniques such as text analytics, leveraging in-memory processing.
· Used GitHub as the version control repository and to publish changes to the pipelines developed in ADF.
· Utilized AWS Glue for ETL processes, ensuring efficient data movement and transformation.
· Worked in an Agile environment and used the Rally tool to maintain user stories and tasks.
· Used T-SQL in Azure Synapse extensively for data transformation and processing across various applications.

Environment: Spark, GitHub, Python, T-SQL, Azure SQL, Kafka, PowerBI, Teradata, Amazon S3, Amazon Redshift, AWS Glue, AWS Lambda.
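
A minimal sketch of the kind of Athena analysis referenced in this project, submitted through boto3. The database, table, and S3 result location are hypothetical placeholders.

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Submit the query; Athena writes result files to the given S3 output location.
response = athena.start_query_execution(
    QueryString="SELECT order_date, COUNT(*) AS orders FROM cleansed_orders GROUP BY order_date",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

# Print the result rows (the first row returned is the header).
if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])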

Client: Conduent Business Services, Florham Park, NJ Nov 2018 – Mar 2020
AWS Data Engineer
· Designed and implemented end-to-end data pipelines, ingesting data from various sources into
Amazon S3 and transforming it for analytics using AWS Glue.
· Designed, developed, and maintained scalable, reliable data solutions on AWS.
· Worked on migration of data from on-premises SQL Server to cloud databases (Azure Synapse Analytics (DW) and Azure SQL DB).
· Exported aggregated and analyzed data to AWS S3 bucket for storage and further analysis.
· Created Linked Services for multiple source systems (e.g., Azure SQL Server, ADLS, Blob storage, REST API).
· Created pipelines to extract data from on-premises source systems to Azure Data Lake Storage; worked extensively on Copy activities and implemented copy behaviors such as Flatten hierarchy, Preserve hierarchy, and Merge hierarchy.
· Implemented error handling through the Copy activity. Used Azure Functions to respond to database changes.
· Leveraged AWS Lambda for serverless data processing solutions (see the Lambda sketch following this project).
· Configured and implemented Azure Data Factory triggers and scheduled the pipelines; monitored the scheduled pipelines and configured alerts to get notified of pipeline failures.
· Optimized data pipelines, data warehousing, and processing systems on AWS, adhering to best
practices.
· Created and maintained optimal data pipeline architecture in Microsoft Azure using Data Factory and Azure Databricks.
· Conducted tuning and optimization on Glue and Lambda jobs for peak performance.
· Developed, tested, and deployed Azure Logic Apps to automate data processing tasks supporting data analysis and reporting, including email notifications when a pipeline fails.
· Implemented a generic, highly available ETL framework using Spark to bring related data from various sources into Hadoop and Cassandra.
· Utilized AWS Glue for ETL processes, ensuring efficient data movement and transformation.
· Used Azure SQL extensively for database needs in various applications.
· Created multiple dashboards in PowerBI for multiple business needs.
· Created different types of triggers to automate pipelines in ADF.

Environment: Amazon S3, Amazon Redshift, AWS Glue, AWS Lambda, Spark, Azure DevOps, Azure Synapse Analytics, Azure SQL, ADF, Azure Databricks, Azure Data Lake, HDFS, Azure Logic Apps, Hive, Cassandra, Azure Functions, Python, Kafka, PowerBI, Teradata.
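
A minimal sketch of the serverless processing pattern mentioned in this project: an AWS Lambda handler triggered by S3 object-created events. The bucket layout, the "processed/" prefix, and the cleansing rule are assumptions for illustration only.

import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Read each newly landed JSON object, drop rows without an id, and
    write the cleaned payload back under a processed/ prefix."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Fetch the object that triggered the event.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        payload = json.loads(body)

        # Minimal cleansing step: keep only rows that carry an identifier.
        cleaned = [row for row in payload if row.get("id") is not None]

        # Persist the cleaned payload for downstream loading.
        s3.put_object(
            Bucket=bucket,
            Key=f"processed/{key}",
            Body=json.dumps(cleaned).encode("utf-8"),
        )

    return {"status": "ok", "records": len(records)}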

Client: HP Inc, India. Jan 2016 – Oct 2018


Azure Data Engineer
· Created pipelines in ADF using Linked Services to extract, transform, and load data from different sources such as Azure SQL, Blob storage, and Azure SQL Data Warehouse.
· Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation across multiple file formats, analyzing the data to uncover insights into customer usage patterns (see the sketch following this project).
· Developed mapping document to map columns from source to target.
· Performed ETL using Azure Databricks. Migrated on-premises Oracle ETL processes to Azure Synapse Analytics.
· Used Stored Procedure, Lookup, Execute Pipeline, Data Flow, Copy Data, and Azure Function activities in ADF.
· Worked on creating star schemas for drilling into data. Created PySpark procedures, functions, and packages to load data.
· Extensively worked on relational databases such as SQL Server, Oracle and MySQL.
· Built data pipelines using Azure services such as Data Factory to load data from a legacy SQL Server to Azure SQL Database.
· Built and developed the data strategy and roadmap for the analytics stream.
· Implemented automation for Azure services using Azure Runbooks and Azure Logic Apps.
· Orchestrated data integration pipelines in ADF using various activities such as Get Metadata, Lookup, ForEach, Wait, Execute Pipeline, Set Variable, Filter, Until, etc.

Environment: Azure Data Factory, Azure Databricks, Azure SQL DB, Azure SQL DW, Azure Data Lake, SQL Server, Oracle, SSMS, SQL Developer, Blob storage.
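
A minimal sketch of the kind of PySpark extraction and aggregation described in this project: reading usage events from two file formats, unifying them, and aggregating daily counts per customer. The ADLS paths, containers, and column names are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("usage-aggregation").getOrCreate()

# Read usage events from CSV and Parquet sources (paths are placeholders).
csv_events = spark.read.option("header", "true").csv(
    "abfss://raw@examplelake.dfs.core.windows.net/usage_csv/"
)
parquet_events = spark.read.parquet(
    "abfss://raw@examplelake.dfs.core.windows.net/usage_parquet/"
)

# Align the schemas and union the two sources.
events = csv_events.select("customer_id", "event_type", "event_ts").unionByName(
    parquet_events.select("customer_id", "event_type", "event_ts")
)

# Aggregate into daily usage counts per customer to surface usage patterns.
daily_usage = (
    events.withColumn("event_date", F.to_date("event_ts"))
    .groupBy("customer_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)

# Register for Spark SQL queries and persist for downstream reporting.
daily_usage.createOrReplaceTempView("daily_usage")
daily_usage.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/daily_usage/"
)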

Client: HP Inc, India. Jan 2011 – Dec 2015


Oracle Data Integrator (ELT) Developer
· Worked in Extraction, Transformation and Loading of data from multiple sources such as Oracle,
MS SQL Server, Flat Files and XML Files into ODS (Operational Data Store) and EDW (Enterprise
Data Warehouse) systems.
· Implemented the Change Data Capture (CDC) feature of ODI to minimize the data load times.
· Created ODI Packages, Jobs of various complexities and automated process data flow.
· Handled various ODI Integration types of Control Append, Incremental Update and Slowly
Changing Dimension.
· Implemented mappings in ODI 12c using Aggregate, Expression, Filter, Join, Lookup and Split
components and Implemented ODI 12c mapping to load multiple target tables.
· Worked with package toolbox tools such as OdiBeep, OdiSendMail, OdiWaitForChildSession, OdiOSCommand, OdiSftpGet, and OdiSftpPut.
· Handled SCD Type-1, Type-2, and Type-3 in ODI.
· Designed interfaces to load data from flat files and CSV files into the staging area (Oracle) and then into the Oracle data warehouse.
· Expertise in creating stored procedures, functions, packages, triggers, collections, and bulk
collections using PL/SQL.
· Extensive experience in Linux/Unix shell scripting development tasks.
· Created shell scripts to search for files within different time ranges and send emails with the files attached to business users.
· Configured email alerts in ODI for both success and failure scenarios to notify end users.
· Participated in daily stand-up meetings to give updates on project tasks.
· Worked with ODI variable types: Declare Variable, Refresh Variable, Set Variable, and Evaluate Variable.
· Automated the decryption of GPG-encrypted files and their loading into the target database.
· Created load plans to run the scenarios in a hierarchy of sequential and parallel steps.
· Involved in UAT with the business and reporting teams to make sure the ODI processes populate correct data.
· Worked extensively on Cursors and Ref Cursors in PL/SQL.
· Developed PL/SQL Scripts (Views, Procedures, Cursors, and error handling) in Oracle.
· Created DB-Links to fetch records from remote DB to local DB.
· Wrote complex SQL queries by using joins, sub queries and correlated sub queries.
· Created PL/SQL tables, %TYPE, %ROWTYPE and PL/SQL records.

Environment: ODI 11g/12c, Oracle 11g/12c, SQL Server 2008 R2/2012, Toad, SSMS, Putty, WinSCP
and HPSM ticketing Tool, Version Control SVN.
Education Details:
· Bachelor of Computer Science, Osmania University, 2004.
