
AWS Data Engineer - Python/Spark (Bengaluru)



Job description

Role Overview:

We are seeking a highly skilled AWS Data Engineer to design, develop, and maintain scalable data pipelines and cloud-based data solutions. The ideal candidate has hands-on expertise in AWS cloud services, data lakes/warehouses, ETL/ELT pipelines, and big data frameworks to support analytics and business insights.

What you will do:

- Design, build, and optimize data pipelines for structured and unstructured data.
- Develop and manage ETL/ELT workflows using AWS services (Glue, Lambda, EMR, Kinesis).
- Work with data storage solutions such as S3, Redshift, DynamoDB, and RDS.
- Implement streaming and batch data processing with Spark, Kinesis, and Kafka.
- Ensure data quality, governance, and security standards across systems.
- Collaborate with data scientists, analysts, and stakeholders to enable self-service analytics and ML pipelines.
- Monitor, troubleshoot, and improve pipeline performance.
- Apply CI/CD and Infrastructure as Code practices for deployment (Terraform, CloudFormation, Jenkins).

What we expect:

- Strong programming skills in Python, PySpark, or Scala.
- Hands-on experience with SageMaker.
- Hands-on experience with AWS services:
  - Compute & Storage: S3, EC2, Lambda, EMR
  - Data Integration: Glue, Kinesis, Step Functions
  - Databases/Warehousing: Redshift, RDS, DynamoDB
- Knowledge of data modeling, schema design, data lakes, and data warehouses.
- Experience with big data frameworks (Spark, Hadoop, Kafka).
- Proficiency in SQL and database optimization.
- Knowledge of CI/CD pipelines, Git, and DevOps practices.

Good to have:

- Familiarity with Databricks or ML pipelines.
- Exposure to BI tools (QuickSight, Tableau, Power BI).
- Containerization and orchestration (Docker, Kubernetes, EKS).

(ref:hirist.tech)
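The pipeline responsibilities above (ETL workflows, data-quality filtering, aggregation for analytics) can be sketched in miniature. This is a hedged, standard-library-only illustration of the extract → transform → load shape; the record names and sample data are hypothetical, and in the role itself this pattern would typically run as a Glue/PySpark or EMR job reading from S3 and loading into Redshift.

```python
import json
from collections import Counter

# Hypothetical raw input: JSON lines, e.g. objects listed from an S3 prefix.
RAW_EVENTS = [
    '{"user": "a1", "event": "click", "ts": 1}',
    '{"user": "a1", "event": "click", "ts": 2}',
    '{"user": "b2", "event": "view",  "ts": 3}',
    'not-json',  # malformed record, to be dropped by the quality check
]

def transform(lines):
    """Parse, validate, and normalize raw records (the T in ETL)."""
    for line in lines:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # data-quality step: skip malformed input
        if {"user", "event", "ts"} <= rec.keys():
            yield rec["event"]

def load(events):
    """Aggregate into a summary table (stand-in for a warehouse load)."""
    return Counter(events)

summary = load(transform(RAW_EVENTS))
print(dict(summary))  # → {'click': 2, 'view': 1}
```

The same extract/transform/load split maps directly onto the AWS services listed: extraction from S3 or Kinesis, transformation in Glue or Spark on EMR, and loading into Redshift or DynamoDB.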


Required Skill Profession

Computer Occupations




