Job details

- Remote job
- 6-month contract
- Salary: based on experience
- 8-5 schedule

POSITION SKILLS & REQUIREMENTS

- Experience in database architectures and data pipeline development
- Experience loading and extracting data using AWS Glue and SQL (DDL and/or DML) commands
- Working knowledge of AWS data technologies
- Ability to handle unstructured and semi-structured data in a data lake environment
- Ability to work in an Agile environment
- Working knowledge of software development tools and methodologies
- AWS Certification (required within 6 months of hire)

RESPONSIBILITIES

- Under the supervision of Big Data Consultants and Architects, work with multiple clients simultaneously to implement enterprise-wide scalable operations on AWS
- Build ETL code following defined requirements and mapping documents
- Build queries using MongoDB, Oracle, SQL Server, MariaDB, MySQL, Redshift, or Athena
- Develop code using Python, Scala, and/or PySpark
- Work with technologies such as Spark, Hadoop, and/or Kafka
- Implement assigned roadmap activities; accurately record time spent and work completed in Jira and Mavenlink
- Update Jira with detailed descriptions of work performed and participate in daily scrum meetings
- Collaborate with other data engineers to exchange technical ideas
- Establish credibility and build impactful relationships with our customers to help them become cloud advocates
- Comply with AWS industry best practices and clients' standards
- Configure AWS services to support ETL, data warehousing, and potentially machine learning
- Write infrastructure-as-code scripts in CDK or Terraform to make service deployment more efficient and consistent

**Job Type**: Contract

**Experience**:

- Big data engineer: 3 years (required)
- Database architectures: 3 years (required)
- AWS data technologies: 3 years (required)

**Language**:

- English (required)

**License/Certification**:

- AWS Certification (required)