My client is a leading Melbourne-based company seeking a DevOps Engineer for an initial 6-month contract.
Role Overview
The role involves enhancing automation and improving the functionality of the company website while ensuring security, performance, and scalability.
General Skills & Expertise
* AWS: API Gateway, WAF, CloudFront, S3, Lambda, Step Functions, DynamoDB, SQS, VPC, Glue, DMS, RDS, Aurora
* APIs: Experience in building infrastructure for both public and internal RESTful APIs
* Event-Driven Architecture: Pub/Sub, EventBridge
* Development: Python, JavaScript/TypeScript, SQL
* Infrastructure as Code: CloudFormation, AWS SAM, CDK
* CI/CD: Jenkins, GitLab
Key Focus Areas: AWS Data Services Expertise
AWS DMS
* Configuring migration tasks for both homogeneous and heterogeneous databases
* Understanding replication methods (full load, CDC, full load + CDC)
* Troubleshooting performance bottlenecks and schema conversion issues
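By way of illustration, a minimal boto3 sketch of the full-load-plus-CDC task configuration described above. All ARNs, the task identifier, and the table-mapping rule are placeholder assumptions, not values from this role:

```python
import json

import boto3

dms = boto3.client("dms")

# Placeholder ARNs -- substitute real endpoint and replication-instance ARNs.
response = dms.create_replication_task(
    ReplicationTaskIdentifier="orders-full-load-cdc",
    SourceEndpointArn="arn:aws:dms:ap-southeast-2:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:ap-southeast-2:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:ap-southeast-2:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # initial full load, then ongoing change data capture
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-orders-schema",
            "object-locator": {"schema-name": "orders", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
print(response["ReplicationTask"]["Status"])
```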
Amazon RDS
* Experience with MySQL, PostgreSQL, or Aurora
* Designing, optimizing, and scaling relational databases
* Managing automated backups, Multi-AZ deployments, and read replicas
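A short sketch of the Multi-AZ and read-replica management mentioned above, using boto3; the instance identifiers are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Enable Multi-AZ on an existing instance (applied in the next maintenance window).
rds.modify_db_instance(
    DBInstanceIdentifier="app-primary",   # hypothetical instance name
    MultiAZ=True,
    ApplyImmediately=False,
)

# Add a read replica to offload reporting queries from the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-replica-1",
    SourceDBInstanceIdentifier="app-primary",
)
```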
AWS Glue
* Writing and optimizing ETL scripts using PySpark
* Managing Glue Crawlers and Data Catalog for schema discovery
* Implementing data partitioning in S3 for improved performance
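A minimal Glue PySpark job in the shape described above: read a crawler-discovered table from the Data Catalog and write it back to S3 as Parquet, partitioned for faster scans. The database, table, and bucket names are assumptions:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table discovered by a Glue Crawler from the Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"   # hypothetical catalog entries
)

# Write back to S3 as Parquet, partitioned by year/month for pruned scans.
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={
        "path": "s3://example-curated-bucket/orders/",
        "partitionKeys": ["year", "month"],
    },
    format="parquet",
)
job.commit()
```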
AWS S3
* Best practices for large-scale data storage and retrieval
* Configuring lifecycle policies, versioning, and data encryption
* Optimizing S3 for analytics (e.g., Parquet, CSV, ORC formats)
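A sketch of the lifecycle and versioning configuration referred to above, assuming a hypothetical `example-data-lake` bucket with a `raw/` prefix:

```python
import boto3

s3 = boto3.client("s3")

# Transition raw objects to infrequent access after 30 days, expire after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-lake",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-and-expire-raw",
            "Filter": {"Prefix": "raw/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            "Expiration": {"Days": 365},
        }]
    },
)

# Enable versioning so overwritten objects remain recoverable.
s3.put_bucket_versioning(
    Bucket="example-data-lake",
    VersioningConfiguration={"Status": "Enabled"},
)
```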
AWS Lambda
* Developing event-driven data processing functions (Python/Node.js)
* Implementing retries, error logging, and CloudWatch monitoring
* Integrating Lambda with S3, RDS, Glue, and Step Functions
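A minimal event-driven handler of the kind listed above: it processes S3 event notifications, logs to CloudWatch, and re-raises failures so Lambda's retry and dead-letter configuration can take over. The transformation step is elided:

```python
import logging

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3 = boto3.client("s3")


def handler(event, context):
    """Process objects announced by S3 event notifications."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        logger.info("Processing s3://%s/%s", bucket, key)
        try:
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            # ... transform and load the payload ...
        except Exception:
            # Log to CloudWatch, then re-raise so the retry / DLQ
            # configuration on the function can take over.
            logger.exception("Failed to process s3://%s/%s", bucket, key)
            raise
    return {"status": "ok"}
```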
Key Focus Areas: Data Engineering & Processing
ETL Pipeline Design
* Building scalable, fault-tolerant ETL workflows
* Managing incremental data loads and CDC processes
* Transforming data using PySpark and SQL
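A sketch of the CDC compaction step implied above: collapse a stream of change events to the latest image per key. It assumes DMS-style landing data with `order_id`, `updated_at`, and an `Op` (I/U/D) column; paths and column names are placeholders:

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdc-compaction").getOrCreate()

# CDC events landed by DMS: one row per change, newest change wins.
changes = spark.read.parquet("s3://example-landing/orders_cdc/")  # hypothetical path

latest = (
    changes
    .withColumn(
        "rn",
        F.row_number().over(
            Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
        ),
    )
    .filter(F.col("rn") == 1)        # keep only the latest image per key
    .filter(F.col("Op") != "D")      # drop keys whose final change is a delete
    .drop("rn")
)

latest.write.mode("overwrite").parquet("s3://example-curated/orders/")
```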
SQL & Database Management
* Writing complex queries for data transformation and reporting
* Implementing indexing, partitioning, and query optimization strategies
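As one illustration of the query and indexing work above, a PostgreSQL window-function report driven from Python; the table, columns, and connection string are hypothetical:

```python
import psycopg2

# Window-function report: rank each customer's orders by recency.
REPORT_SQL = """
SELECT customer_id, order_id, total,
       ROW_NUMBER() OVER (PARTITION BY customer_id
                          ORDER BY ordered_at DESC) AS recency_rank
FROM orders
"""

# A composite index that lets the per-customer sort read straight off the index.
INDEX_SQL = """
CREATE INDEX IF NOT EXISTS idx_orders_customer_recency
    ON orders (customer_id, ordered_at DESC)
"""

with psycopg2.connect("dbname=app") as conn:   # hypothetical DSN
    with conn.cursor() as cur:
        cur.execute(INDEX_SQL)
        cur.execute(f"SELECT * FROM ({REPORT_SQL}) t WHERE recency_rank <= 3")
        for row in cur.fetchall():
            print(row)
```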
Big Data Processing
* Experience with Apache Spark and Athena for large-scale data querying
* Handling real-time streaming data using Kinesis
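A boto3 sketch of the Athena side of the work above: start a query against a partitioned table, poll for completion, then read the results. Database, table, and output bucket are assumptions:

```python
import time

import boto3

athena = boto3.client("athena")

# Query a partitioned table; names are hypothetical.
execution = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM orders WHERE year = '2024' GROUP BY status",
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:  # first row holds the column headers
        print([col.get("VarCharValue") for col in row["Data"]])
```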
Key Focus Areas: DevOps & Infrastructure as Code
Infrastructure as Code (IaC)
* Writing CloudFormation templates to provision AWS resources
* Managing AWS Glue, Lambda, RDS, and DMS configurations via IaC
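Since CDK is in the stack above and synthesizes CloudFormation, a minimal CDK (Python, v2) sketch provisioning a single processing Lambda; the construct names and asset path are placeholders:

```python
from aws_cdk import App, Duration, Stack
from aws_cdk import aws_lambda as _lambda
from constructs import Construct


class PipelineStack(Stack):
    """A stand-in for the wider pipeline stack: one processing Lambda."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        _lambda.Function(
            self, "ProcessOrders",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("src"),   # hypothetical source directory
            timeout=Duration.seconds(30),
        )


app = App()
PipelineStack(app, "PipelineStack")
app.synth()   # emits the CloudFormation template
```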
Monitoring & Logging
* Configuring CloudWatch, New Relic, and Splunk integrations
* Setting up alerts and dashboards for data pipeline health monitoring
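A sketch of the alerting described above: a CloudWatch alarm on Lambda errors that notifies an SNS topic. The function name and topic ARN are hypothetical:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the processing Lambda reports any errors in a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="process-orders-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "process-orders"}],  # hypothetical
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:ap-southeast-2:123456789012:pipeline-alerts"],
)
```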
Security & Compliance
* Implementing role-based access controls and IAM policies
* Ensuring data security best practices across AWS services
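A least-privilege sketch of the IAM work above: a managed policy granting read-only access to one prefix of a hypothetical data-lake bucket:

```python
import json

import boto3

iam = boto3.client("iam")

# Least-privilege policy: read-only access to the curated prefix only.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-data-lake",            # hypothetical bucket
            "arn:aws:s3:::example-data-lake/curated/*",
        ],
    }],
}

iam.create_policy(
    PolicyName="data-lake-curated-read",
    PolicyDocument=json.dumps(policy),
)
```

The policy can then be attached to a role scoped to the consuming service rather than to individual users.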