My client is a leading Melbourne-based company seeking a DevOps Engineer for an initial 6-month contract.

Role Overview
The role involves enhancing automation and improving the functionality of the company website while ensuring security, performance, and scalability.

General Skills & Expertise
- AWS: API Gateway, WAF, CloudFront, S3, Lambda, Step Functions, DynamoDB, SQS, VPC, Glue, DMS, RDS, Aurora
- APIs: experience building infrastructure for both public and internal RESTful APIs
- Event-Driven Architecture: Pub/Sub, EventBridge
- Development: Python, JavaScript/TypeScript, SQL
- Infrastructure as Code: CloudFormation, AWS SAM, CDK
- CI/CD: Jenkins, GitLab

Key Focus Areas: AWS Data Services Expertise

AWS DMS
- Configuring migration tasks for both homogeneous and heterogeneous databases
- Understanding replication methods (full load, CDC, and full load + CDC)
- Troubleshooting performance bottlenecks and schema conversion issues

Amazon RDS
- Experience with MySQL, PostgreSQL, or Aurora
- Designing, optimizing, and scaling relational databases
- Managing automated backups, Multi-AZ deployments, and read replicas

AWS Glue
- Writing and optimizing ETL scripts using PySpark
- Managing Glue Crawlers and the Data Catalog for schema discovery
- Implementing data partitioning in S3 for improved performance

AWS S3
- Best practices for large-scale data storage and retrieval
- Configuring lifecycle policies, versioning, and data encryption
- Optimizing S3 for analytics (e.g., Parquet, CSV, ORC formats)

AWS Lambda
- Developing event-driven data processing functions (Python/Node.js)
- Implementing retries, error logging, and CloudWatch monitoring
- Integrating Lambda with S3, RDS, Glue, and Step Functions

Data Engineering & Processing

ETL Pipeline Design
- Building scalable, fault-tolerant ETL workflows
- Managing incremental data loads and CDC processes
- Transforming data using PySpark and SQL

SQL & Database Management
- Writing complex queries for data transformation and reporting
- Implementing indexing, partitioning, and query optimization strategies

Big Data Processing
- Experience with Apache Spark and Athena for large-scale data querying
- Handling real-time streaming data using Kinesis

Key Focus Areas: DevOps & Infrastructure as Code

Infrastructure as Code (IaC)
- Writing CloudFormation templates to provision AWS resources
- Managing AWS Glue, Lambda, RDS, and DMS configurations via IaC

Monitoring & Logging
- Configuring CloudWatch, New Relic, and Splunk integrations
- Setting up alerts and dashboards for data pipeline health monitoring

Security & Compliance
- Implementing role-based access controls and IAM policies
- Ensuring data security best practices across AWS services
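
To make the hands-on scope concrete, a few illustrative sketches follow; they are not part of the client's requirements. The first is a minimal Glue PySpark job of the kind described under AWS Glue and ETL Pipeline Design above: it reads a cataloged source table, applies a simple mapping, and writes partitioned Parquet to S3. The database, table, bucket, and column names are assumptions.

```python
# Minimal, illustrative Glue ETL sketch (runs inside an AWS Glue job environment).
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source via the Glue Data Catalog (database/table names are placeholders).
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Example transformation: rename/cast columns with ApplyMapping.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("region", "string", "region", "string"),
        ("amount", "string", "amount", "double"),
    ],
)

# Write partitioned Parquet back to S3 for efficient Athena/Spark querying.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={
        "path": "s3://example-analytics-bucket/curated/orders/",
        "partitionKeys": ["region"],
    },
    format="parquet",
)

job.commit()
```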
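Next, a minimal sketch of an event-driven Lambda data processing function of the kind listed under AWS Lambda above, assuming an S3 object-created trigger and JSON payloads. Log output goes to CloudWatch Logs automatically; retries and dead-letter queues would be configured on the function or event source rather than in code.

```python
# Illustrative Lambda handler for S3 object-created events (names/payloads assumed).
import json
import logging

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3 = boto3.client("s3")


def handler(event, context):
    processed = 0
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        try:
            obj = s3.get_object(Bucket=bucket, Key=key)
            rows = json.loads(obj["Body"].read())
            logger.info("Processed %d rows from s3://%s/%s", len(rows), bucket, key)
            processed += 1
        except Exception:
            # Log the stack trace to CloudWatch; re-raising lets Lambda's
            # retry / dead-letter behaviour handle the failed invocation.
            logger.exception("Failed to process s3://%s/%s", bucket, key)
            raise
    return {"processed": processed}
```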
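Finally, a sketch of the Infrastructure as Code focus area using the AWS CDK (v2, Python), one of the IaC tools named above: it provisions an encrypted, versioned S3 bucket with a lifecycle rule and a Lambda function triggered by new objects. Construct IDs, the asset path, and the runtime choice are illustrative assumptions, not the client's actual stack.

```python
# Illustrative CDK app: versioned/encrypted bucket + event-driven Lambda.
from aws_cdk import App, Duration, Stack
from aws_cdk import aws_lambda as _lambda
from aws_cdk import aws_s3 as s3
from aws_cdk import aws_s3_notifications as s3n
from constructs import Construct


class DataPipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Versioned, encrypted bucket with a lifecycle transition to Glacier.
        bucket = s3.Bucket(
            self,
            "RawDataBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            lifecycle_rules=[
                s3.LifecycleRule(
                    transitions=[
                        s3.Transition(
                            storage_class=s3.StorageClass.GLACIER,
                            transition_after=Duration.days(90),
                        )
                    ]
                )
            ],
        )

        # Event-driven processing function (handler code assumed to live in ./lambda).
        processor = _lambda.Function(
            self,
            "ObjectProcessor",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),
            timeout=Duration.seconds(30),
        )

        # Invoke the function on new objects; grant adds least-privilege read access.
        bucket.add_event_notification(s3.EventType.OBJECT_CREATED, s3n.LambdaDestination(processor))
        bucket.grant_read(processor)


app = App()
DataPipelineStack(app, "DataPipelineStack")
app.synth()
```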