Experience:
8+ years of hands-on programming and software development.
Proven ability to design and build automation frameworks.
Familiarity with observability tools and microservices architecture.
The successful applicant will use a broad range of tools, languages, and frameworks.
We encourage you to apply if you have strong experience with several of the skills below, even if you do not know all of them.
Python and scripting: Strong hands-on development in Python plus pragmatic shell scripting in Linux environments.
AWS data stack: Commercial experience with AWS Glue, Spark/PySpark and S3 for large-scale data processing (see the PySpark sketch after this list).
Orchestration: Building, scheduling and operating pipelines with Airflow, including DAG design, retries and SLAs (see the Airflow sketch after this list).
SQL and RDBMS: Solid SQL for data transformation and analysis, with exposure to Teradata or Oracle.
CI/CD, shift-left testing and DevSecOps: Unit, integration and contract tests embedded in pipelines, using Git-based workflows and common tools such as TeamCity, GitHub Actions, Jenkins or Octopus (see the pytest sketch after this list).
Observability: Practical use of logging, metrics and tracing with tools like CloudWatch and Splunk to monitor production health (see the CloudWatch sketch after this list).
Cloud-native engineering: Designing for scalability, reliability and cost on AWS, following security and governance standards.
AI skills: Effective use of AI coding assistants and test-generation tools (e.g., GitHub Copilot, Roo Code) to accelerate development while maintaining quality.
Ab Initio or SAS: Prior experience integrating or migrating legacy ETL workloads.
Data warehousing: Knowledge of Redshift, Athena, EMR, or open table formats such as Iceberg.
API and microservices: Experience testing and integrating with RESTful services and event streams.
Test automation frameworks: Familiarity with Playwright, DevTest, Appium, Sahi or similar, plus contract testing.
Dashboards and reporting: Building engineering or quality dashboards for delivery and production health.
Team leadership: Mentoring engineers and uplifting standards across squads.
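
To give a flavour of the day-to-day work, a few short sketches follow. First, the AWS data stack item: a minimal PySpark S3-to-S3 transformation of the kind the role involves. The bucket paths and column names are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    # Hypothetical job: read raw order data from S3, aggregate, write back.
    spark = SparkSession.builder.appName("example-s3-job").getOrCreate()

    orders = spark.read.parquet("s3://example-bucket/raw/orders/")

    daily_totals = (
        orders
        .where(F.col("status") == "COMPLETE")   # keep completed orders only
        .groupBy("order_date")
        .agg(F.sum("amount").alias("total_amount"))
    )

    daily_totals.write.mode("overwrite").parquet(
        "s3://example-bucket/curated/daily_totals/"
    )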
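Second, the orchestration item: a minimal Airflow 2.x DAG showing the retries and SLA settings mentioned above. The DAG name, schedule and thresholds are invented for illustration.

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    default_args = {
        "retries": 2,                         # re-run a failed task up to twice
        "retry_delay": timedelta(minutes=5),  # pause between attempts
        "sla": timedelta(hours=1),            # flag task runs exceeding an hour
    }

    def extract():
        ...  # e.g. pull a day's files from S3

    def transform():
        ...  # e.g. run the PySpark job sketched above

    with DAG(
        dag_id="example_daily_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args=default_args,
    ):
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task   # run extract before transform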
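Third, shift-left testing: tests like the following run on every commit inside the CI pipeline, rather than after delivery. The function under test is invented for illustration; in practice it would be imported from the pipeline's codebase.

    import pytest

    def normalise_amount(raw: str) -> float:
        # Hypothetical transformation: strip currency formatting.
        return round(float(raw.replace("$", "").replace(",", "")), 2)

    @pytest.mark.parametrize(
        ("raw", "expected"),
        [("$1,234.50", 1234.5), ("99", 99.0), ("$0.10", 0.1)],
    )
    def test_normalise_amount(raw, expected):
        assert normalise_amount(raw) == expected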
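Finally, observability: publishing a custom metric to CloudWatch with boto3 so production health is visible on a dashboard. The namespace, metric and dimension names are hypothetical.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Record how many rows a pipeline run processed.
    cloudwatch.put_metric_data(
        Namespace="ExamplePipeline",
        MetricData=[{
            "MetricName": "RowsProcessed",
            "Dimensions": [{"Name": "Job", "Value": "daily_totals"}],
            "Value": 125000,
            "Unit": "Count",
        }],
    )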