Job Opportunity
The Data Engineering Team is expanding to work at the forefront of defence technology. We are seeking experienced data engineers to design, deploy, and maintain datastores and data lakes.
Responsibilities
1. Design and Deployment: Develop robust and scalable data architectures to support various defence-related projects.
2. Data Architecture Implementation: Implement data architectures that support efficient data processing, storage, and retrieval.
3. Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and translate them into effective data solutions.
4. ETL Processes: Develop and manage ETL (Extract, Transform, Load) processes to ensure the smooth ingestion and transformation of data from various sources.
5. Performance Optimisation: Monitor and optimise the performance of datastores and data pipelines to ensure high availability and reliability.
6. Data Quality and Integrity: Ensure data quality and integrity through comprehensive data validation, cleansing, and auditing processes.
7. Security Measures: Implement data security measures and best practices to protect sensitive information.
8. Integration: Collaborate with cross-functional teams to integrate data solutions with other systems and applications.
9. Infrastructure Management: Utilise infrastructure as code tools like Terraform to manage and provision data infrastructure.
10. Professional Development: Stay updated with emerging data technologies and industry trends, advocating for their adoption where beneficial.
Requirements
1. Education: Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
2. Experience: Proven experience in data engineering, with demonstrable skills in designing, deploying, and maintaining data pipelines, datastores, and data lakes.
3. Skills: Strong SQL skills and experience with programming languages such as Python or Scala.
4. Knowledge: Strong understanding of, and hands-on experience with, a range of data architectures, including relational and non-relational databases.
5. Proficiency: Proficiency in ETL processes and tools, ensuring efficient data ingestion and transformation.
6. Cloud Experience: Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services (e.g., Redshift, BigQuery, Azure Data Lake).
7. Big Data Technologies: Familiarity with big data technologies (e.g., Hadoop, Spark) and data warehousing solutions.
8. Infrastructure Knowledge: Knowledge of infrastructure as code tools like Terraform.
9. Security Principles: Knowledge of data security principles and best practices.
10. Containerisation: Knowledge of containerisation and orchestration technologies (e.g., Docker, Kubernetes).
11. Security Clearance: Current or reinstatable AGSVA security clearance (desirable).
What We Offer
* Opportunities for Growth: Professional development, internal research and development, and guidance from a senior mentor.
* Flexible Work Arrangements: Flexible working options integrated with our employee wellbeing program.