Data Engineer
2. Training and development to build your skills and career - at Deloitte we believe in investing in our best assets, our people! You will have access to world-class training and funding towards industry and other professional certifications
2. Flexible work arrangements - work in a way that suits you best
3. Rewards platform - your hard work won't go unnoticed at Deloitte
What will your typical day look like?
The Data Engineer will be a key member of our team. This role involves constructing robust and scalable data pipelines, exploring and implementing generative algorithms, and facilitating the seamless flow of data between computational systems. You will collaborate with a multidisciplinary team of Data Scientists and Machine Learning Engineers to meet organisational objectives and support the Engineering Lead in the execution of the GenAI SDT program of work.
You will be responsible for:
The design, development and maintenance of our platform and client solutions based on technologies including:
1. UNIX/Linux, Docker, Kubernetes, OpenShift v3
2. Ansible, Git
3. Cloud technologies and platforms such as Amazon AWS and Azure
4. Analytics and search platforms (Elasticsearch)
5. Messaging platforms (ActiveMQ, AWS SQS and SNS)
6. Java, Maven
7. API and ESB platforms (e.g. MuleSoft, Red Hat)
8. Transition to BAU plus ongoing development of client solutions
Key Accountabilities:
1. Data Architecture & Pipeline Development: Design, build, and manage advanced data solutions, including data lakes and databases.
2. Generative AI Implementation: Utilise generative algorithms to produce synthetic data, creating realistic simulations for training and validation purposes.
3. Data Governance: Ensure data quality and data governance standards are upheld through careful design and documentation. Ensure that the data pipelines and databases align with Deloitte security protocols and industry standards.
4. Team Collaboration: Partner with various stakeholders to identify business requirements and translate them into data solutions.
5. Technical Leadership: Provide technical mentorship to junior team members and advise on best practices and technologies in the data engineering domain.
6. Innovation and Research: Evaluate emerging technologies and methodologies in data engineering and Generative AI to ensure the organisation remains at the forefront of the industry.
7. AI Enablement: Build pipelines that enable AI/ML technologies, drawing on practical experience.
8. Data Governance: Apply an understanding of data quality, data lineage, and data governance principles.
9. Communication Skills: Convey complex information in an accessible manner to both technical and non-technical stakeholders.
Enough about us, let’s talk about you.
1. A minimum of 5 years' professional experience in Data Engineering, preferably with a focus on Generative AI
2. A Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field is preferable
3. Experience with platform and software development and release management in a mature managed services environment
4. Cloud technologies and platforms such as Azure or Amazon AWS
5. UNIX/Linux deployment and configuration management
6. TCP/IP networking, databases, messaging and security
7. Knowledge of and experience with DevOps tools and techniques, such as Infrastructure as Code and Immutable Infrastructure
8. Experience with CI tools such as Jenkins, Hudson, Bamboo, ThoughtWorks Go
9. Security protocols such as OAuth, SAML or OpenID Connect
10. Ability to communicate and work effectively with people with a diverse range of skills and experience