Come and join a thriving company and become part of a diverse global collective of free-thinkers, entrepreneurs, and industry experts, all driven to use technology to reimagine what’s possible.
Let’s talk about the role and responsibilities
As a Data Engineer, you will be responsible for designing and implementing scalable data pipelines using Databricks, Apache Spark, and Azure Data Factory. Your role involves ensuring seamless data integration, transformation, and optimization across various systems. You will collaborate with cross-functional teams to deliver high-quality data governance, compliance, and reporting solutions, while troubleshooting issues and maintaining performance across the data infrastructure in the Financial Services domain.
Key Responsibilities:
1. Design and implement end-to-end data pipelines using Databricks and Apache Spark
2. Develop and manage data workflows using Azure Data Factory, ensuring efficient data movement and transformation across systems
3. Extract, transform, and load data from sources such as Oracle, SQL Server 2014, flat files, and APIs
4. Develop ETL packages using SSIS and create visualizations and reports via SSRS
5. Collaborate with stakeholders to gather and understand data requirements
6. Optimize data processing for performance, scalability, and reliability
7. Ensure proper documentation, data governance, and compliance with industry standards
8. Troubleshoot and resolve issues related to data integration, transformation, and delivery
9. Participate in code reviews and contribute to best practices within the data engineering team
Mandatory Skills & Technologies:
1. Strong hands-on experience with Databricks
2. Proficiency in Apache Spark for distributed data processing
3. In-depth experience with Azure Data Factory for orchestration and automation
4. Solid working knowledge of Oracle and SQL Server 2014 (querying, tuning, stored procedures)
5. Experience in building and maintaining SSIS packages for ETL operations
6. Experience using SSRS for creating reports and dashboards
Preferred Skills:
1. Working knowledge of Azure Cosmos DB (a NoSQL database)
2. Domain experience in Financial Services, Banking, or Insurance data ecosystems
3. Familiarity with CI/CD for data pipelines and version control tools (e.g., Git)
4. Understanding of data security practices and compliance standards (e.g., GDPR, SOC 2)
About Capgemini
At Capgemini, we are more than just a business. We are a diverse global collective of strategic and technological experts passionate about leveraging technology to help our clients, our people, and our communities shape the future. Our commitment to diversity and inclusion is reflected in recognitions such as the Australian Workplace Equality Index awards.
We foster a safe, flexible, and inclusive culture where everyone can bring their authentic selves to work. Our staff-led community groups exemplify our dedication to inclusion. We are also committed to sustainability, aiming to be carbon neutral by 2025 and a net-zero business by 2040.
Empower yourself with access to premier learning platforms, certifications, and development opportunities; we encourage at least 40 hours of training annually.
Our values—honesty, boldness, trust, freedom, team spirit, modesty, and fun—guide us and have earned us recognition as one of the World’s Most Ethical Companies for 12 consecutive years.
Our Commitment to Diversity & Inclusion
If you don’t meet every requirement, we still encourage you to apply. We value diverse experiences and perspectives. If you need support during the application process due to a disability, gender diversity, or neurodivergence, contact us at talentacquisitionaunz@capgemini.com.
Information Security and Compliance
Capgemini Australia maintains management systems compliant with ISO 9001, ISO 27001, and ISO 14001. We are committed to delivering secure solutions through state-of-the-art processes and continual improvement, aligning with industry best practices and regulatory requirements.