AI/ML Security Engineer for Service Support of AI/ML Integration Solutions
Position Overview:
Phoenix ICT Solutions is seeking a highly skilled AI/ML Security Engineer to support the integration of Artificial Intelligence (AI) and Machine Learning (ML) solutions. The selected engineer will be responsible for ensuring the security, compliance, and performance of AI/ML systems across their lifecycle, securing solutions during development, deployment, and post-deployment, with a focus on data privacy, regulatory compliance, and robust security protocols throughout the integration process.
Key Responsibilities:
AI/ML Security Design & Development:
Work closely with AI/ML developers and architects to design and implement security measures at each stage of the AI/ML solution lifecycle.
Ensure that AI/ML systems are secure by design, incorporating strong data protection protocols, model security, and secure deployment strategies.
Develop and enforce security standards and best practices in AI/ML models, including access control, encryption, and privacy measures.
Model Security & Privacy Compliance:
Implement security strategies for AI/ML model development and deployment, including secure data storage, secure training pipelines, and access controls to prevent unauthorised model manipulation.
Ensure AI/ML models comply with relevant privacy laws and regulations (e.g., GDPR and CCPA) throughout their lifecycle.
Establish frameworks for ensuring the confidentiality and integrity of sensitive data used in model training and inference.
Perform regular security audits to ensure compliance with internal and external security standards.
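For illustration only (the posting does not prescribe tooling): one concrete control against the unauthorised model manipulation mentioned above is signing serialized model artifacts and verifying the signature before loading. A minimal sketch in Python, using HMAC-SHA256 from the standard library, with a placeholder byte string standing in for real model weights:

```python
import hashlib
import hmac
import os

def sign_model(model_bytes: bytes, key: bytes) -> str:
    """Compute an HMAC-SHA256 tag over a serialized model artifact."""
    return hmac.new(key, model_bytes, hashlib.sha256).hexdigest()

def verify_model(model_bytes: bytes, key: bytes, expected_tag: str) -> bool:
    """Constant-time check that the artifact has not been tampered with."""
    actual = sign_model(model_bytes, key)
    return hmac.compare_digest(actual, expected_tag)

# Sign at training time, verify before loading at inference time.
key = os.urandom(32)                    # in practice, from a secrets manager
artifact = b"serialized-model-weights"  # placeholder for real model bytes
tag = sign_model(artifact, key)

assert verify_model(artifact, key, tag)             # untampered: accepted
assert not verify_model(artifact + b"!", key, tag)  # tampered: rejected
```

In a real pipeline the tag would be stored alongside the artifact in the model registry and the key held in a secrets manager, so a modified model file fails verification before it can be served.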
Threat Detection & Vulnerability Management:
Conduct threat modelling to identify potential security risks to AI/ML systems and propose mitigation strategies.
Perform regular penetration testing and vulnerability assessments on AI/ML models, APIs, and infrastructures to identify and remediate potential security flaws.
Stay informed on emerging security threats specific to AI/ML technologies, including adversarial machine learning and model inversion attacks.
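To illustrate the adversarial machine learning threats named above (tooling and model are hypothetical, not specified in the posting): the Fast Gradient Sign Method perturbs an input slightly in the direction that increases a model's loss, which can flip its decision. A minimal NumPy sketch against a toy logistic-regression scorer:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method against a logistic-regression scorer.

    Moves input x by eps in the sign of the loss gradient, showing how a
    small, targeted change to the input can flip the model's decision.
    """
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # d(log-loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy model: weights chosen so x is confidently classified as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, -1.0])
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=2.0)
score_clean = x @ w + b    # positive -> class 1
score_adv = x_adv @ w + b  # perturbation pushes the score negative
```

Defences in this space include adversarial training and input sanitisation; the point of the sketch is the attack surface, not a recommended configuration.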
ICT Accreditation & Compliance:
Work closely with the ICT team to ensure AI/ML solutions meet the necessary compliance and accreditation standards, satisfying internal ICT policies and passing security tests and external audits.
Assist in facilitating the ICT accreditation process for AI/ML systems, ensuring that all necessary documentation, tests, and certifications are obtained.
Ensure adherence to security frameworks and standards such as ISO/IEC 27001 and the NIST Cybersecurity Framework, along with others relevant to AI/ML deployment.
Security Incident Response & Monitoring:
Develop and implement real-time monitoring solutions for AI/ML systems to detect and respond to security incidents swiftly.
Establish protocols for responding to security breaches or incidents, ensuring prompt identification, containment, and resolution.
Continuously monitor AI/ML systems for anomalies and suspicious activity, leveraging advanced security tools to identify potential threats before they escalate.
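As a sketch of the anomaly monitoring described above (the class and thresholds are illustrative assumptions, not a prescribed tool): a rolling z-score check can flag inputs that deviate sharply from a model's recent baseline, a minimal stand-in for production drift and abuse detection:

```python
import math
from collections import deque

class DriftMonitor:
    """Flags feature values that deviate sharply from a rolling baseline,
    a minimal stand-in for production drift/anomaly detection."""

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if the value is anomalous vs. the current baseline."""
        if len(self.window) >= 10:  # require a minimal baseline first
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9
            if abs(value - mean) / std > self.threshold:
                return True         # anomalous: do not pollute the baseline
        self.window.append(value)
        return False

monitor = DriftMonitor()
for v in [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 1.05]:
    monitor.observe(v)              # build a stable baseline
alarm = monitor.observe(50.0)       # far outside baseline -> flagged
```

In production this sits behind each model endpoint, emitting alerts to the incident-response process rather than returning a boolean.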
Post-Deployment Support & Maintenance:
Provide ongoing security support for AI/ML solutions after deployment, including patch management, updates, and troubleshooting for security issues.
Ensure continuous improvement of AI/ML security protocols based on real-world use and emerging security challenges.
Collaborate with business stakeholders to assess the operational security needs of deployed AI/ML systems and respond to evolving security requirements.
Collaboration & Stakeholder Engagement:
Work closely with business stakeholders, project managers, and AI/ML development teams to ensure that security is integrated into all aspects of the AI/ML solution lifecycle.
Act as a liaison between the security and AI/ML teams, educating stakeholders on the importance of security considerations throughout the integration process.
Provide security training and best practice recommendations to the broader team.
Documentation & Reporting:
Maintain thorough documentation of security procedures, risk assessments, compliance activities, and security incidents.
Provide regular security reports to senior management, highlighting the security posture of AI/ML solutions, outstanding risks, and the mitigations applied.
Ensure that all security activities, findings, and outcomes are properly documented to meet internal auditing and regulatory requirements.
Qualifications:
Education: Bachelor's degree in Computer Science, Cybersecurity, Engineering, or a related field. Advanced degrees (Master's or PhD) in relevant areas are a plus.
Certifications: Security certifications such as CISSP, CISM, CISA, or other relevant certifications in AI/ML security are highly desirable.
Experience:
Proven experience in AI/ML security, machine learning, and data privacy.
Hands-on experience securing machine learning models, including defences against adversarial machine learning attacks and model-protection techniques.
Experience with cloud environments (AWS, Azure, GCP) and securing AI/ML workloads on these platforms.
Familiarity with ICT compliance standards and regulations (e.g., NIST, ISO/IEC 27001, GDPR).
Experience with DevSecOps practices and integrating security into CI/CD pipelines for AI/ML development and deployment.
Strong knowledge of machine learning frameworks (TensorFlow, PyTorch, etc.) and their security implications.
Skills:
Deep understanding of AI/ML algorithms and models and their security vulnerabilities.
Expertise in network and system security, including encryption, access controls, and authentication.
Strong problem-solving skills and the ability to work under pressure to mitigate security risks in real time.
Excellent communication skills, with the ability to explain complex security concepts to non-technical stakeholders.
Ability to collaborate effectively across teams, including AI/ML developers, infrastructure teams, and compliance officers.
Preferred Skills:
Experience with adversarial attacks and defences in AI/ML models.
Knowledge of ethical AI/ML practices and their implementation in secure environments.
Familiarity with AI/ML deployment on edge devices or in highly regulated environments.
Expertise in security tools and platforms for monitoring AI/ML systems.
Work Environment:
Full-time position with the possibility of remote work depending on the nature of the project.
Occasional travel to client sites or Phoenix ICT Solutions office for meetings and assessments may be required.
Ability to work in a fast-paced, dynamic environment with tight deadlines and high security standards.
Application Process: Interested candidates should submit a resume and a cover letter outlining their experience in AI/ML security and compliance and the relevant skills they bring to the position.