This repository provides a reference list for the emerging term "AI Act Engineering".
The EU AI Act regulates the development, deployment, and use of AI systems within the European Union. It aims to promote trustworthy AI by mitigating risks and protecting fundamental rights.
We define "AI Act Engineering" as the set of engineering practices, processes, and methodologies needed to develop and deploy AI systems that comply with the requirements of the European Union (EU) AI Act.
Engineering for EU AI Act compliance involves the following aspects:
- Risk Assessment: Classifying AI systems by risk level (unacceptable, high, limited, minimal) as defined by the EU AI Act; a classification sketch follows this list.
- Risk Mitigation: For high-risk systems, implementing safeguards such as robust data management, human oversight mechanisms, and explainable AI techniques. Engineering under the EU AI Act also requires rigorous testing and validation to ensure that systems meet safety, accuracy, and reliability standards, which typically means implementing and documenting extensive risk assessments, mitigation strategies, and quality control measures (see the risk-register sketch below).
- Documentation and Transparency: Documenting the AI system's development process and ensuring a level of transparency appropriate to its risk level. Providers need to maintain detailed records of training data, algorithms, and processes to meet transparency requirements; this documentation is crucial for audits and for explaining AI system decisions when necessary (see the model-record sketch below).
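As a minimal illustration of the risk-assessment aspect, the Python sketch below maps a system's use case to one of the Act's four risk tiers. The use-case sets and the `classify` helper are hypothetical simplifications; a real classification must follow the Act's prohibited practices (Article 5) and high-risk use cases (Annex III).

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical, non-exhaustive use-case sets for illustration only.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
ANNEX_III_USES = {"remote_biometric_identification", "credit_scoring", "hiring"}

def classify(use_case: str, interacts_with_humans: bool = False) -> RiskTier:
    """Map a use case to a risk tier (illustrative, not legal advice)."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in ANNEX_III_USES:
        return RiskTier.HIGH
    if interacts_with_humans:
        # e.g. chatbots carry transparency obligations
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("hiring"))  # RiskTier.HIGH
```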
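For the risk-mitigation aspect, one lightweight engineering pattern is a machine-readable risk register that ties each identified hazard to its mitigation and test evidence, so the material is ready for audits. The `RiskEntry` structure and all field names below are assumptions sketched for illustration, not a format prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One identified risk, its mitigation, and the evidence that it works."""
    hazard: str
    severity: str                      # e.g. "low", "medium", "high"
    mitigation: str
    test_evidence: list[str] = field(default_factory=list)

risk_register = [
    RiskEntry(
        hazard="biased outcomes for under-represented groups",
        severity="high",
        mitigation="rebalanced training data; fairness metrics gate each release",
        test_evidence=["fairness_report_2024Q4.pdf"],
    ),
    RiskEntry(
        hazard="performance degradation on drifting input data",
        severity="medium",
        mitigation="drift monitoring with automated retraining trigger",
        test_evidence=["drift_eval_notebook.html"],
    ),
]
```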
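For the documentation and transparency aspect, a similar sketch: an audit-ready record of a model's training-data provenance, algorithm, evaluation results, and oversight arrangements that can be serialized for auditors. All names and values are hypothetical.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelRecord:
    """Audit-ready documentation of how a model was built (illustrative)."""
    model_name: str
    training_data: str      # dataset provenance
    algorithm: str
    evaluation: dict
    human_oversight: str

record = ModelRecord(
    model_name="credit-scorer-v3",
    training_data="loan applications 2018-2023, described in datasheet.md",
    algorithm="gradient-boosted decision trees",
    evaluation={"auc": 0.87, "demographic_parity_gap": 0.02},
    human_oversight="an analyst reviews every automated decline",
)
print(json.dumps(asdict(record), indent=2))
```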
Further reading:
- The AI Engineer's Guide to Surviving the EU AI Act, by Larysa Visengeriyeva
- A Machine Learning Engineer’s Guide To The AI Act
- AI safety
- AI Trust Lab: Engineering for Trustworthy AI (CMU)
- ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)
- Guidelines for secure AI system development