Summary:
This Request for Comments (RFC) proposes a methodology/framework for establishing and assessing a 'Trust Chain' for the AI supply chain. To reason about the entire chain, it is proposed to break it down into four elements that together make up the supply chain / Trust Chain:
Data
Infrastructure
Application
Model
The aim would be to create 'SLSA'-style levels of assurance for the four components, which together would form the Trust Chain. For example, below are some initial thoughts on how the levels could look (not yet fully worked out):
Data: Level 0 - a 'Data bill of materials' is present
Infrastructure: Level 1 could be covered by a form of certification (e.g. ISO 27001), while Level 3 could require model-specific mitigations, for example protection against side-channel attacks on GPUs
Application: would be covered by the existing SLSA levels
Model: Level 0 - signed model; Level 1 - signed model + model lineage
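To make the "Level 0 signed model" idea concrete, here is a minimal, hypothetical sketch of the digest-and-verify pattern such a level might require. This is illustrative only: the function names and the shared-secret HMAC scheme are my assumptions, and a real deployment would use a public-key signing tool such as Sigstore/cosign rather than a shared key.

```python
# Illustrative sketch only (NOT the RFC's prescribed mechanism): signing a
# model artifact by computing a digest of its bytes and a detached signature
# over that digest, then verifying it before use.
import hashlib
import hmac

def sign_model(model_bytes: bytes, key: bytes) -> str:
    """Produce a detached signature over the model's SHA-256 digest."""
    digest = hashlib.sha256(model_bytes).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_model(model_bytes: bytes, key: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_model(model_bytes, key), signature)

# Stand-in for serialized model weights; any byte string works here.
weights = b"example-model-weights"
key = b"example-signing-key"  # hypothetical key, for illustration only
sig = sign_model(weights, key)
assert verify_model(weights, key, sig)
assert not verify_model(weights + b"tampered", key, sig)
```

A Level 1 check would additionally verify lineage metadata (e.g. training data and base-model references) attached alongside the signature.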
Each level could score 5 points, and the aim would be to have 4 levels.
A 'Trust Chain' score for a model could then be computed; for the example above: Data L0 = 5 + Infra L3 = 20 + Application L2 = 15 + Model L1 = 10,
giving a Trust Chain score of 50 out of a possible 80. Note: example scoring values are still to be determined.
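The example arithmetic above can be sketched as a small scoring function. The per-level point values follow the worked example (Level 0 earns 5 points, each level above adds 5, so Level 3 earns 20), but as the RFC notes, the actual scoring values are still to be determined; names and structure here are assumptions for illustration.

```python
# Hypothetical Trust Chain scoring sketch, assuming (level + 1) * 5 points
# per component and levels 0-3, matching the RFC's worked example.
POINTS_PER_LEVEL = 5
MAX_LEVEL = 3

def component_score(level: int) -> int:
    """Score one component: Level 0 earns 5 points, Level 3 earns 20."""
    if not 0 <= level <= MAX_LEVEL:
        raise ValueError(f"level must be between 0 and {MAX_LEVEL}")
    return (level + 1) * POINTS_PER_LEVEL

def trust_chain_score(levels: dict) -> tuple:
    """Return (score, maximum possible) across all supply chain components."""
    score = sum(component_score(lvl) for lvl in levels.values())
    maximum = len(levels) * component_score(MAX_LEVEL)
    return score, maximum

# Worked example from the RFC: Data L0 + Infra L3 + Application L2 + Model L1
levels = {"data": 0, "infrastructure": 3, "application": 2, "model": 1}
print(trust_chain_score(levels))  # (50, 80)
```

This reproduces the 50-out-of-80 example: 5 + 20 + 15 + 10 = 50, against a maximum of four components at Level 3 (4 × 20 = 80).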
Scope
As the methodology/framework does not yet exist in complete form, a phased approach is proposed. Phase 1 would focus on establishing criteria for 'SLSA'-style levels for the model, which we can then propose as part of the paper.
Priority:
P1 - Prerequisites would need to be met in the form of defining common terms, which will be crucial for the workstream's progress as they will be repeatedly used and referenced.
Level of Effort:
Medium - the definition of levels may be more complex than assumed.
Definition Agreement: Moderate - Reaching a consensus on precise definitions may require more discussion and refinement.
Drawbacks: Yet another framework for the industry to adopt.
Initial Process: The process of defining the levels may be perceived as tedious, especially in the initial stages, and will probably require a few iterations.
Adoption Time: For the entire chain this could be a lengthy process, whereas the first phase, around model levels, could be established and adopted in the nearer term.
Alternatives: None currently exist
Reference Material & Prior Art
https://slsa.dev
Hi, reading through the various RFCs, it seems this has some overlap with #4. This could possibly be an umbrella project, with "Signing ML Artifacts: Building towards tamper-proof ML metadata records" as a sub-project.
Security was intentionally out of scope and could be incorporated here in a similar class/level structure.
12 Out of Scope
The MOF is not designed to solve all issues related to AI and openness and relies heavily on the community to be transparent and honest in the reporting of the components they release and the licenses applied to each.
The MOF does not intend to address any of the following as they are best addressed through alternative methods, other industry activities or the courts:
- Bias and fairness
- AI safety
- Trustworthiness
- Performance testing
- Red-teaming
- Security and privacy
- Components related to model serving
- Model provenance
Authors:
John Stone