
Workstream 1 RFC: Establishing Trust Chain levels for complete AI supply chain #5

sudo-biscuit opened this issue Nov 18, 2024 · 4 comments

Authors:
John Stone

Summary:
This Request for Comments (RFC) proposes a methodology/framework for establishing and assessing a 'Trust Chain' across the AI supply chain. To reason about the entire chain, it is proposed to break it into four elements that together make up the supply chain / Trust Chain:

  • Data
  • Infrastructure
  • Application
  • Model

The aim would be to create 'SLSA'-style levels of assurance for the four components, which together would form the Trust Chain. The examples below are early thoughts on how the levels could look and are not yet fully worked out:

  • Data: Level 0 requires a 'Data bill of materials' to be present
  • Infrastructure: Level 1 could be covered by a form of certification such as ISO 27001, while a Level 3 could require model-specific mitigations, for example protection against side-channel attacks on GPUs
  • Application: would be covered by the existing SLSA levels
  • Model: Level 0 requires a signed model; Level 1 requires a signed model plus model lineage

Each level could be worth 5 points, and the aim would be to have four levels (L0 to L3) per component. A 'Trust Chain' score for a model could then be derived; using the example above: Data L0 = 5, Infrastructure L3 = 20, Application L2 = 15, Model L1 = 10, giving a Trust Chain score of 50 out of a possible 80 (a sketch of this calculation follows below). Note: the example scoring values are still to be determined.
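To make the arithmetic concrete, here is a minimal sketch of how such a score could be computed, assuming the 5-points-per-level scheme and the four L0 to L3 levels described above. The component names and level assignments mirror the example; the function and constant names are hypothetical and purely illustrative, since the actual scoring values are still to be determined.

```python
# Illustrative sketch only: the 5-points-per-level scheme and the example
# level assignments come from the RFC text above; all names are hypothetical.

POINTS_PER_LEVEL = 5          # each level step is worth 5 points
MAX_LEVEL = 3                 # four levels per component: L0..L3
COMPONENTS = ("data", "infrastructure", "application", "model")


def level_points(level: int) -> int:
    """Points awarded for reaching a given level (L0 = 5, L1 = 10, ...)."""
    if not 0 <= level <= MAX_LEVEL:
        raise ValueError(f"level must be between 0 and {MAX_LEVEL}")
    return (level + 1) * POINTS_PER_LEVEL


def trust_chain_score(levels: dict[str, int]) -> tuple[int, int]:
    """Return (score, maximum possible score) for a full Trust Chain."""
    score = sum(level_points(levels[component]) for component in COMPONENTS)
    maximum = len(COMPONENTS) * level_points(MAX_LEVEL)
    return score, maximum


# Example from the RFC: Data L0, Infrastructure L3, Application L2, Model L1
example = {"data": 0, "infrastructure": 3, "application": 2, "model": 1}
score, maximum = trust_chain_score(example)
print(f"Trust Chain score: {score} out of a possible {maximum}")  # 50 out of 80
```

Running the sketch on the example assignments reproduces the 50-out-of-80 figure above.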

Scope:
As the methodology/framework does not yet exist in a complete form, a phased approach is proposed. Phase 1 would focus on establishing the criteria for 'SLSA'-style levels for the model component, which we can then propose as part of the paper.
Priority:
P1 - Prerequisites would need to be met in the form of defining common terms, which will be crucial for the workstream's progress as they will be repeatedly used and referenced.

Level of Effort:
Medium: defining the levels may be more complex than assumed.
Definition Agreement: Moderate - Reaching a consensus on precise definitions may require more discussion and refinement.
Drawbacks: It adds yet another framework for the industry to adopt.
Initial Process: Defining the levels may be perceived as tedious, especially during the initial stages, and will probably require a few iterations.
Adoption Time: Covering the entire chain could be a lengthy process, but the first phase around model levels could be established and adopted in the nearer term.
Alternatives: None currently exist
Reference Material & Prior Art
https://slsa.dev


Akilsrin commented Jan 2, 2025

Hi, reading through the various RFCs, it seems this has some overlap with #4. This could possibly be an umbrella project, with "Signing ML Artifacts: Building towards tamper-proof ML metadata records" as a sub-project.

@mihaimaruseac

I'd like to be involved in this, given #4 and the work done on https://github.com/sigstore/model-transparency for both model signing and SLSA.


andrewelizondo commented Mar 25, 2025

This RFC could also draw inspiration from the Model Openness Framework (MOF) proposed by the LF AI project.

https://docs.google.com/document/d/1RUNrs4flAsYsikXTPu1jWBH1BAumCyeG/edit?pli=1&tab=t.0

Security was intentionally out of scope and could be incorporated here in a similar class/level structure.

12. Out of Scope
The MOF is not designed to solve all issues related to AI and openness and relies heavily on the community to be transparent and honest in the reporting of the components they release and the licenses applied to each. 
The MOF does not intend to address any of the following as they are best addressed through alternative methods, other industry activities or the courts:
- Bias and fairness
- AI safety
- Trustworthiness
- Performance testing
- Red-teaming
- Security and privacy
- Components related to model serving
- Model provenance

@mihaimaruseac

+1 to integrating ideas from MOF
