
internal/scorecard/alpha: add parallelism support #3434


Merged
2 commits merged on Jul 16, 2020
55 changes: 55 additions & 0 deletions changelog/fragments/scorecard-parallel.yaml
@@ -0,0 +1,55 @@
entries:
- description: >
Changes the scorecard configuration format to include a new top-level
`stages` field, which allows users to define stages of tests and to
enable parallelism within each stage.

kind: "change"

# Is this a breaking change?
breaking: true

# NOTE: ONLY USE `pull_request_override` WHEN ADDING THIS
# FILE FOR A PREVIOUSLY MERGED PULL_REQUEST!
#
# The generator auto-detects the PR number from the commit
# message in which this file was originally added.
#
# What is the pull request number (without the "#")?
# pull_request_override: 0


# Migration can be defined to automatically add a section to
# the migration guide. This is required for breaking changes.
migration:
header: Add stages to scorecard configuration file
body: |
Update your scorecard configuration file to contain a new top-level
stages field, and move your test definitions under one or more stages.

**Example**

```yaml
tests:
- image: quay.io/operator-framework/scorecard-test:dev
entrypoint:
- scorecard-test
- basic-check-spec
labels:
suite: basic
test: basic-check-spec-test
```

should be converted to

```yaml
stages:
- tests:
- image: quay.io/operator-framework/scorecard-test:dev
entrypoint:
- scorecard-test
- basic-check-spec
labels:
suite: basic
test: basic-check-spec-test
```
2 changes: 1 addition & 1 deletion internal/registry/labels.go
@@ -27,7 +27,7 @@ import (
registrybundle "github.com/operator-framework/operator-registry/pkg/lib/bundle"
log "github.com/sirupsen/logrus"

// TODO: replace `gopkg.in/yaml.v2` with `sigs.k8s.io/yaml` once operator-registry has `json` tags in the
// TODO: replace `gopkg.in/yaml.v3` with `sigs.k8s.io/yaml` once operator-registry has `json` tags in the
// annotations struct.
yaml "gopkg.in/yaml.v3"
)
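For context on the TODO above: `sigs.k8s.io/yaml` unmarshals by converting YAML to JSON and then delegating to `encoding/json`, so it only respects `json` struct tags. A minimal sketch of that behavior, using a simplified stand-in for operator-registry's annotations struct (the struct here is illustrative, not the library's actual type):

```go
package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// Annotations is a simplified stand-in for operator-registry's annotations
// struct; the real type currently carries `yaml` tags, which is why the TODO
// above blocks switching libraries.
type Annotations struct {
	// sigs.k8s.io/yaml routes through encoding/json, so the `json` tag is
	// what maps this field to the key used in annotations.yaml.
	MediaType string `json:"operators.operatorframework.io.bundle.mediatype.v1"`
}

func main() {
	data := []byte("operators.operatorframework.io.bundle.mediatype.v1: registry+v1\n")

	var a Annotations
	if err := yaml.Unmarshal(data, &a); err != nil {
		panic(err)
	}
	fmt.Println(a.MediaType) // registry+v1
}
```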
7 changes: 6 additions & 1 deletion internal/scorecard/alpha/config.go
@@ -25,6 +25,11 @@ const (
ConfigDirPath = "/tests/" + ConfigDirName + "/"
)

type Stage struct {
Parallel bool `yaml:"parallel"`
Tests []Test `yaml:"tests"`
}

type Test struct {
// Image is the name of the test image
Image string `json:"image"`
@@ -37,7 +42,7 @@ type Test struct {
// Config represents the set of test configurations which scorecard
// would run based on user input
type Config struct {
Tests []Test `yaml:"tests"`
Stages []Stage `yaml:"stages"`
}

// LoadConfig will find and return the scorecard config, the config file
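To make the new config shape concrete, here is a small self-contained sketch (not part of the diff) that loads a stages-based config with `parallel: true` using `gopkg.in/yaml.v3`. `Stage` and `Config` mirror the definitions above; `Test` is trimmed to `yaml` tags for brevity (the PR's `Test` type differs slightly):

```go
package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v3"
)

// Simplified copies of the scorecard config types for illustration.
type Test struct {
	Image      string            `yaml:"image"`
	Entrypoint []string          `yaml:"entrypoint"`
	Labels     map[string]string `yaml:"labels"`
}

type Stage struct {
	Parallel bool   `yaml:"parallel"`
	Tests    []Test `yaml:"tests"`
}

type Config struct {
	Stages []Stage `yaml:"stages"`
}

// A stage opts into parallel execution with `parallel: true`; a stage that
// omits the field keeps the bool zero value and runs sequentially.
const raw = `
stages:
- parallel: true
  tests:
  - image: quay.io/operator-framework/scorecard-test:dev
    entrypoint:
    - scorecard-test
    - basic-check-spec
    labels:
      suite: basic
      test: basic-check-spec-test
`

func main() {
	var c Config
	if err := yaml.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("stages=%d parallel=%t tests=%d\n",
		len(c.Stages), c.Stages[0].Parallel, len(c.Stages[0].Tests))
	// Output: stages=1 parallel=true tests=1
}
```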
@@ -1,15 +1,16 @@
tests:
- image: quay.io/username/custom-scorecard-tests:dev
entrypoint:
- custom-scorecard-tests
- customtest1
labels:
suite: custom
test: customtest1
- image: quay.io/username/custom-scorecard-tests:dev
entrypoint:
- custom-scorecard-tests
- customtest2
labels:
suite: custom
test: customtest2
stages:
- tests:
- image: quay.io/username/custom-scorecard-tests:dev
entrypoint:
- custom-scorecard-tests
- customtest1
labels:
suite: custom
test: customtest1
- image: quay.io/username/custom-scorecard-tests:dev
entrypoint:
- custom-scorecard-tests
- customtest2
labels:
suite: custom
test: customtest2
20 changes: 10 additions & 10 deletions internal/scorecard/alpha/formatting.go
@@ -42,17 +42,17 @@ func (r PodTestRunner) getTestStatus(ctx context.Context, p *v1.Pod) (output *v1
// run based on user selection
func (o Scorecard) List() v1alpha3.TestList {
output := v1alpha3.NewTestList()
tests := o.selectTests()

for _, test := range tests {
item := v1alpha3.NewTest()
item.Spec = v1alpha3.TestSpec{
Image: test.Image,
Entrypoint: test.Entrypoint,
Labels: test.Labels,
for _, stage := range o.Config.Stages {
Member commented:
Is it possible to have duplicate tests across stages? I suspect this wouldn't be common if so.

Member (author) replied:
As far as I know, there is nothing that would prevent it. Scorecard would happily run it twice and report the results twice. It would probably also handle duplicate tests in a single stage the same way.

Member replied:
Running duplicates is obviously fine. Test names listed may be duplicates though, which looks a little weird.

tests := o.selectTests(stage)
for _, test := range tests {
item := v1alpha3.NewTest()
item.Spec = v1alpha3.TestSpec{
Image: test.Image,
Entrypoint: test.Entrypoint,
Labels: test.Labels,
}
output.Items = append(output.Items, item)
camilamacedo86 (Contributor) commented on Jul 15, 2020:
I'm usually not happy with a for loop inside another for loop; however, I think it won't have too many items, so 👍

Member (author) replied:
That's the only way this works 😄

This isn't an O(n^2) problem like the one you seem to be referring to. Stages are just groups of tests, so this function is (generally) O(number of tests) whether or not stages are introduced.

}
output.Items = append(output.Items, item)
}

return output
}
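As a footnote to the review thread above: stages are only groupings, so iterating `Config.Stages` and then each stage's `Tests` still visits every test exactly once. A tiny illustrative helper (hypothetical name, using the `Config` and `Stage` types from `config.go`) makes the O(total tests) point explicit:

```go
// countTests illustrates the point from the review thread: walking
// Config.Stages and each stage's Tests touches every test exactly once,
// i.e. the work is proportional to the total number of tests, not stages
// times tests.
func countTests(cfg Config) int {
	total := 0
	for _, stage := range cfg.Stages {
		total += len(stage.Tests)
	}
	return total
}
```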
99 changes: 50 additions & 49 deletions internal/scorecard/alpha/labels_test.go
@@ -55,7 +55,7 @@ func TestEmptySelector(t *testing.T) {
return
}

tests := o.selectTests()
tests := o.selectTests(o.Config.Stages[0])
testsSelected := len(tests)
if testsSelected != c.testsSelected {
t.Errorf("Wanted testsSelected %d, got: %d", c.testsSelected, testsSelected)
@@ -65,51 +65,52 @@ }
}
}

const testConfig = `tests:
- image: quay.io/someuser/customtest1:v0.0.1
entrypoint:
- custom-test
labels:
suite: custom
test: customtest1
- image: quay.io/someuser/customtest2:v0.0.1
entrypoint:
- custom-test
labels:
suite: custom
test: customtest2
- image: quay.io/redhat/basictests:v0.0.1
entrypoint:
- scorecard-test
- basic-check-spec
labels:
suite: basic
test: basic-check-spec-test
- image: quay.io/redhat/basictests:v0.0.1
entrypoint:
- scorecard-test
- basic-check-status
labels:
suite: basic
test: basic-check-status-test
- image: quay.io/redhat/olmtests:v0.0.1
entrypoint:
- scorecard-test
- olm-bundle-validation
labels:
suite: olm
test: olm-bundle-validation-test
- image: quay.io/redhat/olmtests:v0.0.1
entrypoint:
- scorecard-test
- olm-crds-have-validation
labels:
suite: olm
test: olm-crds-have-validation-test
- image: quay.io/redhat/kuttltests:v0.0.1
labels:
suite: kuttl
entrypoint:
- kuttl-test
- olm-status-descriptors
`
const testConfig = `stages:
- tests:
- image: quay.io/someuser/customtest1:v0.0.1
entrypoint:
- custom-test
labels:
suite: custom
test: customtest1
- image: quay.io/someuser/customtest2:v0.0.1
entrypoint:
- custom-test
labels:
suite: custom
test: customtest2
- image: quay.io/redhat/basictests:v0.0.1
entrypoint:
- scorecard-test
- basic-check-spec
labels:
suite: basic
test: basic-check-spec-test
- image: quay.io/redhat/basictests:v0.0.1
entrypoint:
- scorecard-test
- basic-check-status
labels:
suite: basic
test: basic-check-status-test
- image: quay.io/redhat/olmtests:v0.0.1
entrypoint:
- scorecard-test
- olm-bundle-validation
labels:
suite: olm
test: olm-bundle-validation-test
- image: quay.io/redhat/olmtests:v0.0.1
entrypoint:
- scorecard-test
- olm-crds-have-validation
labels:
suite: olm
test: olm-crds-have-validation-test
- image: quay.io/redhat/kuttltests:v0.0.1
labels:
suite: kuttl
entrypoint:
- kuttl-test
- olm-status-descriptors
`
92 changes: 91 additions & 1 deletion internal/scorecard/alpha/run_test.go
@@ -16,6 +16,7 @@ package alpha

import (
"context"
"errors"
"path/filepath"
"testing"
"time"
@@ -25,8 +25,8 @@ import (
"github.com/operator-framework/operator-sdk/pkg/apis/scorecard/v1alpha3"
)

// TODO(joelanford): rewrite to use ginkgo/gomega
func TestRunTests(t *testing.T) {

cases := []struct {
name string
configPathValue string
@@ -100,3 +101,92 @@ }

}
}

// TODO(joelanford): rewrite to use ginkgo/gomega
func TestRunParallelPass(t *testing.T) {
scorecard := getFakeScorecard(true)
ctx, cancel := context.WithTimeout(context.Background(), 7*time.Millisecond)
defer cancel()

tests, err := scorecard.Run(ctx)
if err != nil {
t.Fatalf("Expected no error, got error: %v", err)
}
if len(tests.Items) != 2 {
t.Fatalf("Expected 2 tests, got %d", len(tests.Items))
}
for _, test := range tests.Items {
expectPass(t, test)
}
}

// TODO(joelanford): rewrite to use ginkgo/gomega
func TestRunSequentialPass(t *testing.T) {
scorecard := getFakeScorecard(false)
ctx, cancel := context.WithTimeout(context.Background(), 12*time.Millisecond)
defer cancel()

tests, err := scorecard.Run(ctx)
if err != nil {
t.Fatalf("Expected no error, got error: %v", err)
}
if len(tests.Items) != 2 {
t.Fatalf("Expected 2 tests, got %d", len(tests.Items))
}
for _, test := range tests.Items {
expectPass(t, test)
}
}

// TODO(joelanford): rewrite to use ginkgo/gomega
func TestRunSequentialFail(t *testing.T) {
scorecard := getFakeScorecard(false)

ctx, cancel := context.WithTimeout(context.Background(), 7*time.Millisecond)
defer cancel()

_, err := scorecard.Run(ctx)
if !errors.Is(err, context.DeadlineExceeded) {
t.Fatalf("Expected deadline exceeded error, got: %v", err)
}
}

func getFakeScorecard(parallel bool) Scorecard {
return Scorecard{
Config: Config{
Stages: []Stage{
{
Parallel: parallel,
Tests: []Test{
{},
{},
},
},
},
},
TestRunner: FakeTestRunner{
Sleep: 5 * time.Millisecond,
TestStatus: &v1alpha3.TestStatus{
Results: []v1alpha3.TestResult{
{
State: v1alpha3.PassState,
},
},
},
},
}
}

func expectPass(t *testing.T, test v1alpha3.Test) {
if len(test.Status.Results) != 1 {
t.Fatalf("Expected 1 results, got %d", len(test.Status.Results))
}
for _, r := range test.Status.Results {
if len(r.Errors) > 0 {
t.Fatalf("Expected no errors, got %v", r.Errors)
}
if r.State != v1alpha3.PassState {
t.Fatalf("Expected result state %q, got %q", v1alpha3.PassState, r.State)
}
}
}
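The timings in these tests hint at the scheduling model: two fake tests sleeping 5 ms each only fit inside a 7 ms deadline when run concurrently, and need roughly 10 ms when run back to back. Below is a minimal sketch of that per-stage dispatch, assuming a hypothetical `runTest` helper that respects context cancellation; it is not the PR's actual `Run` implementation, just the standard `sync.WaitGroup` fan-out pattern the behavior suggests:

```go
package alpha

import (
	"context"
	"sync"
)

// runStage sketches per-stage dispatch: sequential by default, fan-out with
// one goroutine per test when stage.Parallel is set. runTest stands in for
// whatever executes a single test (e.g. a TestRunner call) and is assumed to
// honor ctx cancellation.
func runStage(ctx context.Context, stage Stage, runTest func(context.Context, Test) error) error {
	if !stage.Parallel {
		// Sequential: two fake tests at 5 ms each need ~10 ms, which is why
		// the 7 ms deadline in TestRunSequentialFail hits DeadlineExceeded.
		for _, test := range stage.Tests {
			if err := runTest(ctx, test); err != nil {
				return err
			}
		}
		return nil
	}

	// Parallel: all tests in the stage start together, so two 5 ms tests
	// finish comfortably inside the 7 ms deadline in TestRunParallelPass.
	var wg sync.WaitGroup
	errs := make(chan error, len(stage.Tests))
	for _, test := range stage.Tests {
		wg.Add(1)
		go func(t Test) {
			defer wg.Done()
			errs <- runTest(ctx, t)
		}(test)
	}
	wg.Wait()
	close(errs)

	// Report the first failure, if any, after the whole stage has finished.
	for err := range errs {
		if err != nil {
			return err
		}
	}
	return nil
}
```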