
fix(errors): throw a useful error for invalid manifest configuration #4452

Open · wants to merge 1 commit into base: master

Conversation

@mattgogerly (Member)
Throw a useful error message (and short circuit) if a stage context has both manifestArtifactId and manifestArtifactAccount set, rather than a generic 404 from Clouddriver.
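A minimal sketch of the check this PR proposes (illustrative only; the class name, method name, and message are invented for this example and are not Orca's actual code):

```java
// Illustrative sketch of the proposed short circuit: reject a stage context
// that sets both manifestArtifactId and manifestArtifactAccount, instead of
// letting the request reach Clouddriver and fail with a generic 404.
// The class, method, and message text are assumptions made for this sketch.
public class ManifestContextCheck {
  static void validate(String manifestArtifactId, String manifestArtifactAccount) {
    if (manifestArtifactId != null && manifestArtifactAccount != null) {
      throw new IllegalArgumentException(
          "Only one of manifestArtifactId and manifestArtifactAccount may be set in the stage context");
    }
  }
}
```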

@mattgogerly requested a review from dbyron-sf on May 2, 2023 15:41
@dbyron-sf (Contributor)

I suspect this is correct, but I could use a bit more info. Can you link to the place in clouddriver that responds with 404 today? Is there a higher-level test that would demonstrate this more clearly?

@mattgogerly (Member, Author) commented May 2, 2023

> I suspect this is correct, but I could use a bit more info. Can you link to the place in clouddriver that responds with 404 today? Is there a higher-level test that would demonstrate this more clearly?

It's difficult to test without a full integration test setup between Orca and Clouddriver, but

    ArtifactUtils.withAccount(artifact, context.getManifestArtifactAccount())

forces the Artifact definition to use whatever account is specified, even if it's invalid.

The ArtifactDownloadHandler in Clouddriver searches for the combination of account and artifact type and throws a 404 if matching credentials aren't found (https://github.com/spinnaker/clouddriver/blob/546c334848663f0a8632d56170d39177542ad4eb/clouddriver-artifacts/src/main/java/com/netflix/spinnaker/clouddriver/artifacts/ArtifactDownloader.java#L39), which surfaces in Deck as a 404 from Clouddriver without any error message.

The easiest way to replicate this is to create a pipeline with a Bake Manifest stage and a Deploy stage that uses the artifact from the bake, then edit the stage JSON to add "manifestArtifactAccount": "anything".
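For example, the edited Deploy stage JSON might look roughly like this (field names other than `manifestArtifactAccount` follow the usual Deploy (Manifest) stage shape; the artifact id value is a made-up placeholder):

```json
{
  "type": "deployManifest",
  "cloudProvider": "kubernetes",
  "source": "artifact",
  "manifestArtifactId": "example-artifact-id-from-bake",
  "manifestArtifactAccount": "anything"
}
```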

If you create pipelines purely from the UI this isn't an issue because Deck removes it when you select the artifact, but that doesn't help for pipelines created via Gate.

@dbyron-sf (Contributor)

Fair enough. I think we need something additional/different to get the tests to pass. And it's probably not fair to expect more, but this all feels pretty delicate. Like, is it time to remove the code just below:

            // Once the legacy artifacts feature is removed, all trigger expected artifacts will be
            // required to define an account up front.
            .map(
                artifact ->
                    ArtifactUtils.withAccount(artifact, context.getManifestArtifactAccount()))
            .orElseThrow(() -> new IllegalArgumentException("No manifest artifact was specified."));

It doesn't make sense to do this if the artifact came via manifestArtifactId right?

@mattgogerly (Member, Author)

> Fair enough. I think we need something additional/different to get the tests to pass. And it's probably not fair to expect more, but this all feels pretty delicate. Like, is it time to remove the code just below:
>
>             // Once the legacy artifacts feature is removed, all trigger expected artifacts will be
>             // required to define an account up front.
>             .map(
>                 artifact ->
>                     ArtifactUtils.withAccount(artifact, context.getManifestArtifactAccount()))
>             .orElseThrow(() -> new IllegalArgumentException("No manifest artifact was specified."));
>
> It doesn't make sense to do this if the artifact came via manifestArtifactId right?

That was my first thought but I don't have the background on legacy artifacts, and I really don't want to be the one to break artifact binding 😬

If we're all happy, I'm open to changing this PR to just remove that map, which would also prevent the 404.

@dbyron-sf (Contributor)

I'm not sure I do either, at least not without a pretty deep dive. Perhaps https://spinnaker.io/docs/reference/artifacts-legacy/in-kubernetes-v2/ is enlightening? Maybe @jasonmcintosh or @clanesf knows?

Maybe a first step is to get PR checks to pass? Perhaps we can convince ourselves to remove at least that map (and maybe more) with a feature flag that leaves the current behavior by default?
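One way to sketch that feature-flag idea (this mirrors the `.map(...)` chain quoted above, but the class, method signature, and flag are invented for illustration; it is not the actual Orca code):

```java
import java.util.Optional;
import java.util.function.UnaryOperator;

// Illustrative sketch: gate the account override behind a flag that preserves
// today's behavior by default. Flag name and method shape are assumptions.
public class ManifestAccountOverride {
  static String resolve(
      Optional<String> artifact,
      UnaryOperator<String> withAccount,
      boolean overrideArtifactAccount) { // default true => current behavior
    return artifact
        // Only apply the account override when the flag is enabled.
        .map(a -> overrideArtifactAccount ? withAccount.apply(a) : a)
        .orElseThrow(() -> new IllegalArgumentException("No manifest artifact was specified."));
  }
}
```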

@mattgogerly (Member, Author) commented May 4, 2023

The test that's failing actually demonstrates the issue, which is interesting... I think this might just be a case of an invalid test.

It mocks a 200 from Clouddriver, but as explained above that will only ever happen if the artifact account specified in the stage context happens to match the account of the actual artifact.

def stage = new StageExecutionImpl(PipelineExecutionImpl.newPipeline("test"), "runJob", [
  credentials: "abc",
  cloudProvider: "kubernetes",
  source: "artifact",
  manifestArtifactId: "foo",
  manifestArtifactAccount: "bar",
])
