Track ownership of scale subresource #98377
Conversation
Hi @nodo. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign
/wg api-expression
It's very encouraging that we're finally on the right track for this bug! I think there are a lot of improvements we can make to the code so that we don't fall into various traps (and most, if not all, of my comments are related to that), but it's great that we're going in the right direction!
I don't think registry should be importing the client; if you have to, can you at least change the package name (add "_test") so that the import graph isn't weird? LGTM other than that.
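A minimal sketch of the suggestion above, with hypothetical file and package names: keeping the client import in an external "_test" package means it is only compiled into the test binary, so the registry package's own import graph never pulls in client-go.

```go
// Hypothetical external test file (e.g. scale_ownership_test.go) next to the
// registry code, compiled into a separate "_test" package.
package storage_test

import (
	"testing"

	"k8s.io/client-go/kubernetes" // test-only dependency; not imported by package storage
)

func TestScaleOwnership(t *testing.T) {
	// Placeholder showing that client-go is only reachable from the test binary.
	var client kubernetes.Interface
	_ = client
}
```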
/retest
@@ -58,7 +58,7 @@ type DeploymentStorage struct {
 }
 
 // maps a group version to the replicas path in a deployment object
-var replicasPathInDeployment = fieldmanager.ResourcePathMappings{
+var ReplicasPathInDeployment = fieldmanager.ResourcePathMappings{
Making it public is fine, but it should be returned by a function so that people can't modify it. And let's move the test to integration, thanks!
Makes sense, will amend the commit
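A minimal sketch of what the reviewer suggests, using hypothetical names (the PR's actual change may differ): keep the variable unexported and expose it through an accessor so other packages cannot reassign it.

```go
package storage

import "k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager"

// replicasPathInDeployment stays unexported so other packages cannot reassign it.
var replicasPathInDeployment = fieldmanager.ResourcePathMappings{
	// group-version -> replicas field path entries omitted here.
}

// ReplicasPathMappings exposes the mapping for read-only use (e.g. by tests)
// without making the variable itself part of the package's public surface.
func ReplicasPathMappings() fieldmanager.ResourcePathMappings {
	return replicasPathInDeployment
}
```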
Force-pushed from f8a6ee7 to 41312b7.
The last commit is still WIP; I will squash it with a previous one.
Ensure all known versions are in mappings
- Test all versions to make sure each resource version is in the mappings.
- Fail when request info contains an unrecognized version. We have tests that guarantee that all known versions are in the mappings; if we get a version in request info that is not there, we should fail fast to prevent inconsistent behaviour (e.g. the mappings are not up to date).
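A minimal sketch of the fail-fast behaviour described above, with hypothetical names and a stand-in type instead of the real fieldmanager.ResourcePathMappings:

```go
package scalepath

import "fmt"

// resourcePathMappings is a stand-in for the real mappings type: it maps a
// group-version string to the path of the replicas field in that version.
type resourcePathMappings map[string][]string

// replicasPathForVersion fails fast when the request's group-version has no
// entry, rather than silently tracking ownership against the wrong field path.
func replicasPathForVersion(mappings resourcePathMappings, groupVersion string) ([]string, error) {
	path, ok := mappings[groupVersion]
	if !ok {
		return nil, fmt.Errorf("unexpected group-version %q: no replicas path registered", groupVersion)
	}
	return path, nil
}
```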
This happens when a request changes .status.replicas but not .spec.replicas.
This is aligned with the behaviour of server-side apply on main resources.
This is to prevent the ScaleHandler from dropping the entry; this way, entries just get ignored.
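To make the first point above concrete, here is a hypothetical illustration (not the PR's code) of a Scale update that touches only .status.replicas and leaves .spec.replicas alone:

```go
package scaleexample

import autoscalingv1 "k8s.io/api/autoscaling/v1"

// statusOnlyUpdate copies a Scale and changes only the observed (status)
// replica count; the desired (spec) replica count, whose ownership is being
// tracked, is left untouched.
func statusOnlyUpdate(in *autoscalingv1.Scale, observed int32) *autoscalingv1.Scale {
	out := in.DeepCopy()
	out.Status.Replicas = observed
	return out
}
```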
Force-pushed from 41312b7 to 5b666a6.
Squashed the commit and rebased.
conversion change lgtm
@@ -165,6 +165,13 @@ func (c *crConverter) ConvertToVersion(in runtime.Object, target runtime.GroupVe
 // TODO: should this be a typed error?
 return nil, fmt.Errorf("%v is unstructured and is not suitable for converting to %q", fromGVK.String(), target)
 }
+// Special-case typed scale conversion if this custom resource supports a scale endpoint
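The hunk above only shows the comment that introduces the special case. As a rough, hypothetical sketch of the idea (not the PR's actual implementation), converting an unstructured Scale of a custom resource into the typed autoscaling/v1 Scale might look like this:

```go
package crconversion

import (
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
)

// convertScale turns an unstructured Scale (as handled for a custom resource
// with the /scale subresource enabled) into the typed autoscaling/v1 Scale,
// so it can be processed like the scale of a built-in resource.
func convertScale(u *unstructured.Unstructured) (*autoscalingv1.Scale, error) {
	if u.GetKind() != "Scale" {
		return nil, fmt.Errorf("expected a Scale object, got %q", u.GetKind())
	}
	scale := &autoscalingv1.Scale{}
	if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, scale); err != nil {
		return nil, err
	}
	return scale, nil
}
```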
lgtm
Ah, I missed it, thank you :-) |
/retest
/retest
2 similar comments
/retest
/retest
What type of PR is this?
/kind feature
What this PR does / why we need it:
Track ownership of the scale subresource for all scalable resources, i.e. Deployment, ReplicaSet, StatefulSet, ReplicationController, and Custom Resources.
Which issue(s) this PR fixes:
Fixes #82046
Fixes #94585
Special notes for your reviewer:
I opened #95921 as an initial attempt to solve the problem. As this PR follows a different approach, I decided to open a new PR for clarity.
Does this PR introduce a user-facing change?: