
Commit ebdd99c

Merge pull request #2318 from yliaog/automated-release-of-32.0.0a1-upstream-release-32.0-1736800223
Automated release of 32.0.0a1 upstream release 32.0 1736800223
2 parents bf1d581 + ac25d7c commit ebdd99c

18 files changed: +23 / -21 lines changed

.github/workflows/e2e-master.yaml

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: [3.7, 3.8, 3.9]
+        python-version: ["3.8", "3.9", "3.10"]
     steps:
       - uses: actions/checkout@v4
         with:

.github/workflows/test.yaml

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.7", "3.8", "3.10", "3.11"]
+        python-version: ["3.8", "3.10", "3.11", "3.12"]
         include:
           - python-version: "3.9"
             use_coverage: 'coverage'

CHANGELOG.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# v32.0.0+snapshot
+# v32.0.0a1

 Kubernetes API Version: v1.32.0
README.md

Lines changed: 2 additions & 0 deletions
@@ -101,6 +101,7 @@ supported versions of Kubernetes clusters.
 - [client 29.y.z](https://pypi.org/project/kubernetes/29.0.0/): Kubernetes 1.28 or below (+-), Kubernetes 1.29 (✓), Kubernetes 1.30 or above (+-)
 - [client 30.y.z](https://pypi.org/project/kubernetes/30.1.0/): Kubernetes 1.29 or below (+-), Kubernetes 1.30 (✓), Kubernetes 1.31 or above (+-)
 - [client 31.y.z](https://pypi.org/project/kubernetes/31.0.0/): Kubernetes 1.30 or below (+-), Kubernetes 1.31 (✓), Kubernetes 1.32 or above (+-)
+- [client 32.y.z](https://pypi.org/project/kubernetes/32.0.0a1/): Kubernetes 1.31 or below (+-), Kubernetes 1.32 (✓), Kubernetes 1.33 or above (+-)

 > See [here](#homogenizing-the-kubernetes-python-client-versions) for an explanation of why there is no v13-v16 release.

@@ -168,6 +169,7 @@ between client-python versions.
 | 30.0 | Kubernetes main repo, 1.30 branch ||
 | 31.0 Alpha/Beta | Kubernetes main repo, 1.31 branch ||
 | 31.0 | Kubernetes main repo, 1.31 branch ||
+| 32.0 Alpha/Beta | Kubernetes main repo, 1.32 branch ||

 > See [here](#homogenizing-the-kubernetes-python-client-versions) for an explanation of why there is no v13-v16 release.
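The new compatibility row points at a pre-release, so pip only picks it up when the version is pinned explicitly or --pre is passed. A minimal sketch (assuming a reachable cluster and a local kubeconfig) for checking both sides of the matrix at runtime:

from kubernetes import client, config

# Hedged sketch: compare the installed client against the connected server.
config.load_kube_config()                     # assumes a usable ~/.kube/config
server_info = client.VersionApi().get_code()  # VersionApi ships with the generated client
print("client:", client.__version__)          # e.g. 32.0.0a1 for this release
print("server:", server_info.git_version)     # e.g. v1.32.0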
Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-5f773c685cb5e7b97c1b3be4a7cff387a8077a4789c738dac715ba91b1c50eda
+b20f45563c8a98d4f6f0ccc8fea807c5a646510c40aee9183d87b0be22104194

kubernetes/README.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ No description provided (generated by Openapi Generator https://github.com/opena
 This Python package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:

 - API version: release-1.32
-- Package version: 32.0.0+snapshot
+- Package version: 32.0.0a1
 - Build package: org.openapitools.codegen.languages.PythonClientCodegen

 ## Requirements.

kubernetes/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@

 __project__ = 'kubernetes'
 # The version is auto-updated. Please do not edit.
-__version__ = "32.0.0+snapshot"
+__version__ = "32.0.0a1"

 from . import client
 from . import config
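Downstream scripts that branch on the client release can read the attribute bumped here directly; a minimal sketch:

import kubernetes

# Hedged sketch: the auto-updated version string is importable at runtime.
print(kubernetes.__version__)  # "32.0.0a1" once this release is installed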

kubernetes/client/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@

 from __future__ import absolute_import

-__version__ = "32.0.0+snapshot"
+__version__ = "32.0.0a1"

 # import apis into sdk package
 from kubernetes.client.api.well_known_api import WellKnownApi

kubernetes/client/api_client.py

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ def __init__(self, configuration=None, header_name=None, header_value=None,
             self.default_headers[header_name] = header_value
         self.cookie = cookie
         # Set default User-Agent.
-        self.user_agent = 'OpenAPI-Generator/32.0.0+snapshot/python'
+        self.user_agent = 'OpenAPI-Generator/32.0.0a1/python'
         self.client_side_validation = configuration.client_side_validation

     def __enter__(self):
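The value assigned above is only the default; on the generated ApiClient, user_agent is a property backed by default_headers, so callers can inspect or override it. A hedged sketch (the custom agent string is illustrative):

from kubernetes import client

api_client = client.ApiClient()
print(api_client.user_agent)                 # e.g. 'OpenAPI-Generator/32.0.0a1/python'
api_client.user_agent = "my-controller/0.1"  # hypothetical custom User-Agent value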

kubernetes/client/configuration.py

Lines changed: 1 addition & 1 deletion
@@ -354,7 +354,7 @@ def to_debug_report(self):
                "OS: {env}\n"\
                "Python Version: {pyversion}\n"\
                "Version of the API: release-1.32\n"\
-               "SDK Package Version: 32.0.0+snapshot".\
+               "SDK Package Version: 32.0.0a1".\
                format(env=sys.platform, pyversion=sys.version)

     def get_host_settings(self):
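to_debug_report() folds the SDK package version bumped here into a human-readable string, which is handy when filing issues; a minimal sketch:

from kubernetes import client

# Hedged sketch: the report embeds OS, Python, API, and SDK package versions.
print(client.Configuration().to_debug_report())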

kubernetes/client/models/v1alpha3_resource_claim_status.py

Lines changed: 2 additions & 2 deletions
@@ -110,7 +110,7 @@ def devices(self, devices):
     def reserved_for(self):
         """Gets the reserved_for of this V1alpha3ResourceClaimStatus. # noqa: E501

-        ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. # noqa: E501
+        ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. # noqa: E501

         :return: The reserved_for of this V1alpha3ResourceClaimStatus. # noqa: E501
         :rtype: list[V1alpha3ResourceClaimConsumerReference]
@@ -121,7 +121,7 @@ def reserved_for(self):
     def reserved_for(self, reserved_for):
         """Sets the reserved_for of this V1alpha3ResourceClaimStatus.

-        ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. # noqa: E501
+        ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. # noqa: E501

         :param reserved_for: The reserved_for of this V1alpha3ResourceClaimStatus. # noqa: E501
         :type: list[V1alpha3ResourceClaimConsumerReference]

kubernetes/client/models/v1beta1_resource_claim_status.py

Lines changed: 2 additions & 2 deletions
@@ -110,7 +110,7 @@ def devices(self, devices):
     def reserved_for(self):
         """Gets the reserved_for of this V1beta1ResourceClaimStatus. # noqa: E501

-        ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. # noqa: E501
+        ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. # noqa: E501

         :return: The reserved_for of this V1beta1ResourceClaimStatus. # noqa: E501
         :rtype: list[V1beta1ResourceClaimConsumerReference]
@@ -121,7 +121,7 @@ def reserved_for(self):
     def reserved_for(self, reserved_for):
         """Sets the reserved_for of this V1beta1ResourceClaimStatus.

-        ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. # noqa: E501
+        ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. # noqa: E501

         :param reserved_for: The reserved_for of this V1beta1ResourceClaimStatus. # noqa: E501
         :type: list[V1beta1ResourceClaimConsumerReference]
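On the generated models, reserved_for is an ordinary Python list; the 256-entry cap described in the docstring is enforced by the API server, not by the client. A hedged sketch with illustrative consumer values (field names assumed from the generated model):

from kubernetes import client

status = client.V1beta1ResourceClaimStatus(
    reserved_for=[
        client.V1beta1ResourceClaimConsumerReference(
            resource="pods", name="example-pod", uid="1234-abcd"),  # illustrative values
    ],
)
print(len(status.reserved_for))  # the API server accepts at most 256 entries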

kubernetes/docs/V1alpha3ResourceClaimStatus.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ Name | Type | Description | Notes
 ------------ | ------------- | ------------- | -------------
 **allocation** | [**V1alpha3AllocationResult**](V1alpha3AllocationResult.md) | | [optional]
 **devices** | [**list[V1alpha3AllocatedDeviceStatus]**](V1alpha3AllocatedDeviceStatus.md) | Devices contains the status of each device allocated for this claim, as reported by the driver. This can include driver-specific information. Entries are owned by their respective drivers. | [optional]
-**reserved_for** | [**list[V1alpha3ResourceClaimConsumerReference]**](V1alpha3ResourceClaimConsumerReference.md) | ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. | [optional]
+**reserved_for** | [**list[V1alpha3ResourceClaimConsumerReference]**](V1alpha3ResourceClaimConsumerReference.md) | ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. | [optional]

 [[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

kubernetes/docs/V1beta1ResourceClaimStatus.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ Name | Type | Description | Notes
 ------------ | ------------- | ------------- | -------------
 **allocation** | [**V1beta1AllocationResult**](V1beta1AllocationResult.md) | | [optional]
 **devices** | [**list[V1beta1AllocatedDeviceStatus]**](V1beta1AllocatedDeviceStatus.md) | Devices contains the status of each device allocated for this claim, as reported by the driver. This can include driver-specific information. Entries are owned by their respective drivers. | [optional]
-**reserved_for** | [**list[V1beta1ResourceClaimConsumerReference]**](V1beta1ResourceClaimConsumerReference.md) | ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 32 such reservations. This may get increased in the future, but not reduced. | [optional]
+**reserved_for** | [**list[V1beta1ResourceClaimConsumerReference]**](V1beta1ResourceClaimConsumerReference.md) | ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. A claim that is in use or might be in use because it has been reserved must not get deallocated. In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled. Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again. There can be at most 256 such reservations. This may get increased in the future, but not reduced. | [optional]

 [[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
