10 changes: 3 additions & 7 deletions modules/virt-adding-kernel-arguments-enable-iommu.adoc
@@ -38,13 +38,9 @@ spec:
- intel_iommu=on
# ...
----
where:

<apiversion>:: Applies the new kernel argument only to worker nodes.

<name>:: Indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as `amd_iommu=on`.

<intel_iommu=o>:: Identifies the kernel argument as `intel_iommu` for an Intel CPU.
** `metadata.labels.machineconfiguration.openshift.io/role` specifies that the new kernel argument is applied only to worker nodes.
** `metadata.name` specifies the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as `amd_iommu=on`.
** `spec.kernelArguments` specifies the kernel argument as `intel_iommu` for an Intel CPU.
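Once the `MachineConfig` rolls out, the argument appears on each worker node's kernel command line. A minimal local sketch of that check follows; the sample `cmdline` string is invented for illustration, and on a real cluster you would read `/proc/cmdline` through `oc debug node/<node_name>`:

```shell
# Hypothetical /proc/cmdline contents after the MachineConfig is applied.
cmdline='BOOT_IMAGE=(hd0,gpt3)/vmlinuz root=UUID=1234 ro intel_iommu=on'

# Extract the IOMMU argument; an empty result would mean it is not set.
found=$(echo "$cmdline" | grep -o 'intel_iommu=on')
echo "$found"
```

On an AMD host the same check would look for `amd_iommu=on` instead.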

. Create the new `MachineConfig` object:
+
5 changes: 1 addition & 4 deletions modules/virt-assigning-pci-device-virtual-machine.adoc
@@ -26,10 +26,7 @@ spec:
- deviceName: nvidia.com/TU104GL_Tesla_T4
name: hostdevices1
----
+
where:
+
`deviceName`:: Specifies the name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device.
** `spec.template.spec.domain.devices.hostDevices.deviceName` specifies the name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device.

.Verification

26 changes: 14 additions & 12 deletions modules/virt-attaching-vm-to-primary-udn.adoc
@@ -2,17 +2,19 @@
//
// * virt/vm_networking/virt-connecting-vm-to-primary-udn.adoc

:_mod-docs-content-type: PROCEDURE
[id="virt-attaching-vm-to-primary-udn_{context}"]
:_mod-docs-content-type: PROCEDURE
[id="virt-attaching-vm-to-primary-udn_{context}"]
= Attaching a virtual machine to the primary user-defined network by using the CLI

[role="_abstract"]
You can connect a virtual machine (VM) to the primary user-defined network (UDN) by using the CLI.

.Prerequisites
* You have installed the OpenShift CLI (`oc`).

* You have installed the {oc-first}.

.Procedure

. Edit the `VirtualMachine` manifest to add the UDN interface details, as in the following example:
+
Example `VirtualMachine` manifest:
@@ -23,26 +25,26 @@ apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: example-vm
namespace: my-namespace # <1>
namespace: my-namespace
spec:
template:
spec:
domain:
devices:
interfaces:
- name: udn-l2-net # <2>
binding:
name: l2bridge # <3>
- name: udn-l2-net
binding:
name: l2bridge
# ...
networks:
- name: udn-l2-net # <4>
- name: udn-l2-net
pod: {}
# ...
----
<1> The namespace in which the VM is located. This value must match the namespace in which the UDN is defined.
<2> The name of the user-defined network interface.
<3> The name of the binding plugin that is used to connect the interface to the VM. The possible values are `l2bridge` and `passt`. The default value is `l2bridge`.
<4> The name of the network. This must match the value of the `spec.template.spec.domain.devices.interfaces.name` field.
** `metadata.namespace` specifies the namespace in which the VM is located. This value must match the namespace in which the UDN is defined.
** `spec.template.spec.domain.devices.interfaces.name` specifies the name of the user-defined network interface.
** `spec.template.spec.domain.devices.interfaces.binding.name` specifies the name of the binding plugin that is used to connect the interface to the VM. The possible values are `l2bridge` and `passt`. The default value is `l2bridge`.
** `spec.template.spec.networks.name` specifies the name of the network. This must match the value of the `spec.template.spec.domain.devices.interfaces.name` field.
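The requirement that the network name match the interface name can be sanity-checked locally. A small sketch, assuming a stripped-down manifest fragment written to a temporary file; the `grep`/`sed` extraction is illustrative, not a full YAML parser:

```shell
# Hypothetical fragment of the VM manifest, reduced to the two name fields.
cat > /tmp/example-vm-fragment.yaml <<'EOF'
interfaces:
  - name: udn-l2-net
networks:
  - name: udn-l2-net
EOF

# Pull the first name after each section header and compare them.
iface=$(grep -A1 '^interfaces:' /tmp/example-vm-fragment.yaml | sed -n 's/.*name: //p')
net=$(grep -A1 '^networks:' /tmp/example-vm-fragment.yaml | sed -n 's/.*name: //p')
[ "$iface" = "$net" ] && echo "names match"
```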

. Optional: If you are using the Plug a Simple Socket Transport (passt) network binding plugin, set the `hco.kubevirt.io/deployPasstNetworkBinding` annotation to `true` in the `HyperConverged` custom resource (CR) by running the following command:
+
19 changes: 11 additions & 8 deletions modules/virt-attaching-vm-to-sriov-network.adoc
@@ -27,22 +27,25 @@ spec:
domain:
devices:
interfaces:
- name: nic1 <1>
- name: nic1
sriov: {}
networks:
- name: nic1 <2>
- name: nic1
multus:
networkName: sriov-network <3>
networkName: sriov-network
# ...
----
<1> Specify a unique name for the SR-IOV interface.
<2> Specify the name of the SR-IOV interface. This must be the same as the `interfaces.name` that you defined earlier.
<3> Specify the name of the SR-IOV network attachment definition.
** `spec.template.spec.domain.devices.interfaces.name` specifies a unique name for the SR-IOV interface.
** `spec.template.spec.networks.name` specifies the name of the SR-IOV interface. This must be the same as the `interfaces.name` that you defined earlier.
** `spec.template.spec.networks.multus.networkName` specifies the name of the SR-IOV network attachment definition.

. Apply the virtual machine configuration:
+
[source,terminal]
----
$ oc apply -f <vm_sriov>.yaml <1>
$ oc apply -f <vm_sriov>.yaml
----
<1> The name of the virtual machine YAML file.
+
where:
+
`<vm_sriov>`:: Specifies the name of the virtual machine YAML file.
16 changes: 8 additions & 8 deletions modules/virt-autoupdate-custom-bootsource.adoc
@@ -37,24 +37,24 @@ spec:
- metadata:
name: centos-stream9-image-cron
annotations:
cdi.kubevirt.io/storage.bind.immediate.requested: "true" <1>
cdi.kubevirt.io/storage.bind.immediate.requested: "true"
spec:
schedule: "0 */12 * * *" <2>
schedule: "0 */12 * * *"
template:
spec:
source:
registry: <3>
registry:
url: docker://quay.io/containerdisks/centos-stream:9
storage:
resources:
requests:
storage: 30Gi
garbageCollect: Outdated
managedDataSource: centos-stream9 <4>
managedDataSource: centos-stream9
----
<1> This annotation is required for storage classes with `volumeBindingMode` set to `WaitForFirstConsumer`.
<2> Schedule for the job specified in cron format.
<3> Use to create a data volume from a registry source. Use the default `pod` `pullMethod` and not `node` `pullMethod`, which is based on the `node` docker cache. The `node` docker cache is useful when a registry image is available via `Container.Image`, but the CDI importer is not authorized to access it.
<4> For the custom image to be detected as an available boot source, the name of the image's `managedDataSource` must match the name of the template's `DataSource`, which is found under `spec.dataVolumeTemplates.spec.sourceRef.name` in the VM template YAML file.
** `spec.dataImportCronTemplates.metadata.annotations` specifies a required annotation for storage classes with `volumeBindingMode` set to `WaitForFirstConsumer`.
** `spec.dataImportCronTemplates.spec.schedule` specifies the schedule for the job, specified in cron format.
** `spec.dataImportCronTemplates.spec.template.spec.source.registry` specifies the registry source to use to create a data volume. Use the default `pod` `pullMethod` and not `node` `pullMethod`, which is based on the `node` docker cache. The `node` docker cache is useful when a registry image is available through `Container.Image`, but the CDI importer is not authorized to access it.
** `spec.dataImportCronTemplates.spec.managedDataSource` specifies the name of the managed data source. For the custom image to be detected as an available boot source, the name of the image's `managedDataSource` must match the name of the template's `DataSource`, which is found under `spec.dataVolumeTemplates.spec.sourceRef.name` in the VM template YAML file.
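As a quick illustration of the cron format, the five space-separated fields of `"0 */12 * * *"` are minute, hour, day-of-month, month, and day-of-week, so this job runs at minute 0 of every twelfth hour. A shell sketch of splitting the fields:

```shell
schedule='0 */12 * * *'
set -f            # disable globbing so the "*" fields stay literal
set -- $schedule  # word-split the schedule into the five positional fields
echo "minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5"
```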

. Save the file.
11 changes: 7 additions & 4 deletions modules/virt-binding-devices-vfio-driver.adoc
@@ -12,10 +12,12 @@ To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values
The `MachineConfig` Operator generates the `/etc/modprobe.d/vfio.conf` on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver.

.Prerequisites

* You added kernel arguments to enable IOMMU for the CPU.
* You have installed the {oc-first}.

.Procedure

. Run the `lspci` command to obtain the `vendor-ID` and the `device-ID` for the PCI device.
+
[source,terminal]
@@ -46,7 +48,7 @@ version: {product-version}.0
metadata:
name: 100-worker-vfiopci
labels:
machineconfiguration.openshift.io/role: worker <1>
machineconfiguration.openshift.io/role: worker
storage:
files:
- path: /etc/modprobe.d/vfio.conf
@@ -61,9 +63,9 @@ storage:
contents:
inline: vfio-pci
----
<1> Applies the new kernel argument only to worker nodes.
<2> Specify the previously determined `vendor-ID` value (`10de`) and the `device-ID` value (`1eb8`) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information.
<3> The file that loads the vfio-pci kernel module on the worker nodes.
** `metadata.labels.machineconfiguration.openshift.io/role: worker` specifies that the new kernel argument is applied only to worker nodes.
** `storage.files.contents.inline` for the `/etc/modprobe.d/vfio.conf` path specifies the previously determined `vendor-ID` value (`10de`) and the `device-ID` value (`1eb8`) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information.
** `storage.files.path` with `contents.inline` set to `vfio-pci` specifies the file that loads the `vfio-pci` kernel module on the worker nodes.
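A sketch of pulling the `vendor-ID:device-ID` pair out of an `lspci -nnv` line; the sample line below is an assumption that mirrors the values used in this module (`10de`, `1eb8`), not captured output:

```shell
# Hypothetical lspci -nnv line; the bracketed hex pair is <vendor-ID>:<device-ID>.
line='3b:00.0 3D controller [0302]: NVIDIA Corporation TU104GL [Tesla T4] [10de:1eb8]'

# Match only the 4-hex:4-hex bracket (the [0302] class code has no colon pair).
ids=$(echo "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]')
echo "$ids"
```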

. Use Butane to generate a `MachineConfig` object file, `100-worker-vfiopci.yaml`, containing the configuration to be delivered to the worker nodes:
+
@@ -102,6 +104,7 @@ NAME GENERATEDBYCONTROLLER IGNI
----

.Verification

* Verify that the VFIO driver is loaded.
+
[source,terminal]
18 changes: 8 additions & 10 deletions modules/virt-configuring-storage-class-bootsource-update.adoc
@@ -44,21 +44,19 @@ spec:
template:
spec:
storage:
storageClassName: <storage_class> <1>
schedule: "0 */12 * * *" <2>
managedDataSource: <data_source> <3>
storageClassName: <storage_class>
schedule: "0 */12 * * *"
managedDataSource: <data_source>
# ...
----
<1> Define the storage class.
<2> Required: Schedule for the job specified in cron format.
<3> Required: The data source to use.
** `spec.dataImportCronTemplates.spec.template.spec.storage.storageClassName` specifies the storage class.
** `spec.dataImportCronTemplates.spec.schedule` is a required field that specifies the schedule for the job in cron format.
** `spec.dataImportCronTemplates.spec.managedDataSource` is a required field that specifies the data source to use.
+
--
[NOTE]
----
====
For the custom image to be detected as an available boot source, the value of the `spec.dataVolumeTemplates.spec.sourceRef.name` parameter in the VM template must match this value.
----
--
====

. Wait for the HyperConverged Operator (HCO) and Scheduling, Scale, and Performance (SSP) resources to complete reconciliation.

26 changes: 14 additions & 12 deletions modules/virt-creating-a-primary-cluster-udn.adoc
@@ -10,10 +10,12 @@
You can connect multiple namespaces to the same primary user-defined network (UDN) to achieve native tenant isolation by using the CLI.

.Prerequisites

* You have access to the cluster as a user with `cluster-admin` privileges.
* You have installed the {oc-first}.

.Procedure

. Create a `ClusterUserDefinedNetwork` object to specify the custom network configuration.
+
Example `ClusterUserDefinedNetwork` manifest:
@@ -23,28 +25,28 @@ Example `ClusterUserDefinedNetwork` manifest:
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
name: cudn-l2-net # <1>
name: cudn-l2-net
spec:
namespaceSelector: # <2>
matchExpressions: # <3>
namespaceSelector:
matchExpressions:
- key: kubernetes.io/metadata.name
operator: In # <4>
operator: In
values: ["red-namespace", "blue-namespace"]
network:
topology: Layer2 # <5>
topology: Layer2
layer2:
role: Primary # <6>
role: Primary
ipam:
lifecycle: Persistent
subnets:
- 203.203.0.0/16
----
<1> Specifies the name of the `ClusterUserDefinedNetwork` custom resource.
<2> Specifies the set of namespaces that the cluster UDN applies to. The namespace selector must not point to `default`, an `openshift-*` namespace, or any global namespaces that are defined by the Cluster Network Operator (CNO).
<3> Specifies the type of selector. In this example, the `matchExpressions` selector selects objects that have the label `kubernetes.io/metadata.name` with the value `red-namespace` or `blue-namespace`.
<4> Specifies the type of operator. Possible values are `In`, `NotIn`, and `Exists`.
<5> Specifies the topological configuration of the network. The required value is `Layer2`. A `Layer2` topology creates a logical switch that is shared by all nodes.
<6> Specifies whether the UDN is primary or secondary. The `Primary` role means that the UDN acts as the primary network for the VM and all default traffic passes through this network.
** `metadata.name` specifies the name of the `ClusterUserDefinedNetwork` custom resource.
** `spec.namespaceSelector` specifies the set of namespaces that the cluster UDN applies to. The namespace selector must not point to `default`, an `openshift-*` namespace, or any global namespaces that are defined by the Cluster Network Operator (CNO).
** `spec.namespaceSelector.matchExpressions` specifies the type of selector. In this example, the `matchExpressions` selector selects objects that have the label `kubernetes.io/metadata.name` with the value `red-namespace` or `blue-namespace`.
** `spec.namespaceSelector.matchExpressions.operator` specifies the type of operator. Possible values are `In`, `NotIn`, and `Exists`.
Collaborator review comment: 🤖 [error] Vale.Terms: Use 'Operators?' instead of 'operator'.

** `spec.network.topology` specifies the topological configuration of the network. The required value is `Layer2`. A `Layer2` topology creates a logical switch that is shared by all nodes.
** `spec.network.layer2.role` specifies whether the UDN is primary or secondary. The `Primary` role means that the UDN acts as the primary network for the VM and all default traffic passes through this network.
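The `In` behavior of `matchExpressions` can be pictured as simple set membership over namespace names. A local sketch using the example namespaces; the two extra names are invented stand-ins for namespaces the selector should exclude:

```shell
# Candidate namespaces; only those in the "values" set should be selected.
all='red-namespace blue-namespace default openshift-console'
selected=''
for ns in $all; do
  case "$ns" in
    red-namespace|blue-namespace) selected="${selected:+$selected }$ns" ;;
  esac
done
echo "$selected"
```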

. Apply the `ClusterUserDefinedNetwork` manifest by running the following command:
+
32 changes: 17 additions & 15 deletions modules/virt-creating-a-primary-udn.adoc
@@ -2,18 +2,20 @@
//
// * virt/vm_networking/virt-connecting-vm-to-primary-udn.adoc

:_mod-docs-content-type: PROCEDURE
[id="virt-creating-a-primary-udn_{context}"]
:_mod-docs-content-type: PROCEDURE
[id="virt-creating-a-primary-udn_{context}"]
= Creating a primary namespace-scoped user-defined network by using the CLI

[role="_abstract"]
You can create an isolated primary network in your project namespace by using the CLI. You must use the OVN-Kubernetes layer 2 topology and enable persistent IP address allocation in the user-defined network (UDN) configuration to ensure VM live migration support.

.Prerequisites

* You have installed the {oc-first}.
* You have created a namespace and applied the `k8s.ovn.org/primary-user-defined-network` label.

.Procedure

. Create a `UserDefinedNetwork` object to specify the custom network configuration.
+
Example `UserDefinedNetwork` manifest:
@@ -23,23 +25,23 @@ Example `UserDefinedNetwork` manifest:
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
name: udn-l2-net # <1>
namespace: my-namespace # <2>
name: udn-l2-net
namespace: my-namespace
spec:
topology: Layer2 # <3>
layer2:
role: Primary # <4>
topology: Layer2
layer2:
role: Primary
subnets:
- "10.0.0.0/24"
- "2001:db8::/60"
ipam:
lifecycle: Persistent # <5>
- "2001:db8::/60"
ipam:
lifecycle: Persistent
----
<1> Specifies the name of the `UserDefinedNetwork` custom resource.
<2> Specifies the namespace in which the VM is located. The namespace must have the `k8s.ovn.org/primary-user-defined-network` label. The namespace must not be `default`, an `openshift-*` namespace, or match any global namespaces that are defined by the Cluster Network Operator (CNO).
<3> Specifies the topological configuration of the network. The required value is `Layer2`. A `Layer2` topology creates a logical switch that is shared by all nodes.
<4> Specifies whether the UDN is primary or secondary. The `Primary` role means that the UDN acts as the primary network for the VM and all default traffic passes through this network.
<5> Specifies that virtual workloads have consistent IP addresses across reboots and migration. The `spec.layer2.subnets` field is required when `ipam.lifecycle: Persistent` is specified.
** `metadata.name` specifies the name of the `UserDefinedNetwork` custom resource.
** `metadata.namespace` specifies the namespace in which the VM is located. The namespace must have the `k8s.ovn.org/primary-user-defined-network` label. The namespace must not be `default`, an `openshift-*` namespace, or match any global namespaces that are defined by the Cluster Network Operator (CNO).
** `spec.topology` specifies the topological configuration of the network. The required value is `Layer2`. A `Layer2` topology creates a logical switch that is shared by all nodes.
** `spec.layer2.role` specifies whether the UDN is primary or secondary. The `Primary` role means that the UDN acts as the primary network for the VM and all default traffic passes through this network.
** `spec.layer2.ipam.lifecycle` specifies that virtual workloads have consistent IP addresses across reboots and migration. The `spec.layer2.subnets` field is required when `ipam.lifecycle: Persistent` is specified.

. Apply the `UserDefinedNetwork` manifest by running the following command:
+
5 changes: 3 additions & 2 deletions modules/virt-creating-udn-namespace-cli.adoc
@@ -25,10 +25,11 @@ kind: Namespace
metadata:
name: my-namespace
labels:
k8s.ovn.org/primary-user-defined-network: "" # <1>
k8s.ovn.org/primary-user-defined-network: ""
# ...
----
<1> This label is required for the namespace to be associated with a UDN. If the namespace is to be used with an existing cluster UDN, you must also add the appropriate labels that are defined in the `spec.namespaceSelector` field of the `ClusterUserDefinedNetwork` custom resource.
+
The `k8s.ovn.org/primary-user-defined-network` label is required for the namespace to be associated with a UDN. If the namespace is to be used with an existing cluster UDN, you must also add the appropriate labels that are defined in the `spec.namespaceSelector` field of the `ClusterUserDefinedNetwork` custom resource.

. Apply the `Namespace` manifest by running the following command:
+
@@ -4,7 +4,7 @@
//

:_mod-docs-content-type: PROCEDURE
[id="virt-preventing-nvidia-operands-from-deploying-on-nodes_{context}"]
[id="virt-preventing-nvidia-gpu-operands-from-deploying-on-nodes_{context}"]
= Preventing NVIDIA GPU operands from deploying on nodes

[role="_abstract"]
@@ -17,8 +17,8 @@ If you use the link:https://docs.nvidia.com/datacenter/cloud-native/gpu-operator
.Procedure
// Cannot label nodes in ROSA/OSD, but can edit machine pools
* Label the node by running the following command:
+

ifndef::openshift-rosa,openshift-dedicated[]
+
[source,terminal]
----
$ oc label node <node_name> nvidia.com/gpu.deploy.operands=false
@@ -28,8 +29,9 @@ where:
+
`<node_name>`:: Specifies the name of a node where you do not want to install the NVIDIA GPU operands.
endif::openshift-rosa,openshift-dedicated[]
+

ifdef::openshift-rosa,openshift-dedicated[]
+
[source,terminal]
----
$ rosa edit machinepool --cluster=<cluster_name> <machinepool_ID> nvidia.com/gpu.deploy.operands=false
5 changes: 2 additions & 3 deletions modules/virt-verify-status-bootsource-update.adoc
@@ -81,9 +81,8 @@ status:
status: {}
# ...
----
+
`status.dataImportCronTemplates.status.commonTemplate`:: Indicates a system-defined boot source.
`status.dataImportCronTemplates.status`:: Indicates a custom boot source.
** `status.dataImportCronTemplates.status.commonTemplate` indicates a system-defined boot source.
** `status.dataImportCronTemplates.status` indicates a custom boot source.
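The classification made in the verification step can be sketched as a simple check on the status stanza; the `status` string here is a one-line stand-in for a single `status.dataImportCronTemplates` entry:

```shell
# A system-defined boot source carries commonTemplate: true in its status.
status='commonTemplate: true'
if echo "$status" | grep -q 'commonTemplate: true'; then
  kind=system-defined
else
  kind=custom
fi
echo "$kind"
```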

. Verify the status of the boot source by reviewing the `status.dataImportCronTemplates.status` field.
* If the field contains `commonTemplate: true`, it is a system-defined boot source.