From: Michael Tremer <michael.tremer@ipfire.org>
To: development@lists.ipfire.org
Subject: Re: [PATCH 1/2] libvirt: Update to version 10.10.0
Date: Mon, 23 Dec 2024 12:29:09 +0100
Message-ID: <3F8E18BC-CEFE-48AE-BC3F-4530BDEFF69C@ipfire.org>
In-Reply-To: <20241220114002.78636-1-adolf.belka@ipfire.org>
Reviewed-by: Michael Tremer <michael.tremer(a)ipfire.org>
Thank you very much for looking at this so quickly :)
-Michael
> On 20 Dec 2024, at 12:40, Adolf Belka <adolf.belka(a)ipfire.org> wrote:
>
> - Update from version 10.7.0 to 10.10.0
> - Update of rootfile
> - version 10.7.0 contained a change that replaced the script-friendly output
> of ``virsh list --uuid``. This change was reverted in version 10.8.0
> - In version 10.8.0 libyajl was replaced by json-c for JSON parsing and formatting.
> Therefore this patch set also removes libyajl from IPFire as it is no longer
> required.
> - Changelog
> 10.10.0
> New features
> * qemu: add multi boot device support on s390x
> For classical mainframe guests (i.e. LPAR or z/VM installations), you
> always have to explicitly specify the disk where you want to boot from (or
> "IPL" from, in s390x-speak -- IPL means "Initial Program Load").
> In the past QEMU only used the first device in the boot order to IPL from.
> With the new multi boot device support on s390x that is available with QEMU
> version 9.2 and newer, this limitation is lifted. If the IPL fails for the
> first device with the lowest boot index, the device with the second lowest
> boot index will be tried and so on until IPL is successful or there are no
> remaining boot devices to try.
> Limitation: The s390x BIOS will try to IPL up to 8 total devices, any
> number of which may be disks or network devices.
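> For illustration, a minimal sketch of the relevant device definitions,
> assuming virtio devices (device names and paths are placeholders)::
>
>   <disk type='block' device='disk'>
>     <source dev='/dev/dasda'/>
>     <target dev='vda' bus='virtio'/>
>     <boot order='1'/>   <!-- first IPL candidate -->
>   </disk>
>   <interface type='network'>
>     <source network='default'/>
>     <boot order='2'/>   <!-- tried next if IPL from the disk fails -->
>   </interface>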
> * qemu: Add support for versioned CPU models
> Versioned QEMU CPU models with a ``-vN`` suffix can now be used in libvirt just
> like any other CPU model.
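> For example, a versioned model can now be requested directly in the domain
> XML (the model name here is just an illustration)::
>
>   <cpu mode='custom' match='exact'>
>     <model fallback='forbid'>Skylake-Server-v4</model>
>   </cpu>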
> * qemu: Support for the 'data-file' QCOW2 image feature
> The QEMU hypervisor driver now supports QCOW2 images with 'data-file'
> feature present (both when probing from the image itself and when specified
> explicitly via ``<dataStore>`` element). This can be useful when it's
> required to keep data "raw" on disk, but the use case requires features
> of the QCOW2 format such as incremental backups.
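> A rough sketch of a disk using a raw data file; the exact shape of the
> ``<dataStore>`` element should be verified against the libvirt
> documentation, and all paths here are assumptions::
>
>   <disk type='file' device='disk'>
>     <driver name='qemu' type='qcow2'/>
>     <source file='/var/lib/libvirt/images/guest.qcow2'>
>       <dataStore type='file'>
>         <format type='raw'/>
>         <source file='/var/lib/libvirt/images/guest-data.raw'/>
>       </dataStore>
>     </source>
>     <target dev='vda' bus='virtio'/>
>   </disk>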
> * swtpm: Add support for profiles
> The upcoming swtpm release will have TPM profile support that allows
> restricting the set of crypto algorithms and commands a TPM provides. Users
> can now select a profile by using ``<profile/>`` in their TPM XML definition.
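> A hypothetical sketch of such a definition; the attribute name and the
> profile name (swtpm's built-in ``default-v1``) are assumptions to be
> checked against the documentation::
>
>   <tpm model='tpm-crb'>
>     <backend type='emulator' version='2.0'>
>       <profile source='default-v1'/>
>     </backend>
>   </tpm>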
> Improvements
> * qemu: Support UEFI NVRAM images on block storage
> Libvirt now allows users to use block storage as a backend for UEFI NVRAM
> images and allows them to be in a format different from the template. When
> qcow2 is used as the format, the images are now also auto-populated from the
> template.
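> A sketch of what this could look like, with placeholder paths and the
> attribute usage as we read the release notes (treat details as assumptions)::
>
>   <os>
>     <loader readonly='yes' type='pflash' format='raw'>/usr/share/OVMF/OVMF_CODE.fd</loader>
>     <nvram type='block' format='qcow2'>
>       <source dev='/dev/vg0/guest-nvram'/>
>     </nvram>
>   </os>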
> * qemu: Automatically add IOMMU when needed
> When a domain of 'qemu' or 'kvm' type has more than 255 vCPUs, an IOMMU with
> EIM mode is required. Starting with this release, libvirt automatically adds
> one (or turns on EIM mode if an IOMMU is present without it).
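> The automatically added device corresponds roughly to the following manual
> configuration (a sketch; on x86 EIM also requires the QEMU split I/O APIC)::
>
>   <features>
>     <ioapic driver='qemu'/>
>   </features>
>   <devices>
>     <iommu model='intel'>
>       <driver intr_remap='on' eim='on'/>
>     </iommu>
>   </devices>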
> * ch: allow hostdevs in domain definition
> The Cloud Hypervisor driver (ch) now supports ``<hostdev/>`` devices.
> * ch: Enable callbacks for ch domain events
> The Cloud Hypervisor driver (ch) now supports emitting events on domain
> define, undefine, start, boot, stop and destroy.
> Bug fixes
> * qemu: Fix reversion and inactive deletion of internal snapshots with UEFI
> NVRAM. In `v10.9.0 (2024-11-01)`_ creation of internal snapshots of VMs
> with UEFI firmware was allowed, but certain operations such as reversion
> or inactive deletion didn't work properly as they didn't consider the
> NVRAM qcow2 file.
> * virnetdevopenvswitch: Warn on unsupported QoS settings
> For Open vSwitch vNICs libvirt does not set QoS directly using 'tc' but
> offloads the setting to OVS. However, OVS is not as full-featured as libvirt
> in this regard, and setting a 'peak' different from 'average' results in the
> vNIC always sticking to 'peak'. Libvirt now produces a warning if that's the
> case.
> 10.9.0
> New features
> * qemu: zero block detection for non-shared-storage migration
> Users can now request that all-zero blocks are not transferred when migrating
> non-shared disk data without actually enabling zero detection on the disk
> itself. This allows sparsifying images during migration where the source
> has no access to the allocation state of blocks at the cost of CPU overhead.
> This feature is available via the ``--migrate-disks-detect-zeroes`` option
> for ``virsh migrate`` or ``VIR_MIGRATE_PARAM_MIGRATE_DISKS_DETECT_ZEROES``
> migration parameter. See the documentation for caveats.
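> A hypothetical invocation (host name and disk list are placeholders)::
>
>   virsh migrate --live --copy-storage-all \
>     --migrate-disks vda --migrate-disks-detect-zeroes vda \
>     guest1 qemu+ssh://dst.example.org/system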
> Improvements
> * qemu: internal snapshot improvements
> The qemu internal snapshot handling code was updated to use modern commands
> which avoid the problems of the old ones that had prevented the use of
> internal snapshots on VMs with UEFI NVRAM. Internal snapshots of VMs using UEFI are
> now possible provided that the NVRAM is in ``qcow2`` format.
> The new code also allows better control when deleting snapshots. To prevent
> possible regressions, no strict checking is done, but if an inconsistent
> state is encountered, a log message is emitted::
> warning : qemuSnapshotActiveInternalDeleteGetDevices:3841 : inconsistent
> internal snapshot state (deletion): VM='snap' snapshot='1727959843'
> missing='vda ' unexpected='' extra=''
> Users are encouraged to report any occurrence of the above message, along
> with the steps they took, to the upstream tracker.
> * qemu: improve documentation of image format settings
> The documentation of the various ``*_image_format`` settings in ``qemu.conf``
> implied they could only be used to control compression of the image. The
> documentation has been improved to clarify that the settings describe the
> representation of guest memory blocks on disk, which includes compression
> among other possible layouts.
> * Report CPU model blockers in domain capabilities
> When a CPU model is reported as ``usable='no'``, an additional
> ``<blockers model='...'>`` element is added for that CPU model, listing the
> features required by the model but not supported on the host.
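> The shape of the new output in ``virsh domcapabilities`` is roughly as
> follows (model and feature names are illustrative)::
>
>   <model usable='no' vendor='Intel'>Cascadelake-Server</model>
>   <blockers model='Cascadelake-Server'>
>     <feature name='avx512vnni'/>
>   </blockers>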
> 10.8.0
> Improvements
> * network: make networks with ``<forward mode='open'/>`` more useful
> It is now permissible to have a ``<forward mode='open'>`` network that
> has no IP address assigned to the host's port of the bridge. This
> is the only way to create a libvirt network where guests are
> unreachable from the host (and vice versa) and where no firewall
> rules are added on the host.
> It is now also possible for a ``<forward mode='open'/>`` network to
> use the ``zone`` attribute of ``<bridge>`` to set the firewalld zone of
> the bridge interface (normally it would not be set, as is done
> with other forward modes).
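> A sketch combining both improvements: an isolated network without a host
> IP address, placed into a firewalld zone (names are illustrative)::
>
>   <network>
>     <name>open-isolated</name>
>     <forward mode='open'/>
>     <bridge name='virbr-open' zone='trusted'/>
>   </network>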
> * storage: Lessen dependency on the ``showmount`` program
> Libvirt now automatically detects the presence of ``showmount`` at runtime,
> as is done with other helper programs, and the
> ``daemon-driver-storage-core`` RPM package no longer strongly depends on it,
> for users who wish for a more minimal deployment.
> * Switch from YAJL to json-c for JSON parsing and formatting
> The parser and formatter in the libvirt library, as well
> as the parsers in the nss plugin were rewritten to use json-c
> instead of YAJL, which is effectively dead upstream.
> * Relax restrictions for memorytune settings
> It should now be possible to use resctrl on AMD CPUs as well as Intel CPUs
> when the resctrl filesystem is mounted with the ``mba_MBps`` option.
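> For reference, that mode corresponds to the standard kernel mount::
>
>   mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl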
> Bug fixes
> * virsh: Fix script-friendly output of ``virsh list --uuid``
> The script-friendly output of just one UUID per line was mistakenly replaced
> by the full human-targeted table view, which is full of redundant information
> and very hard to parse. The old behaviour was reverted; users who wish to see
> the UUIDs in the tabular output need to use ``virsh list --table --uuid``.
> Note that this also broke the ``libvirt-guests`` script. The bug was
> introduced in `v10.7.0 (2024-09-02)`_.
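> In other words::
>
>   virsh list --uuid           # script-friendly again: one UUID per line
>   virsh list --table --uuid   # table view that includes the UUIDs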
> * network/qemu: fix some cases where ``device-update`` of a network
> interface was failing:
> * If the interface was connected to a libvirt network that was
> providing a pool of VFs to be used with macvtap passthrough
> mode, then *any* update to the interface would fail, even
> changing the link state. Updating (the updateable parts of) a
> macvtap passthrough interface will now succeed.
> * It previously was not possible to move an interface from a Linux
> host bridge to an OVS bridge. This (and the opposite direction)
> now works.
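> Both fixes apply to the usual update path, e.g. (the file name is a
> placeholder)::
>
>   virsh update-device guest1 updated-interface.xml --live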
> * qemu: backup: Fix possible crashes when running monitoring commands during
> a backup job. The qemu monitor code was fixed to not crash in specific cases
> when monitoring APIs are called during a backup job.
> * Fix various memleaks and overflows
> Multiple memory leaks and overflows in corner cases were fixed based on
> upstream issues reported.
> * network: Better cleanup after disappeared networks
> If a network disappeared while virtnetworkd was not running, not all cleanup
> was done properly once the daemon was started, especially when only the
> network interface disappeared. In some cases this could result in the
> network being shown as inactive while at the same time failing to start.
> * qemu: Remember memory backing directory for domains
> If ``memory_backing_dir`` is changed during the lifetime of a domain with
> file-backed memory, files in the old directory were not cleaned up once
> the domain was shut down. Now the directory that was used during startup is
> remembered for each running domain.
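> For reference, the setting lives in ``qemu.conf``; the path below is just
> an example::
>
>   memory_backing_dir = "/var/lib/libvirt/qemu/ram"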
>
> Signed-off-by: Adolf Belka <adolf.belka(a)ipfire.org>
> ---
> config/rootfiles/packages/libvirt | 74 +++++++++++++++++++++++++++++--
> lfs/libvirt | 8 ++--
> 2 files changed, 74 insertions(+), 8 deletions(-)
>
> diff --git a/config/rootfiles/packages/libvirt b/config/rootfiles/packages/libvirt
> index 32fdd5cce..55bd39a4e 100644
> --- a/config/rootfiles/packages/libvirt
> +++ b/config/rootfiles/packages/libvirt
> @@ -87,16 +87,16 @@ usr/bin/virt-xml-validate
> #usr/lib/libvirt
> #usr/lib/libvirt-admin.so
> usr/lib/libvirt-admin.so.0
> -usr/lib/libvirt-admin.so.0.10007.0
> +usr/lib/libvirt-admin.so.0.10010.0
> #usr/lib/libvirt-lxc.so
> usr/lib/libvirt-lxc.so.0
> -usr/lib/libvirt-lxc.so.0.10007.0
> +usr/lib/libvirt-lxc.so.0.10010.0
> #usr/lib/libvirt-qemu.so
> usr/lib/libvirt-qemu.so.0
> -usr/lib/libvirt-qemu.so.0.10007.0
> +usr/lib/libvirt-qemu.so.0.10010.0
> #usr/lib/libvirt.so
> usr/lib/libvirt.so.0
> -usr/lib/libvirt.so.0.10007.0
> +usr/lib/libvirt.so.0.10010.0
> #usr/lib/libvirt/connection-driver
> usr/lib/libvirt/connection-driver/libvirt_driver_ch.so
> usr/lib/libvirt/connection-driver/libvirt_driver_interface.so
> @@ -247,29 +247,73 @@ usr/share/libvirt/cpu_map/x86_486.xml
> usr/share/libvirt/cpu_map/x86_Broadwell-IBRS.xml
> usr/share/libvirt/cpu_map/x86_Broadwell-noTSX-IBRS.xml
> usr/share/libvirt/cpu_map/x86_Broadwell-noTSX.xml
> +usr/share/libvirt/cpu_map/x86_Broadwell-v1.xml
> +usr/share/libvirt/cpu_map/x86_Broadwell-v2.xml
> +usr/share/libvirt/cpu_map/x86_Broadwell-v3.xml
> +usr/share/libvirt/cpu_map/x86_Broadwell-v4.xml
> usr/share/libvirt/cpu_map/x86_Broadwell.xml
> usr/share/libvirt/cpu_map/x86_Cascadelake-Server-noTSX.xml
> +usr/share/libvirt/cpu_map/x86_Cascadelake-Server-v1.xml
> +usr/share/libvirt/cpu_map/x86_Cascadelake-Server-v2.xml
> +usr/share/libvirt/cpu_map/x86_Cascadelake-Server-v3.xml
> +usr/share/libvirt/cpu_map/x86_Cascadelake-Server-v4.xml
> +usr/share/libvirt/cpu_map/x86_Cascadelake-Server-v5.xml
> usr/share/libvirt/cpu_map/x86_Cascadelake-Server.xml
> usr/share/libvirt/cpu_map/x86_Conroe.xml
> +usr/share/libvirt/cpu_map/x86_Cooperlake-v1.xml
> +usr/share/libvirt/cpu_map/x86_Cooperlake-v2.xml
> usr/share/libvirt/cpu_map/x86_Cooperlake.xml
> +usr/share/libvirt/cpu_map/x86_Denverton-v1.xml
> +usr/share/libvirt/cpu_map/x86_Denverton-v2.xml
> +usr/share/libvirt/cpu_map/x86_Denverton-v3.xml
> +usr/share/libvirt/cpu_map/x86_Denverton.xml
> +usr/share/libvirt/cpu_map/x86_Dhyana-v1.xml
> +usr/share/libvirt/cpu_map/x86_Dhyana-v2.xml
> usr/share/libvirt/cpu_map/x86_Dhyana.xml
> usr/share/libvirt/cpu_map/x86_EPYC-Genoa.xml
> usr/share/libvirt/cpu_map/x86_EPYC-IBPB.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-Milan-v1.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-Milan-v2.xml
> usr/share/libvirt/cpu_map/x86_EPYC-Milan.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-Rome-v1.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-Rome-v2.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-Rome-v3.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-Rome-v4.xml
> usr/share/libvirt/cpu_map/x86_EPYC-Rome.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-v1.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-v2.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-v3.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-v4.xml
> usr/share/libvirt/cpu_map/x86_EPYC.xml
> +usr/share/libvirt/cpu_map/x86_GraniteRapids-v1.xml
> usr/share/libvirt/cpu_map/x86_GraniteRapids.xml
> usr/share/libvirt/cpu_map/x86_Haswell-IBRS.xml
> usr/share/libvirt/cpu_map/x86_Haswell-noTSX-IBRS.xml
> usr/share/libvirt/cpu_map/x86_Haswell-noTSX.xml
> +usr/share/libvirt/cpu_map/x86_Haswell-v1.xml
> +usr/share/libvirt/cpu_map/x86_Haswell-v2.xml
> +usr/share/libvirt/cpu_map/x86_Haswell-v3.xml
> +usr/share/libvirt/cpu_map/x86_Haswell-v4.xml
> usr/share/libvirt/cpu_map/x86_Haswell.xml
> usr/share/libvirt/cpu_map/x86_Icelake-Client-noTSX.xml
> usr/share/libvirt/cpu_map/x86_Icelake-Client.xml
> usr/share/libvirt/cpu_map/x86_Icelake-Server-noTSX.xml
> +usr/share/libvirt/cpu_map/x86_Icelake-Server-v1.xml
> +usr/share/libvirt/cpu_map/x86_Icelake-Server-v2.xml
> +usr/share/libvirt/cpu_map/x86_Icelake-Server-v3.xml
> +usr/share/libvirt/cpu_map/x86_Icelake-Server-v4.xml
> +usr/share/libvirt/cpu_map/x86_Icelake-Server-v5.xml
> +usr/share/libvirt/cpu_map/x86_Icelake-Server-v6.xml
> +usr/share/libvirt/cpu_map/x86_Icelake-Server-v7.xml
> usr/share/libvirt/cpu_map/x86_Icelake-Server.xml
> usr/share/libvirt/cpu_map/x86_IvyBridge-IBRS.xml
> +usr/share/libvirt/cpu_map/x86_IvyBridge-v1.xml
> +usr/share/libvirt/cpu_map/x86_IvyBridge-v2.xml
> usr/share/libvirt/cpu_map/x86_IvyBridge.xml
> +usr/share/libvirt/cpu_map/x86_KnightsMill.xml
> usr/share/libvirt/cpu_map/x86_Nehalem-IBRS.xml
> +usr/share/libvirt/cpu_map/x86_Nehalem-v1.xml
> +usr/share/libvirt/cpu_map/x86_Nehalem-v2.xml
> usr/share/libvirt/cpu_map/x86_Nehalem.xml
> usr/share/libvirt/cpu_map/x86_Opteron_G1.xml
> usr/share/libvirt/cpu_map/x86_Opteron_G2.xml
> @@ -278,16 +322,38 @@ usr/share/libvirt/cpu_map/x86_Opteron_G4.xml
> usr/share/libvirt/cpu_map/x86_Opteron_G5.xml
> usr/share/libvirt/cpu_map/x86_Penryn.xml
> usr/share/libvirt/cpu_map/x86_SandyBridge-IBRS.xml
> +usr/share/libvirt/cpu_map/x86_SandyBridge-v1.xml
> +usr/share/libvirt/cpu_map/x86_SandyBridge-v2.xml
> usr/share/libvirt/cpu_map/x86_SandyBridge.xml
> +usr/share/libvirt/cpu_map/x86_SapphireRapids-v1.xml
> +usr/share/libvirt/cpu_map/x86_SapphireRapids-v2.xml
> +usr/share/libvirt/cpu_map/x86_SapphireRapids-v3.xml
> usr/share/libvirt/cpu_map/x86_SapphireRapids.xml
> +usr/share/libvirt/cpu_map/x86_SierraForest-v1.xml
> +usr/share/libvirt/cpu_map/x86_SierraForest.xml
> usr/share/libvirt/cpu_map/x86_Skylake-Client-IBRS.xml
> usr/share/libvirt/cpu_map/x86_Skylake-Client-noTSX-IBRS.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Client-v1.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Client-v2.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Client-v3.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Client-v4.xml
> usr/share/libvirt/cpu_map/x86_Skylake-Client.xml
> usr/share/libvirt/cpu_map/x86_Skylake-Server-IBRS.xml
> usr/share/libvirt/cpu_map/x86_Skylake-Server-noTSX-IBRS.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Server-v1.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Server-v2.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Server-v3.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Server-v4.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Server-v5.xml
> usr/share/libvirt/cpu_map/x86_Skylake-Server.xml
> +usr/share/libvirt/cpu_map/x86_Snowridge-v1.xml
> +usr/share/libvirt/cpu_map/x86_Snowridge-v2.xml
> +usr/share/libvirt/cpu_map/x86_Snowridge-v3.xml
> +usr/share/libvirt/cpu_map/x86_Snowridge-v4.xml
> usr/share/libvirt/cpu_map/x86_Snowridge.xml
> usr/share/libvirt/cpu_map/x86_Westmere-IBRS.xml
> +usr/share/libvirt/cpu_map/x86_Westmere-v1.xml
> +usr/share/libvirt/cpu_map/x86_Westmere-v2.xml
> usr/share/libvirt/cpu_map/x86_Westmere.xml
> usr/share/libvirt/cpu_map/x86_athlon.xml
> usr/share/libvirt/cpu_map/x86_core2duo.xml
> diff --git a/lfs/libvirt b/lfs/libvirt
> index a497ed868..ed076a781 100644
> --- a/lfs/libvirt
> +++ b/lfs/libvirt
> @@ -26,7 +26,7 @@ include Config
>
> SUMMARY = Server side daemon and supporting files for libvirt
>
> -VER = 10.7.0
> +VER = 10.10.0
>
> THISAPP = libvirt-$(VER)
> DL_FILE = $(THISAPP).tar.xz
> @@ -35,9 +35,9 @@ DIR_APP = $(DIR_SRC)/$(THISAPP)
> TARGET = $(DIR_INFO)/$(THISAPP)
> SUP_ARCH = x86_64 aarch64
> PROG = libvirt
> -PAK_VER = 36
> +PAK_VER = 37
>
> -DEPS = ebtables libpciaccess libyajl qemu
> +DEPS = ebtables libpciaccess qemu
>
> SERVICES = libvirtd virtlogd
>
> @@ -49,7 +49,7 @@ objects = $(DL_FILE)
>
> $(DL_FILE) = $(DL_FROM)/$(DL_FILE)
>
> -$(DL_FILE)_BLAKE2 = 331f8c01395c70536ac094a156810f93cd85aab9f25bdde40633698a27f5863cb5c88c520199a5182318f376cb1a3484f3c487da74a41925a521c4a305c51f13
> +$(DL_FILE)_BLAKE2 = 8042ce1493c3ffd6e6deeb7d94d0744da18850fe416480487a57ffd33bf3390f587849f308aad12fd38c887628f90137ba717ea11ef7e0f73a97b157fa985a6e
>
> install : $(TARGET)
> check : $(patsubst %,$(DIR_CHK)/%,$(objects))
> --
> 2.47.1
>