From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michael Tremer
To: development@lists.ipfire.org
Subject: Re: [PATCH 1/2] libvirt: Update to version 10.10.0
Date: Mon, 23 Dec 2024 12:29:09 +0100
Message-ID: <3F8E18BC-CEFE-48AE-BC3F-4530BDEFF69C@ipfire.org>
In-Reply-To: <20241220114002.78636-1-adolf.belka@ipfire.org>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============2474749073693199129=="
List-Id:

--===============2474749073693199129==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

Reviewed-by: Michael Tremer

Thank you very much for looking at this so quickly :)

-Michael

> On 20 Dec 2024, at 12:40, Adolf Belka wrote:
>
> - Update from version 10.7.0 to 10.10.0
> - Update of rootfile
> - Version 10.7.0 contained a change that replaced the script-friendly output
>   of ``virsh list --uuid``. This change was reverted in version 10.8.0.
> - In version 10.8.0 libyajl was replaced by json-c for JSON parsing and
>   formatting. Therefore this patch set also removes libyajl from IPFire as
>   it is no longer required.
> - Changelog
>   10.10.0
>   New features
>   * qemu: add multi boot device support on s390x
>     For classical mainframe guests (i.e. LPAR or z/VM installations), you
>     always have to explicitly specify the disk you want to boot from (or
>     "IPL" from, in s390x-speak -- IPL means "Initial Program Load").
>     In the past QEMU only used the first device in the boot order to IPL
>     from. With the new multi boot device support on s390x that is available
>     with QEMU version 9.2 and newer, this limitation is lifted. If the IPL
>     fails for the first device with the lowest boot index, the device with
>     the second lowest boot index will be tried, and so on, until IPL is
>     successful or there are no remaining boot devices to try.
>     Limitation: The s390x BIOS will try to IPL up to 8 devices in total, any
>     number of which may be disks or network devices.
>   * qemu: Add support for versioned CPU models
>     Updates to QEMU CPU models with a -vN suffix can now be used in libvirt
>     just like any other CPU model.
>   * qemu: Support for the 'data-file' QCOW2 image feature
>     The QEMU hypervisor driver now supports QCOW2 images with the 'data-file'
>     feature present (both when probing from the image itself and when
>     specified explicitly via the ``<dataStore>`` element). This can be useful
>     when it's required to keep data "raw" on disk, but the use case requires
>     features of the QCOW2 format such as incremental backups.
>   * swtpm: Add support for profiles
>     The upcoming swtpm release will have TPM profile support that allows
>     restricting a TPM's provided set of crypto algorithms and commands.
>     Users can now select a profile by using ``<profile/>`` in their TPM XML
>     definition.
>   Improvements
>   * qemu: Support UEFI NVRAM images on block storage
>     Libvirt now allows users to use block storage as the backend for UEFI
>     NVRAM images and allows them to be in a format different from the
>     template. When qcow2 is used as the format, the images are now also
>     auto-populated from the template.
>   * qemu: Automatically add IOMMU when needed
>     When a domain of 'qemu' or 'kvm' type has more than 255 vCPUs, an IOMMU
>     with EIM mode is required. Starting with this release libvirt
>     automatically adds one (or turns on the EIM mode if there's an IOMMU
>     without it).
>   * ch: allow hostdevs in domain definition
>     The Cloud Hypervisor driver (ch) now supports ``<hostdev/>`` devices.
>   * ch: Enable callbacks for ch domain events
>     The Cloud Hypervisor driver (ch) now supports emitting events on domain
>     define, undefine, start, boot, stop and destroy.
>   Bug fixes
>   * qemu: Fix reversion and inactive deletion of internal snapshots with
>     UEFI NVRAM
>     In `v10.9.0 (2024-11-01)`_ creation of internal snapshots of VMs with
>     UEFI firmware was allowed, but certain operations such as reversion or
>     inactive deletion didn't work properly as they didn't consider the NVRAM
>     qcow2 file.
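A side note for anyone who wants to try the versioned CPU models with this update: the rootfile now ships files such as x86_Skylake-Client-v4.xml, and as far as I understand these are selected like any other named model. A hypothetical sketch (not part of the patch, model name taken from the rootfile):

```xml
<!-- Hypothetical guest CPU definition selecting one of the versioned
     models now shipped under usr/share/libvirt/cpu_map/ -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Skylake-Client-v4</model>
</cpu>
```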
>   * virnetdevopenvswitch: Warn on unsupported QoS settings
>     For OpenVSwitch vNICs libvirt does not set QoS directly using 'tc' but
>     offloads the setting to OVS. But OVS is not as featureful as libvirt in
>     this regard, and setting a different 'peak' than 'average' results in
>     the vNIC always sticking with 'peak'. Produce a warning if that's the
>     case.
>   10.9.0
>   New features
>   * qemu: zero block detection for non-shared-storage migration
>     Users can now request that all-zero blocks are not transferred when
>     migrating non-shared disk data, without actually enabling zero detection
>     on the disk itself. This allows sparsifying images during migration
>     where the source has no access to the allocation state of blocks, at the
>     cost of CPU overhead. This feature is available via the
>     ``--migrate-disks-detect-zeroes`` option for ``virsh migrate`` or the
>     ``VIR_MIGRATE_PARAM_MIGRATE_DISKS_DETECT_ZEROES`` migration parameter.
>     See the documentation for caveats.
>   Improvements
>   * qemu: internal snapshot improvements
>     The qemu internal snapshot handling code was updated to use modern
>     commands which avoid the problems of the old ones, which prevented the
>     use of internal snapshots on VMs with UEFI NVRAM. Internal snapshots of
>     VMs using UEFI are now possible provided that the NVRAM is in ``qcow2``
>     format.
>     The new code also allows better control when deleting snapshots. To
>     prevent possible regressions no strict checking is done, but in case an
>     inconsistent state is encountered a log message is added::
>
>       warning : qemuSnapshotActiveInternalDeleteGetDevices:3841 : inconsistent
>       internal snapshot state (deletion): VM='snap' snapshot='1727959843'
>       missing='vda ' unexpected='' extra=''
>
>     Users are encouraged to report any occurrence of the above message,
>     along with the steps they took, to the upstream tracker.
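For anyone testing the UEFI internal snapshot support mentioned above: the NVRAM has to be in qcow2 format, which (if I read the libvirt docs correctly) is declared via the ``format`` attribute of the ``nvram`` element. A rough sketch, with firmware and NVRAM paths invented for illustration:

```xml
<os>
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
  <!-- format='qcow2' is what makes internal snapshots with UEFI possible -->
  <nvram type='file' format='qcow2'>
    <source file='/var/lib/libvirt/qemu/nvram/guest_VARS.qcow2'/>
  </nvram>
</os>
```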
>   * qemu: improve documentation of image format settings
>     The documentation of the various ``*_image_format`` settings in
>     ``qemu.conf`` implied they can only be used to control compression of
>     the image. The documentation has been improved to clarify that the
>     settings describe the representation of guest memory blocks on disk,
>     which includes compression among other possible layouts.
>   * Report CPU model blockers in domain capabilities
>     When a CPU model is reported as usable='no' an additional
>     ``<blockers/>`` element is added for that CPU model, listing features
>     required by the CPU model but not supported on the host.
>   10.8.0
>   Improvements
>   * network: make networks with ``<forward mode='open'/>`` more useful
>     It is now permissible to have a ``<forward mode='open'/>`` network that
>     has no IP address assigned to the host's port of the bridge. This is the
>     only way to create a libvirt network where guests are unreachable from
>     the host (and vice versa) and also no firewall rules are added on the
>     host.
>     It is now also possible for a ``<forward mode='open'/>`` network to use
>     the ``zone`` attribute of ``<bridge>`` to set the firewalld zone of the
>     bridge interface (normally it would not be set, as is done with other
>     forward modes).
>   * storage: Lessen dependency on the ``showmount`` program
>     Libvirt now automatically detects the presence of ``showmount`` at
>     runtime, as we do with other helper programs, and the
>     ``daemon-driver-storage-core`` RPM package no longer strongly depends on
>     it, should users wish for a more minimal deployment.
>   * Switch from YAJL to json-c for JSON parsing and formatting
>     The parser and formatter in the libvirt library, as well as the parsers
>     in the nss plugin, were rewritten to use json-c instead of YAJL, which
>     is effectively dead upstream.
>   * Relax restrictions for memorytune settings
>     It should now be possible to use resctrl on AMD CPUs as well as Intel
>     CPUs when the resctrl filesystem is mounted with the ``mba_MBps``
>     option.
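The "open" forward mode change above can be illustrated with a minimal network definition. The bridge and network names here are invented; the ``zone`` attribute and the omitted IP address are the parts that are newly allowed for this mode:

```xml
<network>
  <name>isolated0</name>
  <!-- no <ip> element: the host's port of the bridge gets no address
       and libvirt adds no firewall rules for this network -->
  <forward mode='open'/>
  <bridge name='virbr9' zone='libvirt'/>
</network>
```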
>   Bug fixes
>   * virsh: Fix script-friendly output of ``virsh list --uuid``
>     The script-friendly output of just one UUID per line was mistakenly
>     replaced by the full human-targeted table view, full of redundant
>     information and very hard to parse. Users who wish to see the UUIDs in
>     the tabular output need to use ``virsh list --table --uuid``, as the old
>     behaviour was reverted.
>     Note that this also broke the ``libvirt-guests`` script. The bug was
>     introduced in `v10.7.0 (2024-09-02)`_.
>   * network/qemu: fix some cases where ``device-update`` of a network
>     interface was failing:
>     * If the interface was connected to a libvirt network that was providing
>       a pool of VFs to be used with macvtap passthrough mode, then *any*
>       update to the interface would fail, even changing the link state.
>       Updating (the updateable parts of) a macvtap passthrough interface
>       will now succeed.
>     * It previously was not possible to move an interface from a Linux host
>       bridge to an OVS bridge. This (and the opposite direction) now works.
>   * qemu: backup: Fix possible crashes when running monitoring commands
>     during a backup job
>     The qemu monitor code was fixed to not crash in specific cases when
>     monitoring APIs are called during a backup job.
>   * Fix various memleaks and overflows
>     Multiple memory leaks and overflows in corner cases were fixed based on
>     reported upstream issues.
>   * network: Better cleanup after disappeared networks
>     If a network disappeared while virtnetworkd was not running, not all
>     clean-up was done properly once the daemon was started, especially when
>     only the network interface disappeared. This could in some cases have
>     resulted in the network being shown as inactive but not being able to
>     start.
>   * qemu: Remember memory backing directory for domains
>     If ``memory_backing_dir`` is changed during the lifetime of a domain
>     with file-backed memory, files in the old directory would not be cleaned
>     up once the domain is shut down.
>     Now the directory that was used during startup is remembered for each
>     running domain.
>
> Signed-off-by: Adolf Belka
> ---
>  config/rootfiles/packages/libvirt | 74 +++++++++++++++++++++++++++++--
>  lfs/libvirt                       |  8 ++--
>  2 files changed, 74 insertions(+), 8 deletions(-)
>
> diff --git a/config/rootfiles/packages/libvirt b/config/rootfiles/packages/libvirt
> index 32fdd5cce..55bd39a4e 100644
> --- a/config/rootfiles/packages/libvirt
> +++ b/config/rootfiles/packages/libvirt
> @@ -87,16 +87,16 @@ usr/bin/virt-xml-validate
>  #usr/lib/libvirt
>  #usr/lib/libvirt-admin.so
>  usr/lib/libvirt-admin.so.0
> -usr/lib/libvirt-admin.so.0.10007.0
> +usr/lib/libvirt-admin.so.0.10010.0
>  #usr/lib/libvirt-lxc.so
>  usr/lib/libvirt-lxc.so.0
> -usr/lib/libvirt-lxc.so.0.10007.0
> +usr/lib/libvirt-lxc.so.0.10010.0
>  #usr/lib/libvirt-qemu.so
>  usr/lib/libvirt-qemu.so.0
> -usr/lib/libvirt-qemu.so.0.10007.0
> +usr/lib/libvirt-qemu.so.0.10010.0
>  #usr/lib/libvirt.so
>  usr/lib/libvirt.so.0
> -usr/lib/libvirt.so.0.10007.0
> +usr/lib/libvirt.so.0.10010.0
>  #usr/lib/libvirt/connection-driver
>  usr/lib/libvirt/connection-driver/libvirt_driver_ch.so
>  usr/lib/libvirt/connection-driver/libvirt_driver_interface.so
> @@ -247,29 +247,73 @@ usr/share/libvirt/cpu_map/x86_486.xml
>  usr/share/libvirt/cpu_map/x86_Broadwell-IBRS.xml
>  usr/share/libvirt/cpu_map/x86_Broadwell-noTSX-IBRS.xml
>  usr/share/libvirt/cpu_map/x86_Broadwell-noTSX.xml
> +usr/share/libvirt/cpu_map/x86_Broadwell-v1.xml
> +usr/share/libvirt/cpu_map/x86_Broadwell-v2.xml
> +usr/share/libvirt/cpu_map/x86_Broadwell-v3.xml
> +usr/share/libvirt/cpu_map/x86_Broadwell-v4.xml
>  usr/share/libvirt/cpu_map/x86_Broadwell.xml
>  usr/share/libvirt/cpu_map/x86_Cascadelake-Server-noTSX.xml
> +usr/share/libvirt/cpu_map/x86_Cascadelake-Server-v1.xml
> +usr/share/libvirt/cpu_map/x86_Cascadelake-Server-v2.xml
> +usr/share/libvirt/cpu_map/x86_Cascadelake-Server-v3.xml
> +usr/share/libvirt/cpu_map/x86_Cascadelake-Server-v4.xml
> +usr/share/libvirt/cpu_map/x86_Cascadelake-Server-v5.xml
>  usr/share/libvirt/cpu_map/x86_Cascadelake-Server.xml
>  usr/share/libvirt/cpu_map/x86_Conroe.xml
> +usr/share/libvirt/cpu_map/x86_Cooperlake-v1.xml
> +usr/share/libvirt/cpu_map/x86_Cooperlake-v2.xml
>  usr/share/libvirt/cpu_map/x86_Cooperlake.xml
> +usr/share/libvirt/cpu_map/x86_Denverton-v1.xml
> +usr/share/libvirt/cpu_map/x86_Denverton-v2.xml
> +usr/share/libvirt/cpu_map/x86_Denverton-v3.xml
> +usr/share/libvirt/cpu_map/x86_Denverton.xml
> +usr/share/libvirt/cpu_map/x86_Dhyana-v1.xml
> +usr/share/libvirt/cpu_map/x86_Dhyana-v2.xml
>  usr/share/libvirt/cpu_map/x86_Dhyana.xml
>  usr/share/libvirt/cpu_map/x86_EPYC-Genoa.xml
>  usr/share/libvirt/cpu_map/x86_EPYC-IBPB.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-Milan-v1.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-Milan-v2.xml
>  usr/share/libvirt/cpu_map/x86_EPYC-Milan.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-Rome-v1.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-Rome-v2.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-Rome-v3.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-Rome-v4.xml
>  usr/share/libvirt/cpu_map/x86_EPYC-Rome.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-v1.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-v2.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-v3.xml
> +usr/share/libvirt/cpu_map/x86_EPYC-v4.xml
>  usr/share/libvirt/cpu_map/x86_EPYC.xml
> +usr/share/libvirt/cpu_map/x86_GraniteRapids-v1.xml
>  usr/share/libvirt/cpu_map/x86_GraniteRapids.xml
>  usr/share/libvirt/cpu_map/x86_Haswell-IBRS.xml
>  usr/share/libvirt/cpu_map/x86_Haswell-noTSX-IBRS.xml
>  usr/share/libvirt/cpu_map/x86_Haswell-noTSX.xml
> +usr/share/libvirt/cpu_map/x86_Haswell-v1.xml
> +usr/share/libvirt/cpu_map/x86_Haswell-v2.xml
> +usr/share/libvirt/cpu_map/x86_Haswell-v3.xml
> +usr/share/libvirt/cpu_map/x86_Haswell-v4.xml
>  usr/share/libvirt/cpu_map/x86_Haswell.xml
>  usr/share/libvirt/cpu_map/x86_Icelake-Client-noTSX.xml
>  usr/share/libvirt/cpu_map/x86_Icelake-Client.xml
>  usr/share/libvirt/cpu_map/x86_Icelake-Server-noTSX.xml
> +usr/share/libvirt/cpu_map/x86_Icelake-Server-v1.xml
> +usr/share/libvirt/cpu_map/x86_Icelake-Server-v2.xml
> +usr/share/libvirt/cpu_map/x86_Icelake-Server-v3.xml
> +usr/share/libvirt/cpu_map/x86_Icelake-Server-v4.xml
> +usr/share/libvirt/cpu_map/x86_Icelake-Server-v5.xml
> +usr/share/libvirt/cpu_map/x86_Icelake-Server-v6.xml
> +usr/share/libvirt/cpu_map/x86_Icelake-Server-v7.xml
>  usr/share/libvirt/cpu_map/x86_Icelake-Server.xml
>  usr/share/libvirt/cpu_map/x86_IvyBridge-IBRS.xml
> +usr/share/libvirt/cpu_map/x86_IvyBridge-v1.xml
> +usr/share/libvirt/cpu_map/x86_IvyBridge-v2.xml
>  usr/share/libvirt/cpu_map/x86_IvyBridge.xml
> +usr/share/libvirt/cpu_map/x86_KnightsMill.xml
>  usr/share/libvirt/cpu_map/x86_Nehalem-IBRS.xml
> +usr/share/libvirt/cpu_map/x86_Nehalem-v1.xml
> +usr/share/libvirt/cpu_map/x86_Nehalem-v2.xml
>  usr/share/libvirt/cpu_map/x86_Nehalem.xml
>  usr/share/libvirt/cpu_map/x86_Opteron_G1.xml
>  usr/share/libvirt/cpu_map/x86_Opteron_G2.xml
> @@ -278,16 +322,38 @@ usr/share/libvirt/cpu_map/x86_Opteron_G4.xml
>  usr/share/libvirt/cpu_map/x86_Opteron_G5.xml
>  usr/share/libvirt/cpu_map/x86_Penryn.xml
>  usr/share/libvirt/cpu_map/x86_SandyBridge-IBRS.xml
> +usr/share/libvirt/cpu_map/x86_SandyBridge-v1.xml
> +usr/share/libvirt/cpu_map/x86_SandyBridge-v2.xml
>  usr/share/libvirt/cpu_map/x86_SandyBridge.xml
> +usr/share/libvirt/cpu_map/x86_SapphireRapids-v1.xml
> +usr/share/libvirt/cpu_map/x86_SapphireRapids-v2.xml
> +usr/share/libvirt/cpu_map/x86_SapphireRapids-v3.xml
>  usr/share/libvirt/cpu_map/x86_SapphireRapids.xml
> +usr/share/libvirt/cpu_map/x86_SierraForest-v1.xml
> +usr/share/libvirt/cpu_map/x86_SierraForest.xml
>  usr/share/libvirt/cpu_map/x86_Skylake-Client-IBRS.xml
>  usr/share/libvirt/cpu_map/x86_Skylake-Client-noTSX-IBRS.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Client-v1.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Client-v2.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Client-v3.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Client-v4.xml
>  usr/share/libvirt/cpu_map/x86_Skylake-Client.xml
>  usr/share/libvirt/cpu_map/x86_Skylake-Server-IBRS.xml
>  usr/share/libvirt/cpu_map/x86_Skylake-Server-noTSX-IBRS.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Server-v1.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Server-v2.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Server-v3.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Server-v4.xml
> +usr/share/libvirt/cpu_map/x86_Skylake-Server-v5.xml
>  usr/share/libvirt/cpu_map/x86_Skylake-Server.xml
> +usr/share/libvirt/cpu_map/x86_Snowridge-v1.xml
> +usr/share/libvirt/cpu_map/x86_Snowridge-v2.xml
> +usr/share/libvirt/cpu_map/x86_Snowridge-v3.xml
> +usr/share/libvirt/cpu_map/x86_Snowridge-v4.xml
>  usr/share/libvirt/cpu_map/x86_Snowridge.xml
>  usr/share/libvirt/cpu_map/x86_Westmere-IBRS.xml
> +usr/share/libvirt/cpu_map/x86_Westmere-v1.xml
> +usr/share/libvirt/cpu_map/x86_Westmere-v2.xml
>  usr/share/libvirt/cpu_map/x86_Westmere.xml
>  usr/share/libvirt/cpu_map/x86_athlon.xml
>  usr/share/libvirt/cpu_map/x86_core2duo.xml
> diff --git a/lfs/libvirt b/lfs/libvirt
> index a497ed868..ed076a781 100644
> --- a/lfs/libvirt
> +++ b/lfs/libvirt
> @@ -26,7 +26,7 @@ include Config
>
>  SUMMARY = Server side daemon and supporting files for libvirt
>
> -VER = 10.7.0
> +VER = 10.10.0
>
>  THISAPP = libvirt-$(VER)
>  DL_FILE = $(THISAPP).tar.xz
> @@ -35,9 +35,9 @@ DIR_APP = $(DIR_SRC)/$(THISAPP)
>  TARGET = $(DIR_INFO)/$(THISAPP)
>  SUP_ARCH = x86_64 aarch64
>  PROG = libvirt
> -PAK_VER = 36
> +PAK_VER = 37
>
> -DEPS = ebtables libpciaccess libyajl qemu
> +DEPS = ebtables libpciaccess qemu
>
>  SERVICES = libvirtd virtlogd
>
> @@ -49,7 +49,7 @@ objects = $(DL_FILE)
>
>  $(DL_FILE) = $(DL_FROM)/$(DL_FILE)
>
> -$(DL_FILE)_BLAKE2 = 331f8c01395c70536ac094a156810f93cd85aab9f25bdde40633698a27f5863cb5c88c520199a5182318f376cb1a3484f3c487da74a41925a521c4a305c51f13
> +$(DL_FILE)_BLAKE2 = 8042ce1493c3ffd6e6deeb7d94d0744da18850fe416480487a57ffd33bf3390f587849f308aad12fd38c887628f90137ba717ea11ef7e0f73a97b157fa985a6e
>
>  install : $(TARGET)
>  check : $(patsubst %,$(DIR_CHK)/%,$(objects))
> --
> 2.47.1
>

--===============2474749073693199129==--