/r/VFIO
This is a subreddit to discuss all things related to VFIO and gaming on virtual machines in general.
What is VFIO?
VFIO stands for Virtual Function I/O. VFIO is a device driver used to assign devices to virtual machines. One of the most common uses of VFIO is setting up a virtual machine with full access to a dedicated GPU. This enables near-bare-metal gaming performance in a Windows VM, offering a great alternative to dual-booting.
The wiki will be a one-stop shop for all things related to VFIO. Right now it links to a small number of resources, but it will be updated constantly.
To join the VFIO Discord server, click here.
To join the VFIO Steam group, click here.
1) No harassment
2) No shilling
3) No discussion of developing cheats
My Windows 11 VM runs on a physical NVMe SSD; the other NVMe SSD holds my Fedora 41 host OS. I just wanted to boot into it, and suddenly it asked me to either reset, continue to boot, or always continue to boot (it looked like this). I clicked "always continue boot", and then I got a GRUB command line. What do I do?
hardware from fastfetch:
OS: Fedora Linux 41 (KDE Plasma) x86_64
Host: 82WK (Legion Pro 5 16IRX8)
Kernel: Linux 6.11.6-cb2.0.fc41.x86_64
Packages: 2230 (rpm), 21 (flatpak)
Shell: bash 5.2.32
Display (CSO161D): 2560x1600 @ 165 Hz (as 2134x1334) in 16" [Built-in]
Theme: Breeze (Dark) [Qt], Breeze [GTK3]
Icons: breeze-dark [Qt], breeze-dark [GTK3/4]
Font: Noto Sans (10pt) [Qt], Noto Sans (10pt) [GTK3/4]
CPU: 13th Gen Intel(R) Core(TM) i7-13700HX (24) @ 5.00 GHz
GPU 1: NVIDIA GeForce RTX 4060 Max-Q / Mobile
GPU 2: Intel Raptor Lake-S UHD Graphics @ 1.55 GHz [Integrated]
Memory: 7.09 GiB / 15.36 GiB (46%)
Swap: 0 B / 15.36 GiB (0%)
Disk (/): 87.26 GiB / 929.93 GiB (9%) - btrfs
Local IP (wlp0s20f3): no
Battery (L22X4PC0): 100% [AC Connected]
Locale: en_US.UTF-8
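If the firmware drops you at a GRUB prompt, you can usually locate the Fedora boot files and load the normal menu by hand. The partition names below are examples, so substitute whatever `ls` actually shows on your machine:

```shell
grub> ls                                      # list disks/partitions, e.g. (hd0,gpt1) (hd0,gpt2) ...
grub> ls (hd0,gpt2)/                          # look for grub2/, vmlinuz-*, initramfs-*
grub> configfile (hd0,gpt2)/grub2/grub.cfg    # load the regular boot menu from that partition
```

If the menu appears and Fedora boots, re-running grub2-mkconfig (and reinstalling GRUB if needed) from the booted system should make the fix permanent.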
Hello, I have a problem where I can no longer launch my VM due to stricter kernel rules about IOMMU groups, and I'm trying to fix it and would like some help. I'm getting these errors in dmesg when trying to run the VM. I use a 3060 as my second GPU and an RX 7800 XT as my main GPU, and I have no idea how to get around this. Any help would be appreciated. Thanks, Ozzy
UPDATE: It turns out that leaving Pre-boot DMA Protection enabled in the BIOS turns on memory-access hardening in the Zen kernel, preventing the card from connecting to the VM. After turning the option off, my VM starts.
[ 49.405643] vfio-pci 0000:05:00.0: Firmware has requested this device have a 1:1 IOMMU mapping, rejecting configuring the device without a 1:1 mapping. Contact your platform vendor.
[ 49.405653] vfio-pci 0000:05:00.0: Firmware has requested this device have a 1:1 IOMMU mapping, rejecting configuring the device without a 1:1 mapping. Contact your platform vendor.
Hello, everyone at r/VFIO,
I recently dove into setting up a gaming VM on Windows 10. I'm using Hyper-V on my Windows 10 Pro 22H2 host and created a VM with GPU-PV, allocating 80% of my RTX 3060 TI to the VM. My goal is to maximize performance while ensuring stability—hence, the 80% allocation to avoid potential system crashes.
Now, I have a few questions:
Am I on the right track? Is it essential to be on Linux with QEMU/KVM or other paravirtualization systems to get an effective gaming VM setup, or can this be done just as well with Hyper-V on a Windows 10 Pro 22H2 host (with a Windows 10 Pro 22H2 guest)?
My main issue so far is with Roblox, which seems to detect the VM due to its Hyperion and anti-VM measures. Is it normal for Hyper-V to reveal it’s a VM? From what I understand, Hyper-V doesn’t hide this fact, and making a stealthy VM often involves disabling the hypervisor, which seriously impacts performance.
Since many people seem to use similar setups, I’m curious if there are other ways to create a "stealthy gaming VM" with GPU passthrough on Windows—or if that’s mostly a Linux-exclusive advantage.
I want to add that I still have my old AMD Radeon RX 580, and it could, if ultimately needed, be used in the VM.
Source of the GPU paravirtualization guide I used:
Thanks in advance to anyone who can help. Have a great day!
Is this possible on any laptop? Does having a MUX switch, like on the Zephyrus M16, matter?
It's not important that they both display simultaneously in the sense of both showing on the screen at once, though that would be ideal. But they should at least be able to display "simultaneously" in the sense that you could alt+tab between a fullscreen VM and the host seamlessly while a game or AI workload is running in the guest.
This is referring to without external monitors—though just as a learning opportunity it would be nice to understand if the iGPU can display to the laptop monitor while the dGPU displays to an external monitor without having any limitations like “actually” routing through the iGPU or something unexpected.
Hey Everyone,
I am absolutely a beginner, so if I missed any info, feel free to ask me to provide it.
I've been having an issue where my Windows 11 VM will randomly freeze and CPU usage drops to 1%. This always happens AFTER I log in, sometimes immediately, sometimes after a while (although usually sooner rather than later).
What I have done:
GRUB_CMDLINE_LINUX_DEFAULT="rd.driver.pre=vfio-pci intel_iommu=on iommu=pt video=efifb:off nvidia-drm.modeset=1 i915.enable_dpcd_backlight=1 nvidia.NVreg_EnableBacklightHandler=0 nvidia.NVreg_RegistryDwords=EnableBrightnessControl=0"
/etc/modprobe.d/vfio.conf
with the GPU IDs specified as options vfio-pci ids=10de:27e0,10de:22bc
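For reference, a minimal /etc/modprobe.d/vfio.conf along those lines might look like this; the IDs are the ones from the post, and the rebuild commands depend on your distro:

```shell
# /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:27e0,10de:22bc
softdep nvidia pre: vfio-pci   # ensure vfio-pci claims the card before the NVIDIA driver loads

# Rebuild the initramfs afterwards so the binding applies at boot:
#   Fedora:        sudo dracut -f --kver "$(uname -r)"
#   Debian/Ubuntu: sudo update-initramfs -u
```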
System Info:
Logs:
XML:
https://pastebin.com/ZhDnB9Za
WIN11.log
Hello. Why does it take so long to boot a VM with single GPU passthrough? The video shows about 1:10 min before a screen appears.
I had a working VM with full GPU passthrough. After updating, the VM would not boot, so I made another one, and now it's taking forever. Here's the journalctl -f -u log:
Nov 01 18:58:07 epicman829 libvirtd[894]: internal error: Missing udev property 'ID_VENDOR_ID' on '1-2'
Nov 01 18:58:07 epicman829 libvirtd[894]: internal error: Missing udev property 'ID_VENDOR_ID' on '1-4'
Nov 01 18:58:07 epicman829 libvirtd[894]: internal error: Missing udev property 'ID_VENDOR_ID' on '1-4.5'
Nov 01 18:58:07 epicman829 libvirtd[894]: internal error: Missing udev property 'ID_VENDOR_ID' on '1-5'
Nov 01 18:58:07 epicman829 libvirtd[894]: internal error: Missing udev property 'ID_VENDOR_ID' on '1-6'
Nov 01 18:58:07 epicman829 libvirtd[894]: internal error: Missing udev property 'ID_VENDOR_ID' on '1-7'
Nov 01 18:58:11 epicman829 dnsmasq[991]: reading /etc/resolv.conf
Nov 01 18:58:11 epicman829 dnsmasq[991]: using nameserver 192.168.0.1#53
Nov 01 19:14:49 epicman829 libvirtd[894]: Client hit max requests limit 5. This may result in keep-alive timeouts. Consider tuning the max_client_requests server parameter
Nov 01 19:15:46 epicman829 libvirtd[894]: internal error: connection closed due to keepalive timeout
Nov 01 19:17:09 epicman829 libvirtd[894]: End of file while reading data: Input/output error
I have a 3070 Ti and an AMD Radeon integrated GPU, and I want to pass through the Nvidia card to a virtual machine running Windows 10.
I haven't found any guide covering that; any help would be great!
I'm trying to run games in a Windows 11 VM with GPU passthrough enabled, using an NVIDIA GPU. The setup recognizes the GPU in Device Manager, but when I launch Cyberpunk 2077, it opens briefly and then closes without any error messages. I've installed all necessary dependencies, including Visual C++ Redistributables, DirectX, and .NET Framework, and other games give me similar issues (e.g. FIFA). The GeForce Experience setup doesn't detect the GPU. Enhanced session mode is enabled. Does anyone know how to troubleshoot this kind of setup, or had similar experiences with GPU passthrough and gaming in a VM? Any help or tips would be appreciated!
I want to build a gaming PC, but I also need a server for a NAS. Is it worth combining both into one machine? My plan is to run TrueNAS as the base OS and create a Windows (or maybe Linux) VM for gaming. I understand that I need a dedicated GPU for the VM and am fine with that. But is this practical? Or should I just get another machine as a dedicated NAS?
On a side note, how is the power consumption for a setup like this? I imagine a dedicated low-power NAS would consume less power overall in the long run?
Hi everyone,
I currently have an old NVIDIA GPU on which GPU passthrough works like a charm, but I want to move on to a newer GPU soon. Probably the 7800XT.
However, I have been reading some threads here and quickly realized that most AMD GPUs suffer from a reset bug :(
I would love to run Wayland on my new computer, but VFIO would also be nice. Truly a dilemma of modern times.
How do you all deal with this? Are all Wayland enjoyers on a dual setup here?
Also, in the case of a single GPU passthrough setup: does the reset bug just prevent me from entering my host system again after shutting down the guest? Or does it make single GPU passthrough impossible altogether, since it cannot even switch the GPU from host to guest?
Thanks for reading all of my text :)
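One quick sanity check worth knowing about: newer kernels (roughly 5.15 and later) expose which reset mechanisms they have for a PCI device, and an empty or missing `reset_method` is a hint that the card may hit the reset bug when handed back to the host. A small sketch; the PCI address is a placeholder, and the second argument exists only so the function can be exercised against a fake sysfs tree:

```shell
#!/bin/sh
# Print the reset method(s) the kernel can use for a PCI device.
# dev:  PCI address like 0000:03:00.0 (find yours with: lspci -D | grep VGA)
# base: sysfs root, defaults to the live system
show_reset_method() {
  dev="$1"; base="${2:-/sys/bus/pci/devices}"
  f="$base/$dev/reset_method"
  if [ -r "$f" ]; then
    cat "$f"
  else
    echo "no reset_method attribute for $dev"
  fi
}
show_reset_method 0000:03:00.0
```

Seeing something like `flr` there is a good sign; nothing at all usually means an unpleasant time with single GPU passthrough.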
Hello guys.
I have 2 GPUs. One is an RTX 4070; the second is a weak, basic office-level Nvidia GPU.
I play games on Linux and sometimes in my Windows VM, where I do single GPU passthrough.
Now I want to detach my RTX 4070 from Linux when I want to play in the Windows VM, attach the weak one in its place, and pass the RTX 4070 through to the Windows VM, so I'd still have access to Linux. I simply want my VM with the passed-through RTX 4070 to run in a window, because I'm tired of Windows completely taking over my PC.
How to do that?
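Assuming a libvirt setup, the usual approach is to detach the card from the host driver and let the VM's <hostdev> entries claim it, then reattach it afterwards. A sketch with placeholder PCI addresses and VM name:

```shell
# Detach the RTX 4070 (example address 0000:01:00.0) from the host driver;
# vfio-pci takes it over while the desktop keeps running on the weaker GPU.
virsh nodedev-detach pci_0000_01_00_0    # GPU function
virsh nodedev-detach pci_0000_01_00_1    # its HDMI audio function
virsh start win11                        # VM with matching <hostdev> entries

# After the VM shuts down, hand the card back to the host driver:
virsh nodedev-reattach pci_0000_01_00_1
virsh nodedev-reattach pci_0000_01_00_0
```

Note that a passed-through GPU renders to its own physical outputs; to see the VM in a window on the host you would pair this with something like Looking Glass or a streaming client.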
https://www.phoronix.com/news/NVIDIA-Open-GPU-Virtualization
Apparently Nvidia has released them, but I still don't understand where or how to find them, and I've searched. I basically have an Nvidia A6000 (GA102GL) set up with the open kernel modules and drivers, and my goal is to use the GPU with Incus (previously LXD) VMs, splitting the GPU up between them. I understand SR-IOV and I use it with my Mellanox cards, but I would like to (if possible) avoid paying Nvidia a licensing fee if they have released the ability to do this without a license.
Can anyone give me some insight into this ?
I'm considering this for a new build. But I'd like to know the iommu groupings beforehand if possible.
The dGPU must be isolated, but it would be nice if the two M.2 slots on this board were also isolated.
Thanks.
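If you can find an owner of the board, the standard loop over /sys/kernel/iommu_groups will answer this. A self-contained sketch (it takes an alternate root only so it can be tested; with no argument it reads the live system):

```shell
#!/bin/sh
# List each IOMMU group and the PCI devices it contains.
list_iommu_groups() {
  base="${1:-/sys/kernel/iommu_groups}"
  for dev in "$base"/*/devices/*; do
    [ -e "$dev" ] || continue              # skip if the glob matched nothing
    group="${dev%/devices/*}"              # .../iommu_groups/<n>
    printf 'IOMMU group %s: %s\n' "${group##*/}" "${dev##*/}"
  done
}
list_iommu_groups
```

Pipe each printed address through `lspci -nns <addr>` to get human-readable device names; the dGPU and each M.2 slot should ideally sit in groups of their own.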
Hi! Looking Glass closes unexpectedly, have to start client over and over. Here is what I get. Anyone have a solution?
Hi, I have a laptop with an NVIDIA GPU and AMD CPU, I'm on Arch and followed this guide carefully https://gitlab.com/risingprismtv/single-gpu-passthrough. Upon launching the VM my GPU drivers unload but right after that my PC just reboots and next thing I see is the grub menu...
This is my custom_hooks.log:
Beginning of Startup!
Killing xinit!
Unbinding Console 1
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106M [GeForce RTX 3060 Mobile / Max-Q] [10de:2560] (rev a1)
System has an NVIDIA GPU
/usr/local/bin/vfio-startup: line 124: echo: write error: No such device
modprobe: FATAL: Module drm_kms_helper is builtin.
modprobe: FATAL: Module drm is builtin.
NVIDIA GPU Drivers Unloaded
End of Startup!
And this is my libvirtd.log:
3802: info : libvirt version: 10.8.0
3802: error : virNetSocketReadWire:1782 : End of file while reading data: Input/output error
So my setup consists of an Ubuntu server with a Debian guest that has an Intel A770 16GB passed through to it. In the Debian VM, I do a lot of transcoding with Tdarr and Sunshine. I also play games on the same GPU with Sunshine. It honestly works perfectly with no hiccups.
However, I want the option to play some anticheat games. There are a lot of anticheat games that allow VMs, so my thought was to do nested virtualization and single GPU passthrough, where I temporarily pass the GPU through to the Windows VM whenever I start it via Sunshine. The problem is that this passes over the encoder portion as well, so I can't stream with Sunshine at the same time. I do have the ability to do software encoding, but in Sunshine you can only select that to be on all the time; there isn't a way to dynamically choose hardware or software encoding depending on the launched game.
Is there a way to not passthrough the encoder portion or to share the encoder between Linux and a windows guest? Or is there a way to do this without passing through the GPU?
Hello I have a fun project that I am trying to figure out.
At the moment, I have 2 PCs in a production hall for CAD viewing. The current problem is that the PCs get really dirty (they are AIOs). To solve this, I was planning to get thin/zero clients and one corresponding server that can handle 8 and possibly more (max 20) users. I have Ethernet cables running from all workspaces to a server room.
In my dive I landed on a Proxmox server with thin clients that connect to it. CAD viewing requires a fast CPU for loading and a GPU for the initial rendering and some adjustments to the 3D model. Not all clients will be using all the resources at the same time (excluding models loaded in RAM). 8 or more VMs all running Windows seems very intensive, so I saw it would be possible to use FreeCAD on a Linux system instead. I just don't know exactly what hardware and software I should use in my situation.
Thanks for reading, I would love some advice and/or experiences :)
Hi there,
what would be the most reasonable core-pinning set-up for a mobile hybrid CPU like my Intel Ultra 155H?
This is the topography of my CPU:
Output of "lstopo", Indexes: physical
As you can see, my CPU features six performance cores, eight efficiency cores and two low-power cores.
Now this is how I made use of the performance cores for my VM:
Current CPU-related config of my Gaming VM
As you can see, I've pinned performance cores 2-5 and set core 1 as emulatorpin and reserved core 6 for IO threads.
I'm wondering if this is the most efficient set-up there is. From what I gathered, it is best to leave the efficiency cores out of the equation altogether, so I tried to make the most of the six performance cores.
I'd be happy for any advice!
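For reference, the pinning described above would look roughly like this in the domain XML, assuming the common layout where the six P-cores occupy host CPUs 0-11 as adjacent hyperthread pairs (verify against your lstopo output before copying; the cpuset numbers here are an assumption):

```xml
<vcpu placement="static">8</vcpu>
<iothreads>1</iothreads>
<cputune>
  <!-- guest vCPUs on the thread pairs of P-cores 2-5 (host CPUs 2-9) -->
  <vcpupin vcpu="0" cpuset="2"/>
  <vcpupin vcpu="1" cpuset="3"/>
  <vcpupin vcpu="2" cpuset="4"/>
  <vcpupin vcpu="3" cpuset="5"/>
  <vcpupin vcpu="4" cpuset="6"/>
  <vcpupin vcpu="5" cpuset="7"/>
  <vcpupin vcpu="6" cpuset="8"/>
  <vcpupin vcpu="7" cpuset="9"/>
  <emulatorpin cpuset="0-1"/>                 <!-- P-core 1 -->
  <iothreadpin iothread="1" cpuset="10-11"/>  <!-- P-core 6; requires <iothreads> -->
</cputune>
```

Whether the E-cores are worth adding back (e.g. for the emulator threads) is workload-dependent; keeping the guest entirely on P-cores, as above, is the usual starting point.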
Hi there, I have toyed around with single GPU passthrough in the past, but I always had problems and didn't really like that my drivers would get shut down. A bit about my setup:
-CPU: 5800X
-RAM: 16 GB
-Mainboard: Gigabyte Aorus something something
-GPU: AMD Sapphire 7900 GRE
I have a GT 710 lying around that I currently have no use for. Because of my monitor setup, I would have to have everything connected to my 7900 GRE's ports (3x 1440p monitors). Would I be able to run the OS on the GT 710 while all the monitors are connected to the 7900 GRE, and still pass through the 7900 GRE?
First, apologies if this is not the most appropriate place to ask this. I want to setup VFIO and I'll do that on my internal SSD first, but eventually if all is working well, I'll get an external SSD with more storage and move it there. Is that an easy thing to do?
What's the current status on the following games?
Do they work? Do they ban you?
So every time I power the VM on, I notice disk activity, even when I'm not doing anything in the Windows VM.
Sometimes it's sporadic, sometimes massive.
iotop shows:
The XML:
<domain type="kvm">
<name>Win11Pro</name>
<uuid>8714edc2-23f8-4653-a43e-70f769dbd60b</uuid>
<metadata>
<libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
<libosinfo:os id="http://microsoft.com/win/11"/>
</libosinfo:libosinfo>
</metadata>
<memory unit="KiB">33554432</memory>
<currentMemory unit="KiB">33554432</currentMemory>
<vcpu placement="static">8</vcpu>
<os firmware="efi">
<type arch="x86_64" machine="pc-q35-6.2">hvm</type>
<boot dev="hd"/>
</os>
<features>
<acpi/>
<apic/>
<hyperv mode="custom">
<relaxed state="on"/>
<vapic state="on"/>
<spinlocks state="on" retries="8191"/>
<vpindex state="on"/>
<runtime state="on"/>
<synic state="on"/>
<stimer state="on">
<direct state="on"/>
</stimer>
<reset state="on"/>
<vendor_id state="on" value="KVM Hv"/>
<frequencies state="on"/>
<reenlightenment state="on"/>
<tlbflush state="on"/>
<ipi state="on"/>
<evmcs state="on"/>
</hyperv>
<vmport state="off"/>
</features>
<cpu mode="host-passthrough" check="none" migratable="on">
<topology sockets="1" dies="1" cores="4" threads="2"/>
</cpu>
<clock offset="localtime">
<timer name="rtc" tickpolicy="catchup"/>
<timer name="pit" tickpolicy="delay"/>
<timer name="hpet" present="no"/>
<timer name="hypervclock" present="yes"/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled="no"/>
<suspend-to-disk enabled="no"/>
</pm>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type="file" device="disk">
<driver name="qemu" type="qcow2" cache="none" discard="unmap"/>
<source file="/var/lib/libvirt/images/Win11Pro.qcow2"/>
<target dev="vda" bus="virtio"/>
<address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
</disk>
<controller type="usb" index="0" model="qemu-xhci" ports="15">
<address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
</controller>
<controller type="pci" index="0" model="pcie-root"/>
<controller type="pci" index="1" model="pcie-root-port">
<model name="pcie-root-port"/>
<target chassis="1" port="0x10"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
</controller>
<controller type="pci" index="2" model="pcie-root-port">
<model name="pcie-root-port"/>
<target chassis="2" port="0x11"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
</controller>
<controller type="pci" index="3" model="pcie-root-port">
<model name="pcie-root-port"/>
<target chassis="3" port="0x12"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
</controller>
<controller type="pci" index="4" model="pcie-root-port">
<model name="pcie-root-port"/>
<target chassis="4" port="0x13"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
</controller>
<controller type="pci" index="5" model="pcie-root-port">
<model name="pcie-root-port"/>
<target chassis="5" port="0x14"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
</controller>
<controller type="pci" index="6" model="pcie-root-port">
<model name="pcie-root-port"/>
<target chassis="6" port="0x15"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
</controller>
<controller type="pci" index="7" model="pcie-root-port">
<model name="pcie-root-port"/>
<target chassis="7" port="0x16"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
</controller>
<controller type="pci" index="8" model="pcie-root-port">
<model name="pcie-root-port"/>
<target chassis="8" port="0x17"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
</controller>
<controller type="pci" index="9" model="pcie-root-port">
<model name="pcie-root-port"/>
<target chassis="9" port="0x18"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
</controller>
<controller type="pci" index="10" model="pcie-root-port">
<model name="pcie-root-port"/>
<target chassis="10" port="0x19"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
</controller>
<controller type="pci" index="11" model="pcie-root-port">
<model name="pcie-root-port"/>
<target chassis="11" port="0x1a"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
</controller>
<controller type="pci" index="12" model="pcie-root-port">
<model name="pcie-root-port"/>
<target chassis="12" port="0x1b"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
</controller>
<controller type="pci" index="13" model="pcie-root-port">
<model name="pcie-root-port"/>
<target chassis="13" port="0x1c"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
</controller>
<controller type="pci" index="14" model="pcie-root-port">
<model name="pcie-root-port"/>
<target chassis="14" port="0x1d"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
</controller>
<controller type="sata" index="0">
<address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
</controller>
<controller type="virtio-serial" index="0">
<address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
</controller>
<interface type="network">
<mac address="52:54:00:f6:0b:05"/>
<source network="default"/>
<model type="virtio"/>
<address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
</interface>
<serial type="pty">
<target type="isa-serial" port="0">
<model name="isa-serial"/>
</target>
</serial>
<console type="pty">
<target type="serial" port="0"/>
</console>
<channel type="spicevmc">
<target type="virtio" name="com.redhat.spice.0"/>
<address type="virtio-serial" controller="0" bus="0" port="1"/>
</channel>
<channel type="unix">
<target type="virtio" name="org.qemu.guest_agent.0"/>
<address type="virtio-serial" controller="0" bus="0" port="2"/>
</channel>
<input type="mouse" bus="ps2"/>
<input type="keyboard" bus="ps2"/>
<input type="tablet" bus="virtio">
<address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
</input>
<tpm model="tpm-crb">
<backend type="emulator" version="2.0"/>
</tpm>
<graphics type="spice">
<listen type="none"/>
<image compression="off"/>
<gl enable="no"/>
</graphics>
<sound model="ich9">
<address type="pci" domain="0x0000" bus="0x00" slot="0x1b" function="0x0"/>
</sound>
<audio id="1" type="spice"/>
<video>
<model type="virtio" heads="1" primary="yes">
<acceleration accel3d="no"/>
<resolution x="1920" y="1080"/>
</model>
<address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
</video>
<hostdev mode="subsystem" type="pci" managed="yes">
<source>
<address domain="0x0000" bus="0x67" slot="0x00" function="0x0"/>
</source>
<address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
</hostdev>
<hostdev mode="subsystem" type="pci" managed="yes">
<source>
<address domain="0x0000" bus="0x67" slot="0x00" function="0x1"/>
</source>
<address type="pci" domain="0x0000" bus="0x08" slot="0x00" function="0x0"/>
</hostdev>
<hostdev mode="subsystem" type="pci" managed="yes">
<source>
<address domain="0x0000" bus="0x67" slot="0x00" function="0x2"/>
</source>
<address type="pci" domain="0x0000" bus="0x09" slot="0x00" function="0x0"/>
</hostdev>
<hostdev mode="subsystem" type="pci" managed="yes">
<source>
<address domain="0x0000" bus="0x67" slot="0x00" function="0x3"/>
</source>
<address type="pci" domain="0x0000" bus="0x0a" slot="0x00" function="0x0"/>
</hostdev>
<redirdev bus="usb" type="spicevmc">
<address type="usb" bus="0" port="1"/>
</redirdev>
<redirdev bus="usb" type="spicevmc">
<address type="usb" bus="0" port="2"/>
</redirdev>
<memballoon model="virtio">
<address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
</memballoon>
</devices>
</domain>
Hello.
I'm trying to boot Solaris 11.4 on FreeBSD using qemu. These are the parameters that I've used:
qemu-system-x86_64 -m 8G -smp cpus=4 -machine pc \
  -global PIIX4_PM.disable_s3=1 \
  -global PIIX4_PM.disable_s4=1 \
  -vga vmware \
  -netdev tap,id=mynet0,ifname=tap3,script=no,downscript=no \
  -device e1000,netdev=mynet0,mac=52:55:00:d1:55:01 \
  -usb -device usb-mouse,bus=usb-bus.0 -k it \
  -drive id=cdrom0,if=none,format=raw,readonly=on,file=/mnt/zroot2/zroot2/OS/ISO/Unix/Solaris/sol-11_4-text-x86.iso \
  -device virtio-scsi-pci,id=scsi0 \
  -device scsi-cd,bus=scsi0.0,drive=cdrom0 \
  -rtc base=localtime \
  -drive if=pflash,format=raw,file=/usr/local/share/edk2-qemu/QEMU_UEFI_CODE-x86_64.fd
This is what happens:
Can someone tell me why the ISO image is not detected? Thanks.
When I'm running Windows on bare metal everything works (overlay, screen recording), but when I'm in the VM, Adrenalin behaves strangely, exactly as described in this topic:
https://www.reddit.com/r/VFIO/comments/uq6dpb/amd_software_behaves_strangely_if_it_detects_vm/
I added the Arch wiki parameters, but it didn't work:
https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF#Video_card_driver_virtualisation_detection
<kvm>
  <hidden state='on'/>
</kvm>
<vendor_id state='on' value='randomid'/>
Hello. Recently, I commissioned a modchip install for my Nintendo Switch. I would like to stream my Windows 11 gaming VM to it via Sunshine/Moonlight.
My host OS is Manjaro. I have a GPU passed through to the Windows VM, configured with libvirt/QEMU/KVM.
Currently the VM accesses the internet through the default virtual NAT. I would prefer to more or less keep it this way.
I'm aware the common solution is to create a bridge between the host and the guest, and have the guest show up on the physical (non-virtualized) network as just another device.
However, I wish to only forward the specific ports (47989, 47990, etc.) that sunshine/moonlight uses, so that my Switch can connect.
My struggle is with the how.
Unfortunately, I'm not getting much direction from the Arch Wiki or the libvirt wiki.
I've come across suggestions to use tailscale or zerotier, but I'd prefer not to install/use any additional/unnecessary programs/services if I can help it.
This discussion on Stack Overflow seems to be the closest to what I'm trying to achieve; I'm just not sure what to do with it.
Am I correct in assuming that after enabling forwarding in sysctl.conf, I would add the above, with my relevant parameters, to the iptables.rules file? And that's it?
Admittedly, I am fairly new to Linux and PC builds in general, so I apologize if this is a dumb question. I'm just not finding many resources on this specific topic to see a solid pattern.
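For what it's worth, a sketch of the iptables side, assuming libvirt's default NAT network and a guest at 192.168.122.100 (check the real address with `virsh domifaddr <vm>`). Sunshine's default ports are assumed here, so adjust if you changed them:

```shell
GUEST=192.168.122.100
# TCP ports Sunshine listens on (HTTPS, HTTP, Web UI, RTSP)
for p in 47984 47989 47990 48010; do
  iptables -t nat -A PREROUTING -p tcp --dport "$p" -j DNAT --to-destination "$GUEST:$p"
  iptables -I FORWARD -d "$GUEST" -p tcp --dport "$p" -j ACCEPT
done
# UDP ports for the video/control/audio streams
iptables -t nat -A PREROUTING -p udp --dport 47998:48000 -j DNAT --to-destination "$GUEST"
iptables -I FORWARD -d "$GUEST" -p udp --dport 47998:48000 -j ACCEPT
```

The FORWARD rules are inserted (-I) rather than appended because libvirt adds its own REJECT rules for the NAT network; a libvirt network hook script is the cleaner way to have these reapplied automatically.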
Is it possible to pass through an M40 to a Windows VM and get video output using Looking Glass?
I am trying to use OSX-KVM on a tablet computer with an AMD APU (Z1 Extreme), which has a 7xxx-series-equivalent AMD GPU (or 7xxM).
MacOS obviously has no native drivers for any RDNA3 card, so I was hoping there might be some way to map the calls between some driver on MacOS and my APU.
Has anyone done anything like this? If so, what steps are needed? Or is this just literally impossible right now without additional driver support?
I've got the VM booting just fine, I started looking into VFIO and it seems like it might work if the mapping is right, but this is a bit outside of my wheelhouse
I applied the rdtsc patch to my kernel, in which I adjusted the function to the base speed of my CPU, but it only works temporarily. If I wait out the GetTickCount() of 12 minutes in PAFish and then re-execute the program, it'll detect the VM exit. I aimed for a base speed of 0.2 GHz (3.6/18); should I adjust it further? I've already tested my adjusted QEMU against a couple of BattlEye games and it works fine, but I fear there are others (such as Destiny 2) that use this single detection vector for bans, as it's already well known that BattlEye does test for this.