/r/HyperV
All Microsoft virtualization topics covered here.
Blog spam is not permitted.
I have been doing some testing on our HA cluster for failure recovery. I observed that if I remove the power from a node, the VM in the failover cluster goes to "Unmonitored" status and it takes 4 minutes before the VM that was on the failed node moves and restarts on another node. I assume this is the default setting? Is there anything I can do to decrease the time? Thanks
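The 4-minute wait matches the default VM compute resiliency period (240 seconds) on Windows Server 2016 and later, so it likely is the default. A hedged sketch of inspecting and lowering it (the properties are real cluster settings; the 60-second value is only an example):

```powershell
# Show the current compute-resiliency settings (period is in seconds)
Get-Cluster | Format-List ResiliencyDefaultPeriod, ResiliencyLevel

# Shorten how long a VM stays "Unmonitored" before failing over
(Get-Cluster).ResiliencyDefaultPeriod = 60

# Heartbeat thresholds also affect how quickly a dead node is declared down
Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold
```

Worth testing on a lab cluster before changing production.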
I have a 32TB server formatted in RAID 5. I have so far tried setting up 2 separate Ubuntu VMs (24.04 LTS and 22.04.5 LTS) to work as a Lancache. I have pointed Hyper-V to create the VMs on the RAID array, as it's where all the space is. Both times, after getting the VMs set up with Lancache and starting to use it, my server has BSOD'd. (It had never happened before I started using the VMs.) I have the memory dump and plugged it into WinDbg, but I cannot understand what it's telling me. Can anyone look at this and tell me what I'm doing wrong?
INVALID_IO_BOOST_STATE (13c)
A thread exited with an invalid I/O boost state. This should be zero when
a thread exits.
Arguments:
Arg1: ffffc68cf4dca040, Pointer to the thread which had the invalid boost state.
Arg2: 0000000000000001, Current boost state.
Arg3: 0000000000000001
Arg4: 0000000000000000
Debugging Details:
------------------
KEY_VALUES_STRING: 1
Key : Analysis.CPU.mSec
Value: 2187
Key : Analysis.Elapsed.mSec
Value: 2209
Key : Analysis.IO.Other.Mb
Value: 0
Key : Analysis.IO.Read.Mb
Value: 1
Key : Analysis.IO.Write.Mb
Value: 0
Key : Analysis.Init.CPU.mSec
Value: 968
Key : Analysis.Init.Elapsed.mSec
Value: 7378
Key : Analysis.Memory.CommitPeak.Mb
Value: 100
Key : Analysis.Version.DbgEng
Value: 10.0.27725.1000
Key : Analysis.Version.Description
Value: 10.2408.27.01 amd64fre
Key : Analysis.Version.Ext
Value: 1.2408.27.1
Key : Bugcheck.Code.KiBugCheckData
Value: 0x13c
Key : Bugcheck.Code.LegacyAPI
Value: 0x13c
Key : Bugcheck.Code.TargetModel
Value: 0x13c
Key : Failure.Bucket
Value: 0x13C_nt!PspThreadDelete
Key : Failure.Hash
Value: {6fe88179-1572-f8e2-aeff-cd9c600ecf3a}
Key : Hypervisor.Enlightenments.Value
Value: 68669340
Key : Hypervisor.Enlightenments.ValueHex
Value: 417cf9c
Key : Hypervisor.Flags.AnyHypervisorPresent
Value: 1
Key : Hypervisor.Flags.ApicEnlightened
Value: 1
Key : Hypervisor.Flags.ApicVirtualizationAvailable
Value: 0
Key : Hypervisor.Flags.AsyncMemoryHint
Value: 0
Key : Hypervisor.Flags.CoreSchedulerRequested
Value: 0
Key : Hypervisor.Flags.CpuManager
Value: 1
Key : Hypervisor.Flags.DeprecateAutoEoi
Value: 0
Key : Hypervisor.Flags.DynamicCpuDisabled
Value: 1
Key : Hypervisor.Flags.Epf
Value: 0
Key : Hypervisor.Flags.ExtendedProcessorMasks
Value: 1
Key : Hypervisor.Flags.HardwareMbecAvailable
Value: 0
Key : Hypervisor.Flags.MaxBankNumber
Value: 0
Key : Hypervisor.Flags.MemoryZeroingControl
Value: 0
Key : Hypervisor.Flags.NoExtendedRangeFlush
Value: 0
Key : Hypervisor.Flags.NoNonArchCoreSharing
Value: 1
Key : Hypervisor.Flags.Phase0InitDone
Value: 1
Key : Hypervisor.Flags.PowerSchedulerQos
Value: 0
Key : Hypervisor.Flags.RootScheduler
Value: 0
Key : Hypervisor.Flags.SynicAvailable
Value: 1
Key : Hypervisor.Flags.UseQpcBias
Value: 0
Key : Hypervisor.Flags.Value
Value: 4722927
Key : Hypervisor.Flags.ValueHex
Value: 4810ef
Key : Hypervisor.Flags.VpAssistPage
Value: 1
Key : Hypervisor.Flags.VsmAvailable
Value: 1
Key : Hypervisor.RootFlags.AccessStats
Value: 1
Key : Hypervisor.RootFlags.CrashdumpEnlightened
Value: 1
Key : Hypervisor.RootFlags.CreateVirtualProcessor
Value: 1
Key : Hypervisor.RootFlags.DisableHyperthreading
Value: 0
Key : Hypervisor.RootFlags.HostTimelineSync
Value: 1
Key : Hypervisor.RootFlags.HypervisorDebuggingEnabled
Value: 0
Key : Hypervisor.RootFlags.IsHyperV
Value: 1
Key : Hypervisor.RootFlags.LivedumpEnlightened
Value: 1
Key : Hypervisor.RootFlags.MapDeviceInterrupt
Value: 1
Key : Hypervisor.RootFlags.MceEnlightened
Value: 1
Key : Hypervisor.RootFlags.Nested
Value: 0
Key : Hypervisor.RootFlags.StartLogicalProcessor
Value: 1
Key : Hypervisor.RootFlags.Value
Value: 1015
Key : Hypervisor.RootFlags.ValueHex
Value: 3f7
Key : SecureKernel.HalpHvciEnabled
Value: 0
Key : WER.OS.Branch
Value: vb_release
Key : WER.OS.Version
Value: 10.0.19041.1
BUGCHECK_CODE: 13c
BUGCHECK_P1: ffffc68cf4dca040
BUGCHECK_P2: 1
BUGCHECK_P3: 1
BUGCHECK_P4: 0
FILE_IN_CAB: MEMORY.DMP
FAULTING_THREAD: ffffc68cfc8e6040
BLACKBOXBSD: 1 (!blackboxbsd)
BLACKBOXNTFS: 1 (!blackboxntfs)
BLACKBOXPNP: 1 (!blackboxpnp)
BLACKBOXWINLOGON: 1
PROCESS_NAME: System
STACK_TEXT:
ffffc280`3eb17a28 fffff801`475f20b3 : 00000000`0000013c ffffc68c`f4dca040 00000000`00000001 00000000`00000001 : nt!KeBugCheckEx
ffffc280`3eb17a30 fffff801`4742d940 : ffffc68c`f4dca010 ffffc68c`f4dca010 fffff801`470d59f0 00000000`00000000 : nt!PspThreadDelete+0x203ab3
ffffc280`3eb17aa0 fffff801`4705ac67 : 00000000`00000000 00000000`00000000 fffff801`470d59f0 ffffc68c`f4dca040 : nt!ObpRemoveObjectRoutine+0x80
ffffc280`3eb17b00 fffff801`470d5a62 : 00000000`00000000 00000000`00000000 00000000`00000000 ffffc68c`f4dca498 : nt!ObfDereferenceObjectWithTag+0xc7
ffffc280`3eb17b40 fffff801`47022525 : ffffc68c`fc8e6040 ffffc68c`ed4bc2c0 ffffc68c`ed4bc2c0 ffffc68c`00000000 : nt!PspReaper+0x72
ffffc280`3eb17b70 fffff801`47129905 : ffffc68c`fc8e6040 00000000`00000080 ffffc68c`ed4c4040 00003631`746e6900 : nt!ExpWorkerThread+0x105
ffffc280`3eb17c10 fffff801`47207368 : ffffaf80`73100180 ffffc68c`fc8e6040 fffff801`471298b0 00000300`00003600 : nt!PspSystemThreadStartup+0x55
ffffc280`3eb17c60 00000000`00000000 : ffffc280`3eb18000 ffffc280`3eb12000 00000000`00000000 00000000`00000000 : nt!KiStartSystemThread+0x28
SYMBOL_NAME: nt!PspThreadDelete+203ab3
MODULE_NAME: nt
IMAGE_NAME: ntkrnlmp.exe
STACK_COMMAND: .process /r /p 0xffffc68ced4c4040; .thread 0xffffc68cfc8e6040 ; kb
BUCKET_ID_FUNC_OFFSET: 203ab3
FAILURE_BUCKET_ID: 0x13C_nt!PspThreadDelete
OS_VERSION: 10.0.19041.1
BUILDLAB_STR: vb_release
OSPLATFORM_TYPE: x64
OSNAME: Windows 10
FAILURE_ID_HASH: {6fe88179-1572-f8e2-aeff-cd9c600ecf3a}
Followup: MachineOwner
Not a particularly important question, but for your file structure, do you:
Put all your VMs in the same folders under
Volume\Virtual Hard Disks
Volume\Virtual Machines
Volume\Snapshots
Give each VM its own folder, as below:
Volume\VM1\Snapshots
Volume\VM1\Virtual Hard Disks
Volume\VM1\Virtual Machines
Volume\VM2\Snapshots
Volume\VM2\Virtual Hard Disks
Volume\VM2\Virtual Machines
Or do you do it some other way, and why?
Attempting to build a guest cluster on Hyper-V for SQL. I've created two virtual machines with standard VHDX files for the OS, and VHD Set (.vhds) files for the shared storage. Both virtual machines have the same VHD Set files attached. Proceeded through the basic VM and cluster configuration and installed SQL in active/passive. That all works great. We've migrated a couple of databases and have tested the live failover functionality.
HOWEVER, VM-level backups with Veeam have been a nightmare. These only succeed once after the VM has been restarted, and then fail every subsequent time with error 32774 stating that the file (specifically one of the VHDS files) is in use. I've been combing through the Sysinternals suite (i.e., Procmon, Handle) to see what processes have locks on these files before, during, and after the backup job, but there's nothing out of the ordinary, and nothing that hasn't already released the disk. I've been working with Veeam for weeks on this, and may need to involve MSFT as well. Veeam may just be returning the error from VMMS, which is present in the Hyper-V host logs.
I have also noticed that when I shut down the virtual machines (only these two with the VHDS files), it fails to merge. Inside the folder containing the VHDS files there are a lot of avhdx files and some others. I assume Veeam is continually generating these on the one backup job that's successful after a reboot, and then they are failing to merge.
Has anyone had success with VHD Set files and guest clusters?
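Two hedged checks worth running on the hosts (the cmdlets and the SupportPersistentReservations property are real; the VM names are hypothetical): confirm the shared disks are attached with persistent reservations on both nodes, and see whether orphaned checkpoints are accumulating:

```powershell
# Shared VHD Sets must be attached with persistent reservations on both guests
Get-VMHardDiskDrive -VMName SQL1, SQL2 |
    Select-Object VMName, Path, SupportPersistentReservations

# List any leftover checkpoint chains from failed backup merges
Get-VM SQL1, SQL2 | Get-VMSnapshot | Select-Object VMName, Name, CreationTime
```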
Hello,
I have 2 Hyper-V hosts, each of which has 4 internal VM switches and several VMs.
I would like to install the RAS role on a VM and connect all 4 VM switches to this VM so it works as a gateway for the VMs connected to those switches. If a VM wants to reach a VM on the other host, the RAS VM would route this over one additional external switch to the right VM on the other host.
Is something like that possible?
Thank you for your answers.
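A VM can have one network adapter per switch, so the topology is possible in principle. A minimal sketch (the cmdlet is real; the switch and VM names are hypothetical) of attaching the gateway VM to each internal switch plus the external one:

```powershell
$gw = "RAS-GW"   # hypothetical gateway VM name

# One adapter per internal switch the gateway should route for
"Internal1","Internal2","Internal3","Internal4" | ForEach-Object {
    Add-VMNetworkAdapter -VMName $gw -SwitchName $_ -Name "To-$_"
}

# Plus one adapter on the external switch for host-to-host traffic
Add-VMNetworkAdapter -VMName $gw -SwitchName "External" -Name "Uplink"
```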
Hello guys, I do not like to come here for help, I would rather be here to help instead, but I am having a rare issue with Hyper-V.
So I have an 8-node Hyper-V cluster, and we are upgrading from 2016 to 2019, so currently this is the scenario:
node1 Windows Server 2019
node2 Windows Server 2019
node4 Windows Server 2019
node5 Windows Server 2019
node6 Windows Server 2016
node7 Windows Server 2016
node8 Windows Server 2019
node9 Windows Server 2019
Two nodes, both with 2019, are unable to migrate VMs to any hosts other than between themselves, BUT ONLY IF the VM has been started on either of them. These nodes are node4 and node8.
So, I create and start TESTVM1 on node 5, with CPU compatibility enabled for the migration. I can move it around to node 1, then to node 4, then to node 8, then back to node 5, no problem, everything is just fine.
But if I start the VM on node 4, I can only migrate it to node 8, and vice versa. Both Live Migration and Quick Migration fail, the latter returning an error about not being able to boot from the saved state.
So I took specifically this TESTVM1 and nodes 5 and 8 for troubleshooting. Node 8 was rebuilt from scratch last week to upgrade it from 2016 to 2019; node 5 works fine and was also rebuilt a few months ago.
I made sure both nodes are on the same BIOS version, because I thought this could be related to the Spectre vulnerability, but after upgrading the BIOS the issue remains the same. I also made sure the network card drivers were upgraded, and all of that.
I even created a new VM with no disk, no network card, no nothing, and the issue is exactly the same.
Events are not very helpful, just stating there was an error in the migration operation (21502, 21111, 21026).
I found the two 1840 events below in the Hyper-V Worker operational logs:
[Virtual machine 18BD25.....] onecore\vm\worker\migration\workertaskmigrationsource.cpp(711)\vmwp.exe!00007FF6DC5B819C: (caller: 00007FF6DC5BB75E) Exception(5) tid(3bbc) 80042001 CallContext:[\SourceMigrationTask]
[Virtual machine 18BD2...] onecore\vm\worker\migration\workertaskmigrationsource.cpp(281)\vmwp.exe!00007FF6DC5BB77E: (caller: 00007FF6DC5B90AD) Exception(6) tid(3bbc) 80042001 CallContext:[\SourceMigrationTask]
On FailOverClustering log I found event 1252 with error '0x310032'.
In clusterlog, we get errors 0x80048016 and 2147778582.
Compare-VM just states the same as event log 21026: there was an error in the migration operation.
So, all the error codes found seem to say that the VM is in a state that rejects live migration, but I cannot figure out what is going on.
After troubleshooting with my team, I asked Copilot, ChatGPT, Google, Bing, found other Reddit posts and Microsoft Learn posts with similar issues, read documentation, you name it, but I cannot find a solution.
Wow, this was long; I hope I explained myself properly. Hopefully someone can throw out some ideas! Thanks.
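Compare-VM may give more detail than event 21026 does on its own. A hedged sketch (the cmdlet is real; the VM and node names are taken from the post) of dry-running the migration from the source node and dumping the incompatibility list:

```powershell
# Dry-run the migration and list each incompatibility Hyper-V finds
$report = Compare-VM -Name "TESTVM1" -DestinationHost "node5"
$report.Incompatibilities | Select-Object MessageId, Message
```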
I really need some help with this. I've tried so many different things to get a Win7 machine running as a VM, and it always ends up with Win7 starting to boot and then a blue screen. Can you guys give me any pointers? I've already sunk at least 10 hours into this. Any hints help. Thank you.
Haven't been able to get Hyper-V to work, so I thought I would delete the drivers and reinstall them, but then I realized after deleting two of them that it might not be easy to get them back.
The two still installed are these: Microsoft Hyper-V Virtual Machine Bus Provider and Microsoft Hyper-V Virtualization Infrastructure Driver.
What other ones do I need, and how do I go about getting them back?
[Resolved]
Hi everyone,
I have an old custom desktop that I built with an Asus P6TD Deluxe, an i7 960, and 16 GB of RAM. I am using it now to create labs and learn the various OSs we have available in the market. The issue I am coming across is that, due to the age of the CPU, I can't use VMware. I learned Hyper-V is usable on my Windows 10 Pro OS. I got Windows Server 2025 working with the Generation 1 method, but I was wondering if the hardware capabilities of the desktop are limiting it from running Hyper-V using Generation 2. When I attempt to use Generation 2, I keep getting "no ISO file found" even though it was mounted as a DVD in the settings.
If it is due to the hardware then great, but if someone might know how to resolve this problem, I would greatly appreciate the help.
Edit: It seems the ISO was not being seen from the DVD when configured through the wizard prompts. The solution was to create a VM without any ISO mounted, then edit the VM and add a DVD drive with the ISO attached.
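The workaround in the edit can also be scripted. A minimal sketch (the cmdlets are real; the VM name and ISO path are hypothetical) of creating a Gen 2 VM with no media, then attaching the DVD and making it the first boot device:

```powershell
$vmName = "Lab2025"   # hypothetical VM name
New-VM -Name $vmName -Generation 2 -MemoryStartupBytes 4GB -NoVHD

# Attach the ISO afterwards, as the edit describes
Add-VMDvdDrive -VMName $vmName -Path "C:\ISOs\WS2025.iso"

# Make the DVD the first boot device in the Gen 2 firmware
Set-VMFirmware -VMName $vmName -FirstBootDevice (Get-VMDvdDrive -VMName $vmName)
```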
Need some assistance here with some configuration constraints.
Requirements are:
Hyper-V cluster using local storage (HCI), 2 or 3 nodes
S2D for storage (cannot use StarWind; only native Microsoft)
The only NICs available are 2-port 10/25Gb with no RDMA/iWARP capabilities
The NICs are rated in the server hardware list as:
Compute (Standard)
Management
Storage (Premium)
Currently these are configured using SET and a vNIC for management, as standalone nodes.
It seems the following are the high-level items needed?
Anything else, or any words of wisdom... like "don't do S2D because the hardware is not certified" :)
thanks in advance!
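For reference, a minimal sketch of the SET-plus-management-vNIC layout described above (the cmdlets are real; the adapter and switch names are hypothetical):

```powershell
# Switch Embedded Teaming over both 10/25Gb ports
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Dedicated host vNICs for management and (non-RDMA) storage traffic
Add-VMNetworkAdapter -ManagementOS -Name "Mgmt"    -SwitchName "SETswitch"
Add-VMNetworkAdapter -ManagementOS -Name "Storage" -SwitchName "SETswitch"
```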
I read (old, over-10-year-old) blog posts and similar about how VMware supports memory overcommit while Hyper-V does not.
Has this changed recently?
(I know the Dynamic Memory feature, but it is not the same)
What's good everyone. I've been stressing myself out for weeks trying to figure out the best route to go. I have a gaming PC I kind of went overkill with (10900K, 64 GB RAM, RTX 3080 hybrid). I wanted to make it so I could use it for multiple things, like playing my retro collection and running my Plex server on it. But then I stopped wanting to use the PC for what I built it for, and ended up making a second PC for no reason; all it does is torrent, host Plex, and host a 7 Days to Die server for me and my wife. I think it's time to lose the second PC and move most of it to my main PC. My question is: what's a good number of CPU cores and how much RAM to allocate to the VM? Most people tend to have a lot of VMs, but I'm simply looking to have just one. I want it to be able to handle remote gaming (so as not to disturb my screens) and to host my Plex server. I was thinking maybe 8 virtual cores and roughly 20 GB of RAM. I have an A2000 I can pass through to the VM for gaming. What would you guys do in this situation for best performance?
Hello everyone,
We recently purchased a couple of HP ProLiant Gen10 servers to replace our aging ones. For budget reasons, and given that the number of VMs onsite has shrunk, it was decided to only purchase servers and not renew the storage bay.
Our goal is to move from our current vCenter infrastructure (1 physical vCenter + 2 ESXis + 1 HP P2000 SAN) to a Hyper-V failover cluster.
The configuration we have now is that both servers have 6x500 GB SSDs, for a total of 3 TB each, set up in RAID 5.
For both servers, I installed Windows Server 2022 (Datacenter) on a 150 GB partition and made a second partition out of the rest of the storage to put everything related to the VMs on. My thought was to have this second partition used as an SMB share gathering both second partitions, or something similar, so the VM files could be shared and still accessible in case of a failure.
So far I have only moved 3 VMs to one Hyper-V host; the other one just has the base configuration and could be rebuilt without any issue.
It's my first time building a Hyper-V failover cluster. I still have a lot to learn, I'm getting a bit lost in all the options, and I'm starting to think not having a storage bay is going to make it hard, if not impossible, to build what we want. I'm also very unsure whether what I did is correct.
Could you guide and advise me on what to do? This is most likely too vague, but I'll be on the lookout to answer you quickly.
Hello everyone,
I am new to Hyper-V, but I wanted to test it out. I already have Proxmox and ESXi environments, but I wanted to test it anyway. When I did a benchmark on one VM, it turned out to be really bad... I am talking about something like 30% worse than Proxmox or ESXi. The settings on all VMs were 8 cores and 8 GB RAM.
Is that normal, or am I missing something?
I couldn't find a single guide on the Internet showing a step-by-step process for creating a Linux template in SCVMM. I tried using the "Create VM template" option under Library, but it throws an error on the last step saying that I cannot create a template from a VM. Coming from VMware, this is very surprising.
Hi, I have a fresh WS 2016 install with Hyper-V and the virtualization features installed.
The virtual switch is configured as External, with "Allow management operating system to share this network adapter" enabled. The host is a Dell R530 with a Broadcom NetXtreme NIC.
Every guest VM I start is missing its Ethernet adapter. A few OSs were tested with the same result.
The main NIC is set to a static IP, and the host does have internet connectivity.
Any idea what could be going wrong? This is honestly the first time I've seen such a problem in years of running Hyper-V on Windows Server or Windows Pro.
Hyper-V is going crazy... what's happening here?
New User
I've created a Windows 10 VM using Hyper-V.
Each time I start the VM, it appears that Windows is installed as if for the first time.
Obviously, this takes a bit of time.
Is this normal, and if not, how can I improve my setup?
Thanks
Hi all,
We support a small customer with about 15-20 users. Their setup has grown over time as their application landscape has changed, so there are now 4 physical HV hosts in play:
1 - Contains their VDI VMs
2 - Contains app and file server VMs
3 - RDS gateway & SQL VMs
4 - Hyper-V replica target for the server VMs (not the VDI)
All prod hosts are SSD.
There is a single GigE switch connecting all this. Overall it's working well; the server VMs from all hosts replicate to the single target host with Hyper-V Replica, and it has been solid.
While this works, they want higher speeds / less app latency. I was thinking the following might optimize performance nicely:
-Collapse the 3 servers to a single host. There are only a couple server VMs on each current host due to weak CPU resources.
-Did the math for all VM resource needs and could collapse all VMs onto a single HyperV server with dual platinum CPUs, 384GB RAM, all flash local attached storage
-Put a 10GbE NIC in this new prod host and also in the HyperV replica target host and direct connect them, use this for the replication traffic
-Get a new GigE switch with 10GbE uplinks and connect the prod HyperV host via 10GbE.
This way the VMs won't have to communicate with each other over GigE like they do now; they would all communicate at backplane speeds within the new prod host.
The replica traffic would be segmented onto a higher-performing dedicated network.
All 1GbE endpoints could funnel into the host's 10GbE NIC, vs the 1GbE in the current hosts.
Any feedback appreciated. In my mind this should help performance but I’m not a HyperV guru.
Thanks all
I'm trying to run games in a Windows 11 VM with GPU passthrough enabled, using an NVIDIA GPU. The setup recognizes the GPU in Device Manager, but when I launch Cyberpunk 2077, it opens briefly and then closes without any error messages. I've installed all necessary dependencies, including Visual C++ Redistributables, DirectX, and .NET Framework, and other games give me similar issues (e.g., FIFA). The GeForce Experience setup doesn't detect the GPU. Enhanced session mode is enabled. Does anyone know how to troubleshoot this kind of setup, or has anyone had similar experiences with GPU passthrough and gaming on a VM? Any help or tips would be appreciated!
If I start the virtual machine I get a normal looking windows start up and am able to login to windows if I simply close this dialog box:
https://ibb.co/HHx4sQT
However, if I click connect on the dialog box, then I lose the option to login. And the screen looks like this:
https://ibb.co/n79PZD9
I would like to be able to adjust my display resolution to full screen and would obviously have to click connect to do so. Does anyone have a solution for this issue?
I would like to use a VM for work purposes, ideally so that I could fullscreen the VM and essentially use it as if it were "on the metal", so to speak, with complete audio & video passthrough and acceleration, etc.
The objective being a clear separation of personal and work OS, data, security, etc., but without any slowdown or "hiccups" of otherwise using a VM. Ideally, it'd be nearly transparent, at least from an acceleration perspective.
The last time I investigated this (over a year ago), GPU passthrough was the problem. Are we any closer?
I tried some basic searching, and there is discussion about SR-IOV (?), and it sounds like Windows Server might get/have GPU acceleration now, but Windows 11 (client?) does not or will not?
I've got a Dell R650 Host at a Colo that lost all network connectivity upon Hyper-V switch creation. Can only get to the box via iDRAC. Windows Server 2022 Datacenter.
Initial creation was with AllowLBFO as they had already set up a teamed NIC.
After that failed, I deleted that team and created a SET Switch.
Colo says it is VLAN ####, so I set that on the switch, but that didn't help. Nor do I think it's necessary, as the other Hyper-V hosts are fine without it.
Have tried everything I can think of. Even tried deleting the switch and putting the regular team back and still nothing.
Have asked colo to see if traffic is exiting the box at all but that is a waiting game.
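If the upstream port really is tagged, the VLAN may need to be set on the host management vNIC rather than on the switch. A hedged sketch worth ruling out (the cmdlets are real; the vNIC name and VLAN ID are placeholders, the default management vNIC name usually matches the switch name):

```powershell
# Show which host vNICs exist and their current VLAN configuration
Get-VMNetworkAdapter -ManagementOS | Get-VMNetworkAdapterVlan

# Tag the management vNIC itself (replace 100 with the colo's VLAN ID)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SETswitch" `
    -Access -VlanId 100
```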
I'm trying to do a passthrough, but I'm getting an error that the object cannot be copied because it's in use. What should I do to make this happen? Any clues?
Here is the script that I'm trying to use:
Hey everyone, I've been facing this issue for a while now. I have 3 PCs and 1 NAS storage device (Synology DiskStation DS224+), with each PC running around 10 VMs. I am using Hyper-V and connecting them to the NAS device using an external network. For some reason, my internet speed is getting extremely slow, and my network load is very high. This is the network switch I am using: TP-Link TL-SG1008D 8-Port Gigabit Ethernet. I'm not sure what I might be doing wrong. I read an article suggesting that I should turn off:
It seemed to work for a while, but now I'm facing the same issue again; everything is getting extremely laggy, and I'm not sure what to do.
More Notes:
My home network setup:
there are 2 other people living with me who are connected to the main home network, but I didn’t include them in my diagram.
My NAS Resource Monitor:
I know some, but not a lot, about this stuff. I'm trying to create a VM that me and 4 of my friends can connect to (at different times) so that we can all play Madden franchise mode together... yes, obviously very important. Using Windows 11 as host and also as guest, I can get everything working to a reasonable degree. I used the script below to pass through my GPU (AMD RX 7900 XTX) to the guest, and I can play the game at about 30-40 fps, so workable but not great. The thing is, I also tried using VMware Workstation, and although there are gaming compatibility issues with the VMware display adapter, overall the VMware VM works a lot better. VMware is a much smoother experience, whereas Hyper-V seems to stutter a lot. I used the same configurations for both machines, listed below. I'm just wondering what I can do, if anything, to get the Hyper-V VM working better.
(1) CPU - Ryzen 3700x, 8 virtual cores
16gb fixed RAM (Guest) of 32gb (Host) system memory
110gb virtual disk (single file)
Guest Services Enabled
Checkpoints Disabled
Secure Boot and TPM enabled
VM and VHD running on separate physical hard drive (SATA SSD) from host C: drive.
Windows 11 24H2
Powershell Script used for GPU Passthrough:
$vm = "YourVM Name"   # name of the target VM
# Remove any existing GPU partition adapter so the script can be re-run safely
if (Get-VMGpuPartitionAdapter -VMName $vm -ErrorAction SilentlyContinue) {
    Remove-VMGpuPartitionAdapter -VMName $vm
}
# Let the guest control GPU cache types and reserve MMIO address space for the card
Set-VM -GuestControlledCacheTypes $true -VMName $vm
Set-VM -LowMemoryMappedIoSpace 1GB -VMName $vm
Set-VM -HighMemoryMappedIoSpace 32GB -VMName $vm
# Hand the VM a partition of the host GPU
Add-VMGpuPartitionAdapter -VMName $vm
Thanks for your help!
I manage 12 Hyper-V servers. After adding all the servers into the console for the first time, they normally reappear every time I load the console, as you would expect... except after a random amount of time, they are all deleted. From this point forward, the settings are no longer saved, so when I add all the servers back into the console, the next time I close/reopen the console or restart my machine, they are not restored.
This has happened on multiple workstations, running both Windows 10 and 11, so this problem doesn't seem to be unique to the machine I am working on.
I always run the console as admin (it doesn't work any other way, due to our company security policies).
I have tried performing the MMC cleanup action ("The files in your profile that store these console changes," yada yada); this does nothing.
Hello
In your experience, have you ever used a Dell T360 with an Intel Xeon 2436 (basic CPU), SSD disks, and 64 GB RAM with Windows Server 2022 and Hyper-V?
I want to install 2 VM based on Windows 2022 :
- DC with file server
- SQL Server with a small DB
Can this configuration work?
My best regards
Hello.
We have a PowerShell script that uses the Get-VM cmdlet. We want to switch the scheduled task running this script to run under a service account. I am unable to locate anywhere online how to set up least-privilege permissions for this service account. It does not work with the Hyper-V Administrators role, but does work when the account is added to the Administrators group on the Hyper-V server. Is there a less permissive role I can grant this account?
I have been looking at using Authorization Manager, but we do not have the InitialStore.xml file in the directory C:\ProgramData\Microsoft\Windows\Hyper-V on the server.
I created a Hyper-V virtual machine to build a reference Win 10 image. Now I want to mount the image and sysprep it. File Explorer shows a single .vhdx and several .avhdx files corresponding to each checkpoint I created along the way.
I searched for how to mount a checkpointed version of my VM. Instructions indicate
That's all well and good, but it seems a manually intensive process (especially if one has created a lot of checkpoints!). Is there a script or other method that automates the process of merging all the parent/child files? It seems an awful lot of work to do each time; surely someone has simplified it (I hope).
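For what it's worth, deleting the checkpoints triggers the merge automatically, and that can be scripted. A hedged sketch (the cmdlets are real; the VM name and disk paths are hypothetical, and this collapses all checkpoints, so copy anything you need to keep first):

```powershell
$vmName = "RefImage"   # hypothetical VM name

# Removing every checkpoint makes Hyper-V merge each .avhdx back into the .vhdx
Get-VMSnapshot -VMName $vmName | Remove-VMSnapshot

# The merge runs in the background; wait for it to finish
while ((Get-VM -Name $vmName).Status -like "*Merg*") { Start-Sleep -Seconds 5 }

# Offline alternative for a disk chain no longer attached to a VM:
# Merge-VHD -Path "D:\VMs\RefImage_abc.avhdx" -DestinationPath "D:\VMs\RefImage.vhdx"
```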