Volume being added during failover?

  • I am working on a POC using Zerto to migrate Windows virtual machines from a VMware on-prem environment to Azure. I am running into a couple of issues with volumes that I was wondering if anyone else has seen.

    The first issue is a new volume labelled “Temporary Storage”, mounted as drive D:, that appears when the machine is failed over to Azure. I am wondering what this volume is for and why it shows up. If I fail back to on-prem the volume is removed. We have lots of machines that already have a drive D: mounted, so this is obviously going to be a conflict.

    The second issue is the mounting of volumes beyond the boot volume and the newly created “Temporary Storage”. A good example is a SQL Server in my environment: the way we configure them, we have separate volumes for SQL data, masterDB, tempDB, and the pagefile. None of those volumes are mounted when you fail the machine over to Azure. The disks are there, they are just offline; to bring them online I have to do it manually with diskpart or Disk Management. I could script it as well (roughly the sketch below), but it makes no sense that they would not be mounted with the original drive letters like they are on-prem. If I fail a multi-disk machine back to on-prem, the drives are all mounted normally.
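
    If I do end up scripting it, it would be something along these lines as a diskpart script (the disk numbers are just placeholders for whichever data disks show as offline in "list disk"):

        rem bring-disks-online.txt -- run as: diskpart /s bring-disks-online.txt
        rem run "list disk" interactively first to confirm which disks are offline
        select disk 1
        attributes disk clear readonly
        online disk
        select disk 2
        attributes disk clear readonly
        online disk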


    Has anyone run into this? If so, what did you do to fix it?

    Hi Jim,

    The disk that is getting added is part of the Azure platform – I believe it is called temporary storage and is used by Azure to store temp data within the VM – I think this explains it pretty well:

    Azure VM’s and their Temporary Storage
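
    If the D: letter clashes with one of your existing data disks, one option is to reassign the temporary disk to a different letter with diskpart. This is just a rough sketch, and note that the temporary disk usually hosts the pagefile, so the pagefile has to be moved to another drive and the VM rebooted before the letter can be changed:

        rem run "list volume" first to confirm which volume is the temporary disk
        select volume d
        assign letter=t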

    As for the disks not coming online, the only thing that comes to mind is this KB:

    VPG0043 – The Microsoft default SAN policy might cause VM ‘{VM_name}’ (VPG ‘{VPG_name}’) volumes to become offline upon recovery
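
    Worth reading the KB itself, but from memory the fix boils down to changing the default SAN policy inside the guest so that newly attached disks come online automatically, something along these lines in an elevated diskpart on the protected VM before failover:

        rem show the current policy (the default is often Offline Shared)
        san
        rem bring all newly discovered disks online automatically
        san policy=OnlineAll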

    If this doesn’t help I’d consider logging a support request.

    Kind Regards

    Chris

