
Failover Test for long period

  • Hi there,

    I am currently part of a team attempting to fail over a number of servers in various VPGs and test them for at least 48 hours (these are requirements set by our customers).
    We have 134 VMs in the Zerto dashboard and are testing 104 of them as part of the failover test.
    The issue is that, as time passes, the dashboard for the VPGs slowly goes red everywhere. By the end we have bitmap syncing occurring on every VPG that is not part of the failover test, which appears to slowly grind the Zerto environment to a halt.
    We have already increased some journal and scratch journal sizes to give enough space for the changes.
    I am now trying to work out what else is available to me to alleviate the bandwidth build-up. If I am reading the situation correctly (I have only been working in this department since February), the z-VRA appliances on each physical host are struggling to replicate the changes from the remaining 30 servers (every 10 minutes) while also managing the changes and tests being run on the 104 VMs that are stood up in the DR environment once the Zerto test begins. (A rough monitoring sketch to help confirm this is included at the end of this post.)

    My question is: would increasing CPU/RAM on the z-VRAs help with this issue (if I am reading the situation correctly)? I have read in the Zerto documentation that going beyond 2 cores on a z-VRA should only ever be done after consultation with a Zerto technician.
    The physical hosts we have do have headroom for the z-VRA increases, if this is a safe and sensible thing to do.

    Is anyone able to advise or suggest a different approach? (I am learning all the time)
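
    In case it helps, below is a rough sketch of how I am thinking of polling VPG state from the ZVM during the test, so we can log when the non-tested VPGs fall into bitmap sync and how the RPO drifts. It assumes the classic ZVM REST API on port 9669 (/v1/session/add and /v1/vpgs) and the usual JSON field names (VpgName, Status, SubStatus, ActualRPO); the address and credentials are placeholders, so please check all of this against your own Zerto version before relying on it.

        import time
        import requests
        import urllib3

        urllib3.disable_warnings()  # lab/self-signed certs only

        ZVM = "https://zvm.example.local:9669"   # placeholder ZVM address
        USER = "administrator@vsphere.local"     # placeholder credentials
        PASSWORD = "changeme"

        s = requests.Session()
        s.verify = False  # lab only; use proper certificates in production

        # Log in: the classic ZVM REST API returns the token
        # in the x-zerto-session response header
        login = s.post(f"{ZVM}/v1/session/add", auth=(USER, PASSWORD))
        login.raise_for_status()
        s.headers["x-zerto-session"] = login.headers["x-zerto-session"]

        # Poll every 5 minutes and log each VPG's state so we can see
        # when VPGs outside the failover test drop into bitmap sync
        while True:
            for vpg in s.get(f"{ZVM}/v1/vpgs").json():
                print(time.strftime("%Y-%m-%d %H:%M"),
                      vpg.get("VpgName"),
                      "Status:", vpg.get("Status"),
                      "SubStatus:", vpg.get("SubStatus"),
                      "ActualRPO(s):", vpg.get("ActualRPO"))
            time.sleep(300)

    The idea is just to get a timeline of which VPGs degrade first once the test starts, which should show whether the z-VRAs on particular hosts are the bottleneck.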
