Old IFW V2.x slows down dramatically during backup
Posted: Sat Jun 11, 2022 6:24 am
I've been living with this issue for quite some time, but for years it wasn't much of a problem because I had smaller drives. In the last few years I've increased the drive sizes in some of the 4 PCs I run this on, and the problem affects all of the machines with larger drives. One OS is XP, the others are Windows 10, and the backups all go over my gigabit LAN to a Synology DS218j with two Western Digital 2.7TB SATA drives configured as RAID 0.
I've tried all the switches, and just can't get this issue to go away. It was a problem when I backed up to a USB drive on the LAN years ago, and it's a problem today backing up to a NAS.
The problem is particularly painful on a 2TB M2 SATA drive in a Win10 install running with an Intel I9-9900K processor, a gigabit Intel LAN card, and 32GB of RAM. My speeds start off at an acceptable level, but as the backup progresses, it crawls to 8.8MB/s or less! This problem also affects this same machine when it backs up a 1TB M2 SATA drive, and it affects the XP machine when it backs up its larger 500GB drive, which rules out any one PC as the problem. The backup uses VSS on the Win10 machines, and PHYLock on the XP machine (VSS refuses to work on it for whatever reason).
For example, in the graph below, I'm backing up the aforementioned 2TB M2 SATA drive on the I9-9900K Win 10 machine. The speed starts out just OK, but progressively drops as time goes on, and the backup ends up taking over 24 hours. The graph shows the progress up to 79% of the backup, with IFW showing 4 hours, 52 minutes remaining (that will increase for sure) with 18:42:30 elapsed!
Switches for this backup (mind you, I've tried all the IOBS options, pldisk:1 with pldcs:4095, and removing plmem and plcs, and this is the best set I've come up with):
/b /d:3 /uy /ui /hash /err /purge:14 /plvolf /plmem:0 /plcs:16384 /pldisk:0 /po:771 /plmwt:1 /pltr:0 /pltw:0 /comp:14 /usevss /logfile:"C:\Users\[removed]\Desktop\ifw.log" /savename:c_lastfullbackup /f:"t:\Fullback\[removed]\C\$~MM$-$~DD$-$~YYYY$"
It's not the computer, as the backup consumes only about 2% CPU. It's not the NIC, as throughput is nowhere near gigabit capacity. It's not the NAS, as that sits at about 65% CPU utilization with I/O wait generally around 2% during reads and writes. None of the drives have errors, including those in the NAS. What is causing this, or has this been fixed in V3.x of the software?
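In case it helps anyone reproduce my numbers: a simple timed write like the following is how I'd confirm the raw sequential write speed the share sustains, independent of IFW (a rough sketch; the target path and sizes are placeholders, nothing IFW-specific):

```python
import os
import time
import tempfile

def write_throughput(path, total_mb=256, chunk_mb=4):
    """Write total_mb of zeros to path in chunk_mb pieces, fsync, and return MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually left the OS cache
    elapsed = time.time() - start
    os.remove(path)  # clean up the test file
    return total_mb / elapsed

if __name__ == "__main__":
    # Point this at the NAS share (e.g. r"T:\speedtest.bin") to test the
    # network path; a local temp path just measures the source disk.
    target = os.path.join(tempfile.gettempdir(), "speedtest.bin")
    print(f"{write_throughput(target):.1f} MB/s")
```

Writing to a T: path here sustains near-gigabit rates for me, which is why I don't think the network or the NAS is the bottleneck.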