Just FWIW: I have been experimenting with imaging flash, using R-Studio, ReclaiMe Pro and UFS Explorer, the ones I consider ‘the big three’. I don’t know how well this extrapolates to ACE DE. In all experiments the USB Stabilizer was used, with the same time-out (300 ms) and reset option (power-cycle) throughout. Speed optimization was set to standard.
What’s the big deal? The big deal is that we’ll see more and more flash-based drives with read instabilities that we will need to image/clone somehow. The situation is something of a catch-22: if the drive becomes unresponsive, a power cycle is the only thing that can wake it up, and yet each power cycle brings the device one step closer to death.
EDIT: I was informed by R-Studio that you can prevent R-Studio from accessing drives ‘excessively’ using a safe-mode switch. I will update this post after I have tried that.
R-Studio was a disaster; I am not even going to detail all of it. It already starts with detection: with the unstable flash drive connected over USB via Stabilizer, R-Studio is unresponsive for minutes during the drive discovery phase (I don’t know what it is trying to do), resulting in multiple power cycles by the USB Stabilizer. Its dialogs during imaging suggest you can modify some settings on the fly, but doing so causes the interface to become unresponsive. At one point Windows blue-screened; I can’t tell whether that was due to Stabilizer or R-Studio.
With ReclaiMe you cannot fine-tune the imaging options precisely enough, and a proper disk map is missing (I will get to that when I get to UFS Explorer). The minimum large block size is too large. Skip cannot be precisely configured either: the minimum is too large, and it is specified in MB rather than sectors, which I fail to see the logic in. The optimal values I later determined in UFS Explorer simply cannot be applied in ReclaiMe because the minimum values don’t allow for it.
Otherwise ReclaiMe behaved stably, and with some adjustments to the imaging module I am convinced it would be on par with UFS Explorer with regard to imaging flash-based drives. By adjustments I mean: allow smaller large block sizes (128 KB was way too large in this experiment); specify skip size in sectors rather than MB and allow smaller values (in this experiment 24 sectors was the best solution); and provide a true one-block-per-LBA-sector disk map, which helps in determining these optimal values.
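To make the parameters concrete, here is a minimal sketch (not any tool’s actual code) of the first-pass strategy these settings describe: read in large blocks, fall back to single-sector reads inside a failed block, and after hitting a bad sector skip a small, sector-granular distance. `read_sectors` is a hypothetical driver callback; the 64 KB block and 24-sector skip are just the values discussed above.

```python
SECTOR = 512
LARGE_BLOCK = 64 * 1024   # smaller than ReclaiMe's 128 KB minimum
SKIP_SECTORS = 24         # sector-granular skip, as found optimal here

def image_pass(read_sectors, total_sectors):
    """First imaging pass. read_sectors(lba, count) is a hypothetical
    callback that returns data or raises IOError on a read failure.
    Returns a per-sector disk map: True = read OK, False = read error,
    absent = skipped (left for a later pass)."""
    disk_map = {}
    step = LARGE_BLOCK // SECTOR
    lba = 0
    while lba < total_sectors:
        count = min(step, total_sectors - lba)
        try:
            read_sectors(lba, count)          # fast path: one large read
            for s in range(lba, lba + count):
                disk_map[s] = True
            lba += count
        except IOError:
            # large block failed: retry sector by sector to locate the error
            end = lba + count
            while lba < end:
                try:
                    read_sectors(lba, 1)
                    disk_map[lba] = True
                    lba += 1
                except IOError:
                    disk_map[lba] = False
                    lba += SKIP_SECTORS       # jump past the bad area
                    break                     # resume large-block reads
    return disk_map
```

The point of the small, sector-granular skip is visible in the map it produces: only a narrow window after each bad sector is deferred, instead of a whole multi-MB region.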
With UFS Explorer it is best to use standard Windows IO; it is more stable than the other options.
The disk map is extremely useful because each block is an actual LBA address. Using the disk map I was able to figure out the ideal block size and the ideal number of sectors to skip after an error. That dramatically reduced the number of errors (I could do multiple reads before an error occurred), reduced the situations where the USB Stabilizer was forced to power cycle, and reduced the number of blocks skipped. The result: roughly 4x the speed and the amount of data actually read during the first pass.
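This tuning step can be sketched too. Assuming a per-sector map from a first pass (True = OK, False = error), one plausible way to derive the skip value is to measure the lengths of consecutive bad-sector runs and take the most common one; the helper names below are my own, purely illustrative.

```python
from collections import Counter

def bad_run_lengths(sector_map):
    """Lengths of consecutive bad-sector runs in a per-LBA map
    (list of booleans, True = read OK)."""
    runs, run = [], 0
    for ok in sector_map:
        if not ok:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    return runs

def suggest_skip(sector_map):
    """Most common bad-run length, as a candidate skip-after-error
    value for the next pass (0 if the map has no errors)."""
    runs = bad_run_lengths(sector_map)
    if not runs:
        return 0
    return Counter(runs).most_common(1)[0][0]
```

If the bad areas on a drive tend to span, say, 24 sectors, this is exactly the pattern a true one-block-per-LBA-sector disk map makes visible, and the skip value falls straight out of it.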