Re: NTFS-3G slow on large file
# ntfsinfo -vF alex.img /dev/sdb1 | grep '0x.*0x' | wc
10389 31168 317666
So there are over 10000 fragments, which is the cause of the bad throughput. Note that recent versions of ntfs-3g have been improved with regard to creating such fragmented files.
The image file was created with ntfsclone --rescue from a disk with read errors to an ext3 disk. Could this be the cause of the many fragments?
Probably yes. You must have created the image without the --save-image option, so the image is a sparse file which only contains the clusters in use, and there is at least one fragment per run of consecutive used clusters. The only way to avoid this is to fill the holes with zeroes (by copying with cp and its option --sparse=never), but that is likely to require more space on the target device, which in turn is another cause of fragmentation.
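A minimal sketch of that hole-filling copy (alex-full.img is just an illustrative output name; make sure the target filesystem has room for the full apparent size first):
# cp --sparse=never alex.img alex-full.img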
If you saved your original partition with the --save-image option and you are now doing a --restore-image to a regular file, you also get a sparse file, with the same consequences.
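For reference, a typical save/restore pair looks like this (the device and file names are only examples; restoring to a regular file like this is exactly what produces a sparse image):
# ntfsclone --save-image --output backup.img /dev/sda1
# ntfsclone --restore-image --overwrite alex.img backup.img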
You can check whether alex.img is sparse by comparing the outputs of:
# du alex.img
# du --apparent-size alex.img
If the apparent size is much bigger than the actual disk usage, the file is sparse.
You should probably extract the files from alex.img and work on those copies, keeping the big sparse image only as a source for its contents.
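Assuming the image is a raw NTFS image (not one made with --save-image), one way to extract the files is to mount it read-only through a loop device; the mount point and destination path below are only examples:
# mkdir -p /mnt/image
# mount -t ntfs-3g -o ro,loop alex.img /mnt/image
# cp -a /mnt/image/. /path/to/destination/
# umount /mnt/image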