...so you'd rather implement your own congestion control, your own windowing, your own reordering and reassembly, rather than just using TCP and letting the kernel do all that for you? To send a single file? And hopefully this self-made congestion control cooperates with the rest of the traffic on your connection.
It doesn't sound to me like your use case would buy you much over TCP with selective ACK. Either way you have to resend dropped packets and ultimately reorder things.
For a use case like HTTP/2 with multiple "channels" I could see a reason to avoid head-of-line blocking (though I'd rather just use SCTP), but for a single file?
Actually it's not the ordering that's the problem per se, it's "packets in flight": each packet needs to be accounted for, and only a certain number of packets can be unaccounted for ("in flight") before TCP starts throttling back.
This means the higher the latency, the fewer packets per second you can transmit, regardless of how fast the link itself is.
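To put rough numbers on that: the ceiling for a single connection is the bandwidth-delay product, window size divided by round-trip time. A quick sketch (the window and RTT figures below are illustrative, not from this thread):

```python
# Throughput ceiling of one TCP connection from the window size and RTT
# (the bandwidth-delay product). Illustrative numbers, not measurements.
def max_throughput_mbps(window_bytes: float, rtt_seconds: float) -> float:
    """Max sustainable throughput in Mbit/s for a single connection."""
    return window_bytes * 8 / rtt_seconds / 1e6

# A classic 64 KiB window over a 150 ms long-haul path:
print(max_throughput_mbps(64 * 1024, 0.150))  # ~3.5 Mbit/s, no matter how fat the pipe
```

Window scaling raises that ceiling, but a single loss event still knocks the congestion window down, so long fat pipes stay hard to fill with one connection.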
The problem is that making a reliable transport protocol that's also fast, efficient, and plays nicely on a congested network is really quite hard. (This is why Aspera can charge as much as they do for what is effectively a thin wrapper around scp.)
There are a couple of ways to get fast throughput over high-latency/lossy links; the easiest is to use multiple connections. I wrote a Python library that does just that. Between London and San Francisco you can expect about 12 megabits a second on one connection (up to about 25 burst); with 20 concurrent connections I could get just over 150 megabits a second (I hit VPN limits at that point).
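The multi-connection trick can be sketched in a few lines (this is a minimal illustration, not the library mentioned above): split the file into byte ranges, pull each range over its own connection in parallel, and reassemble in order. `fetch_range` here is a stand-in for whatever per-connection transfer you use, e.g. HTTP Range requests.

```python
# Sketch: parallel ranged download over N connections, reassembled in order.
# `fetch_range(start, end)` is a placeholder for a real per-connection fetch.
from concurrent.futures import ThreadPoolExecutor

def split_ranges(size: int, parts: int):
    """Divide [0, size) into up to `parts` contiguous (start, end) ranges."""
    step = -(-size // parts)  # ceiling division
    return [(i, min(i + step, size)) for i in range(0, size, step)]

def parallel_fetch(fetch_range, size: int, connections: int = 20) -> bytes:
    """Fetch all ranges concurrently and join them back in order."""
    ranges = split_ranges(size, connections)
    with ThreadPoolExecutor(max_workers=connections) as pool:
        chunks = list(pool.map(lambda r: fetch_range(*r), ranges))
    return b"".join(chunks)

# Demo with an in-memory "remote file" standing in for real connections:
data = bytes(range(256)) * 100
assert parallel_fetch(lambda s, e: data[s:e], len(data)) == data
```

Each connection gets its own congestion window, so N streams give you roughly N times the single-stream ceiling until you hit the actual link (or VPN) limit.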