IFS connection abort handling


I noticed the following: when using IFS, a connection abort causes deletion of all already-uploaded segments in the target folder. The upload then restarts from 0%, beginning with the first segment.

There must be a better way of doing this, since 10 of X segments are already present.

Best regards,

Yes. This should not be happening. Is this in a standard or encrypted folder?

Encrypted folder on Amazon Drive.

Currently, resume of an uploaded IFS file in an encrypted folder does not work as it does in non-encrypted folders. This is due to the randomized nature of the encrypted filenames being generated, which resets when an exception is hit. When IFS is engaged, it does not find a match to resume from.

This is something we will look at in our next version of encryption.
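To illustrate why randomized names break resume, here is a purely illustrative sketch (not odrive’s actual scheme; the function names, the HMAC construction, and the per-folder key are my assumptions). If segment names are freshly random on every attempt, a retry can never match the segments already uploaded; if they are derived deterministically from the same inputs, it can.

```python
import hashlib
import hmac
import os

def random_segment_name() -> str:
    # Behavior as described above: a fresh random name per attempt,
    # so after an exception a retry never matches earlier segments.
    return os.urandom(16).hex()

def deterministic_segment_name(key: bytes, plain_name: str, index: int) -> str:
    # One possible fix: derive the obfuscated name from a per-folder key,
    # the plaintext path, and the segment index. Same inputs always yield
    # the same name, so already-uploaded segments can be skipped on resume.
    msg = f"{plain_name}:{index}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

key = b"per-folder-secret"  # hypothetical per-folder key
first_try = [deterministic_segment_name(key, "disk.img", i) for i in range(3)]
retry = [deterministic_segment_name(key, "disk.img", i) for i in range(3)]
assert first_try == retry  # a retry can match segments that already exist remotely
assert random_segment_name() != random_segment_name()  # random names never match
```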

Yes, I already noticed that.

Since you abort, delete, and start over from the beginning whenever an error occurs, you defeat the entire purpose of IFS!

I am trying to upload some filesystem images with IFS, which are quite large (10-30 GB).

Since the “timeout” issue while uploading large files is still an issue, I am unable to upload my files directly. They just loop.

Now, using IFS, I get the same problem. More than 50% of the files loop. My Amazon trash bin is huge by now, filled with 100 MB parts! It takes up a lot of bandwidth and it won’t stop.

Even worse the sync logic of the client has many flaws!

  1. It does not update the sync status in Explorer while syncing, or it even removes all sync state icons.
  2. If I am uploading a directory structure (tree), it starts syncing one file per folder. Therefore I now have 9 files of 10-30 GB uploading over a 10 Mbps connection. (Timeouts will occur, since it tries to upload over 100 GB simultaneously!) Guess what: it deletes all progress and starts again. :frowning:
  3. The files looping on my Windows PC show up in my Mac sync client as synced, and I could try to download them. Why is that? If the Mac client thinks the upload is complete, why is my Windows client still looping on that file? (So I am not sure whether this is a timeout issue or a problem with odrive’s logic, maybe on the Mac side, maybe on the Windows side. But something is quite broken here.)
  4. If you select a “looping” file in the upload queue and choose to stop it, even with confirmation, it removes the file from the queue and adds it back a few seconds later. So that is broken too.
  5. It does not recognize folders in Explorer as syncing. I was unable to stop the sync process for an entire folder.

The huge amount of parallelism and the “DELETE AND START AGAIN” mechanism in IFS make it completely unusable at the moment.

I am unable to upload my directory structure to Amazon Drive with odrive.

I think I am really flexible here, since I can use IFS to work around some of the problems of the sync client.
But being completely unable to sync my files makes it broken.

You can’t advertise and charge money for these features if they are still this buggy/broken. 1/12 of my paid subscription is already over and I have not uploaded 90% of my data, because it just does not work. It has even cost me many hours of sending diagnostics and helping to debug.

Until now, I had to re-stage those problems with dummy files, since I want to keep my privacy.
I can’t send you any more diagnostic reports, since I cannot see what I am submitting, and I don’t have time to do it the other way around. If there is a way I can control/review what is in the report, I can send you another one.

I appreciate the feedback and I apologize for the frustration this is causing.

Is your primary use case right now bulk import? Bulk import using encryption on Amazon Drive is currently a soft spot, as you have seen, because of:

- Amazon Drive’s difficulty getting large files up (looping).
- IFS’s inability to resume when inside an encrypted container.

This is exacerbated if the client is not running against a connection that can offer very fast uploads.

Is it possible to move your very large files out of the odrive scope for now, letting odrive push up the small to medium-sized files completely, and then adding the larger files in a controlled manner?

The parallelism can be triggered when a new, large structure is dropped into the odrive scope (as opposed to it already existing when odrive starts up) and the file system events trigger multiple, separate sync operations. You may have better luck by moving the directory structure in while odrive is not running and then starting odrive. It should then pick up the structure from a single entry point (the top of the directory structure) and treat it as a single operation, avoiding parallelism in that structure.
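The difference described above can be sketched roughly (illustrative only, not odrive’s code; `plan_uploads` is a hypothetical helper): a single ordered walk from the top of the tree yields one upload queue, instead of one concurrent task per file-system event.

```python
import tempfile
from pathlib import Path

def plan_uploads(root: Path) -> list[Path]:
    # Single entry point: one ordered walk of the tree produces one upload
    # queue, which can be drained one file at a time. Dropping the tree in
    # while the client is running instead generates a task per file event,
    # which is where the unwanted parallelism comes from.
    return sorted(p for p in root.rglob("*") if p.is_file())

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    for name in ("a/one.img", "b/two.img"):
        f = root / name
        f.parent.mkdir(parents=True, exist_ok=True)
        f.touch()
    queue = plan_uploads(root)
    assert [p.name for p in queue] == ["one.img", "two.img"]
```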

I am sharing your feedback with the team and we are discussing ways to mitigate the effects you are seeing.

Hi Tony,

As I described in a different post on the same topic, we need detailed control over the upload limits!
This would solve a lot of issues. In my case, an upload to Amazon CS only succeeds if not more than ONE large file is sent at a time. But odrive tries to send ten huge files at the same time, so a payload of more than 100 GB over a 6 Mbit line. This will never succeed!
Currently there is no way to stop this by limiting the concurrent upload count to just N, so it always fails and fails and fails.
Instead, give us the possibility to limit the maximum concurrent uploads to N files and add all failed files to the end of the upload queue (with a failed marker, to visualize the next try), so that all problematic uploads stay at the very end and everything else becomes available first.

This would help a LOT until the upload to ACS is more reliable.
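A rough sketch of the suggested queue behavior, with names and details that are mine, not odrive’s (`drain`, the retry counter, and a concurrency limit of one standing in for “max N uploads”): failed files are marked and moved to the end of the queue, so everything else completes first instead of the whole batch looping.

```python
from collections import deque

def drain(files, upload, max_retries=3):
    # Process one upload at a time (concurrency limit of 1 for simplicity;
    # a real client could use a worker pool of size N). A failed file is
    # re-queued at the END with a retry count acting as the "failed marker".
    pending = deque((name, 0) for name in files)
    done, gave_up = [], []
    while pending:
        name, tries = pending.popleft()
        if upload(name):
            done.append(name)
        elif tries + 1 < max_retries:
            pending.append((name, tries + 1))  # retry later, after the others
        else:
            gave_up.append(name)
    return done, gave_up

# Simulated flaky transfer: the big file fails on its first attempt only.
attempts = {}
def flaky_upload(name):
    attempts[name] = attempts.get(name, 0) + 1
    return not (name == "big.img" and attempts[name] == 1)

done, gave_up = drain(["big.img", "small.txt"], flaky_upload)
# small.txt completes before big.img's retry succeeds
assert done == ["small.txt", "big.img"]
assert gave_up == []
```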

Btw., there is no one-by-one upload even if I drop the structure into the odrive space while it is not running. In any case, odrive tries to upload far too much data for the upload line. Please let me control this behavior myself.

