When uploading, some files report progress over 100%.
Many large files get only part way through before dropping back to 0% and restarting.
Using the Mac client with ACD, version 4957 I think (the number listed above the quit option).
What’s causing these issues?
It is difficult to say. It could be a number of things. How large are the files? Can you submit a diagnostic from the odrive tray menu the next time you see one of the files behave this way, and then provide the name of that file?
I keep having this issue.
It’s been happening for months. My file is only around 700 MB, but I am having to use the OneDrive web interface to upload files like this because the uploads never complete! It’s very frustrating.
Should I be submitting a diagnostic?
Yes please submit a diagnostic and give us the path of the file that is having issues.
Here’s the path: C:\Users\reese\odrive\OneDrive\Personal\Videos\Pirate Galaxy\CQs\Dark Brotherhood Clan\
The filename: PG-CQ-Antares-Goya-10-03-2015.flv - 766 MB (803,469,172 bytes)
I have submitted a diagnostic.
@Tony Have you made any progress on diagnosing the problem? Is it a timeout issue?
I took a look. OneDrive thinks that you do not have a current session for the upload; it is expiring after an hour. We have logic to try to combat this, so I am not sure why it is not preventing the problem. We will need to take a deeper look.
Unfortunately, OneDrive validates the session at the end of the upload instead of the beginning, and its tokens expire after an hour…
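To illustrate the problem (this is a hypothetical sketch, not odrive's actual code — the names `UploadSession`, `SESSION_TTL`, and `RENEW_MARGIN` are made up for the example): if the provider only validates the session at the end, the client has to renew proactively before the one-hour window closes.

```python
from dataclasses import dataclass
import time

# Hypothetical illustration: some providers invalidate an upload session
# after a fixed window (about an hour). If the session is only validated
# at the *end* of the upload, the client must renew it preemptively.

SESSION_TTL = 3600    # seconds until the provider invalidates the session
RENEW_MARGIN = 300    # renew this many seconds before expiry, to be safe

@dataclass
class UploadSession:
    started_at: float  # epoch seconds when the session was created

    def needs_renewal(self, now=None):
        """True once we are within RENEW_MARGIN seconds of expiry."""
        now = time.time() if now is None else now
        return (now - self.started_at) >= (SESSION_TTL - RENEW_MARGIN)

session = UploadSession(started_at=0.0)
assert not session.needs_renewal(now=1000.0)   # plenty of time left
assert session.needs_renewal(now=3400.0)       # inside the renewal margin
```

If the renewal never fires (or the provider ignores it), a long-running upload of a large file can outlive its session and be rejected only at the very end, which matches the restart-to-0% behavior described above.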
I hope you make some progress soon - it is quite bad. Again, it is stopping me from uploading most files larger than 300 MB. I suppose it has something to do with the speed of my internet connection, but Microsoft’s own OneDrive sync client handles this much better than odrive currently does…
Any ETA on a fix for this? I see you fixed some upload issues for Amazon S3 - really hoping you can sort out OneDrive too. It’s really annoying that I have to use OneDrive’s web interface to upload a file that odrive cannot…
Can you submit another diagnostic and tell me the file name? I want to make sure it’s the same issue I was seeing previously.
I used Google Drive this time, and I am getting the same problem. odrive managed to upload a 300 MB file to Google Drive, but my 800 MB file keeps restarting. I left it running all night; I’m sending a diagnostic now.
On a partially related note, I have observed that the number of upload errors for large files (the same file requiring multiple upload attempts) is inversely proportional to network speed. Uploading from our office with a fast (>2 Gbps) connection rarely causes errors; I have seen only one or two 3+ GB files require a second or third attempt. From a home (~100 Mbps) connection, errors are more frequent. Go to a location with a ~25 Mbps connection, and enough files require a second or third attempt that it’s noticeable. Uploading large files from locations with <2 Mbps connection speeds becomes an exercise in frustration.
None of the native clients involved (S3, Box, Dropbox, Google, ACD, etc.) exhibit this problem. Most of the files uploaded with odrive, however, go to encrypted locations, so a direct comparison of odrive vs. native is impossible. Note also that I’m not counting the small but not insignificant number of encrypted odrive uploads that fail because the encrypted filename is incompatible with the storage bucket in use.
In your case, are you seeing this with a specific source or all sources? Which sources do you use?
The core issue with uploads is resume capability. Maintaining a stable network connection over time is problematic. Some sources, like OneDrive, have resume capability, so it’s possible to recover from transmission failures. Other sources, like Amazon Cloud Drive, do not, so the only option is to retry from the beginning.
One general solution we’ve been thinking of for large files is automatic sharding. odrive would transparently break large files into a series of smaller files on upload. On download, odrive would automatically reassemble the shards. The idea would make upload recovery straightforward and simultaneously boost performance by allowing parallel transfers.
This approach stores the files remotely as shards, which means they are in a non-native format, which can be a drawback. But if you are already using encryption, that is not a big deal.
If we had this feature, would you use it for large files?
I have seen this problem with Box for Biz, Google Cloud (Standard and DRA) and S3 Standard for work-related files and both ACD and Google Drive for personal data.
Splitting files into shards sounds ideal for encrypted storage, as long as the shards include some data integrity checks to ensure reassembled files aren’t corrupt. I would not want this to be the default for standard, non-encrypted buckets.
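The shard-with-integrity-check idea could look something like this (a minimal sketch under stated assumptions — fixed-size shards and SHA-256 per shard; odrive has not said how it would actually implement this):

```python
import hashlib

SHARD_SIZE = 8  # bytes; a real implementation would use something far larger

def shard(data):
    """Split `data` into fixed-size shards, each paired with its SHA-256."""
    shards = []
    for i in range(0, len(data), SHARD_SIZE):
        piece = data[i:i + SHARD_SIZE]
        shards.append((piece, hashlib.sha256(piece).hexdigest()))
    return shards

def reassemble(shards):
    """Verify every shard's checksum, then concatenate the pieces back."""
    out = bytearray()
    for piece, digest in shards:
        if hashlib.sha256(piece).hexdigest() != digest:
            raise ValueError("shard failed integrity check")
        out.extend(piece)
    return bytes(out)

original = b"large file contents go here"
assert reassemble(shard(original)) == original

# A corrupted shard is caught before reassembly completes:
bad = shard(original)
bad[0] = (b"XXXXXXXX", bad[0][1])
try:
    reassemble(bad)
    assert False, "corruption should have been detected"
except ValueError:
    pass
```

Because each shard carries its own checksum, a bad shard can be detected and re-downloaded (or re-uploaded) individually, rather than invalidating the whole file.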