Upload speed slowing down after a while

Hi,

currently I'm uploading a lot of files to ACD. When I start uploading the files via odrive I'm getting really good upload speeds; odrive maxes out my internet connection (45 Mbit/s upstream). I notice that after a few hours of uploading, the speed drops drastically, to maybe 4 Mbit/s. The odrive menu also feels a bit unresponsive after some hours. Once I restart odrive, speeds suddenly go back up to the maximum. Since I'd like to upload my files as fast as possible, this is a bit of a downside of odrive. Is this a known problem? Is there a fix for this behavior? Thanks in advance!

Hi @aletsee,
Can you tell me the makeup of the files that are transferring? Are they small, large, or a mix of the two?
Are you noticing a gradual decline in speed over the time period, or is it a sudden drop?
You stated a “few hours”. Is this something like 3-4 hours?

Thanks!

Hi Tony,

Thanks for the quick reply. It's a mixture of a lot of raw photos (maybe 50,000-70,000, 50 MB each) and large Blu-ray MKV files (up to 15 GB each). It's a gradual decline; a timeframe of about 4 hours seems right. I'm on the latest macOS Sierra.

Thanks for the details. I am going to see if we can reproduce this.

One thing I wanted to clarify:
It is possible that the number of concurrent transfers is being reduced as you progress through the uploads. This could result in a slowdown of overall speed as the simultaneous streams are reduced. Amazon Drive is an integration that performs much better with more than one transfer, if your bandwidth can support it.

Are you able to see what files are uploading during the ~4 hours, and if it corresponds at all to the concurrency? You can check the status of the uploads from the odrive tray menu.

You can also potentially get more details by running this CLI command from the terminal on macOS:

python $(ls -d "$HOME/.odrive/bin/"*/ | tail -1)odrive.py status --uploads
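If you want to keep an eye on the concurrency over time rather than checking by hand, a small polling script can log that command's output periodically. Here is a minimal sketch in Python (the log path and five-minute interval are arbitrary choices, and it assumes each active upload shows up as one non-empty line of status --uploads output):

#!/usr/bin/env python
# Minimal sketch: periodically log how many uploads the odrive CLI reports,
# so a drop in concurrency can be lined up with the observed slowdown.
import glob
import os
import subprocess
import time

# Pick the most recent CLI folder, mirroring the `ls -d ... | tail -1` above.
cli = sorted(glob.glob(os.path.expanduser("~/.odrive/bin/*/odrive.py")))[-1]

while True:
    out = subprocess.check_output(["python", cli, "status", "--uploads"])
    lines = [l for l in out.decode("utf-8", "replace").splitlines() if l.strip()]
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    with open(os.path.expanduser("~/odrive-upload-log.txt"), "a") as log:
        log.write("%s  %d active uploads\n%s\n\n" % (stamp, len(lines), "\n".join(lines)))
    time.sleep(300)  # poll every 5 minutes (arbitrary interval)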

I don't think I'm running out of uploads; there are many files left.
I can't change the number of files odrive uploads simultaneously, can I?

Here are the details of the terminal output:

Endrohre_01.psb 32%
SP_Kofferraum_01.psb 19%
SP-Line_3-4Front_02.psb 23%
SP-Line_3-4Heck_43638_01.psb 8%
Spiegelkappen_01.psb 46%
SP-Line_Fahrerseite_01.psb 15%
The Constant Gardener.mp4 34%
Front-01.psb 18%
SP_Anhängerkupplung_01.psb 95%
Felge_01.psb 28%

Thanks @aletsee

We are currently working on a number of things that will improve upload performance, control, and consistency.

See if you can keep an eye on the concurrency as you notice things slow down. We will see if there is any correlation.

@aletsee: Odrive appears to only upload one file at a time per folder (sometimes per folder tree). This behavior limits users with faster internet connections. For example, running odrive at the office with a speedy connection, I have seen upload bandwidth over 400 Mbps (barely over 10% of our total bandwidth) when odrive is uploading multiple sync folders, each of which contains many subfolders. Odrive consistently throttles each upload slot to 2-5 Mbps no matter what cloud provider is being used or how much bandwidth is available.

Uploading a single folder full of large files is a worst-case scenario (again, only for those with high-bandwidth pipes). Odrive only processes one file at a time and does so comparatively slowly. In these cases, I recommend using either the cloud provider's native interface or a different sync tool that does not have odrive's limits.

I'm having identical issues with an Aperture photo library - about 200GB worth. If I kill and restart odrive, I see very good transfer rates for a few minutes before it reverts to its painfully slow behavior.

I seem to have hit all the criteria - very fast internet connection, many, many files (with lots of RAW in there as well), and using ACD.

Using the native uploader isn’t an option for two reasons: ACD’s uploader sucks, and I’m using odrive’s encryption capabilities to keep my library secure.

Any thoughts or insights on how to speed this up? I am seriously contemplating a script to shut down and restart odrive every 10 minutes… but that is a terrible, terrible hack I shouldn't need, don't you think? :slight_smile:

An additional UI note – after ~15-20 minutes, the menubar icon becomes unresponsive when I click on it…it won’t respond to click actions at all.

Thanks!

I should add…this is on a Mac, latest Sierra, etc, etc.

Hi @HJD,
Upload behavior is one of the key things we are currently focusing on in Engineering.

What I would be interested to know is whether the slowness you are observing is because concurrency is reduced after odrive “settles”. This is the behavior I would expect, especially from Amazon Drive. Amazon's single-file upload speed is fairly slow, comparatively, but each concurrent stream (to a point) multiplies the total transfer speed. Other services have faster single-stream upload, so the drop in concurrency isn't noticed as much as it is for Amazon, because the overall speed is not as affected.

Interestingly enough, the larger concurrency you tend to see on an odrive restart is actually unintended behavior. On init, the current odrive client has a high chance of not falling under the intended concurrency parameters. On fast connections this can result in a very high amount of concurrency and faster overall speed on Amazon Drive. Once files start completing, the concurrency drops to the intended levels, which may seem too conservative (i.e. slow) for very fast connections like yours.

In our next major release this whole piece will be overhauled, allowing better control and consistency.

Any updates on this? I am having the same issue: it is fast (80mb/sec) for a few hours, then slows to 5mb/sec.

I'm trying to upload 30 TB (all from external drives) to Google Drive. It is going, I just have to restart odrive every 6 hours!

Hi @egarner,
As per my post above, do you notice a difference in concurrency, or are you seeing this difference with single-file uploads? I upload to Google Drive a lot, but I haven’t performed anything close to a sustained upload with 30TB.

It sounds like you are uploading ~1.75 - 2TB and then things start slowing down?

No, about 300GB uploads fast (50-80mb/sec), then things slow to a crawl after that. Restarting gets the fast speed again.

I’m close to just writing a cron job to stop and restart odrive… just to get all this initially uploaded.

Hi @egarner,
Sorry, I read the mb initially as megabytes instead of megabits.

Generally I will see this behavior because, on start, odrive will end up uploading many large files at the same time and then taper off to only one large file at a time. By design it is only supposed to upload one large file (over 100MB) at a time, but there is a bug where, on start, it can end up uploading several at the same time. For faster connections this can lead to improved speeds. I explain it more here: Upload speed slowing down after a while

This is something we are improving in the next major release, but it is not ready yet.

If you want to schedule a script/task to shut down and restart, I would suggest using the CLI to perform the shutdown. There was a thread here that touched on some similar ideas, if you want to take a look: Odrive CLI shutdown- Finish uploads option
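For anyone who does go the scheduled-restart route in the meantime, here is a rough sketch of what that could look like, run from cron or launchd. It assumes the same CLI path as the status command earlier in the thread, that the CLI accepts the shutdown command discussed in that topic, and that the desktop app relaunches on macOS with open -a odrive (adjust the app name if yours differs):

#!/usr/bin/env python
# Rough sketch of a scheduled odrive restart for macOS (run via cron or launchd).
import glob
import os
import subprocess
import time

cli = sorted(glob.glob(os.path.expanduser("~/.odrive/bin/*/odrive.py")))[-1]

# Ask the sync engine to shut down cleanly rather than killing the process.
subprocess.call(["python", cli, "shutdown"])

# Give it a moment to exit before relaunching the desktop app.
time.sleep(30)
subprocess.call(["open", "-a", "odrive"])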

Seeing the same issue. I rsync'd diffs between 3 different machines to clean up about ~300GB of mixed files [images, code, documents, maybe a handful of "large" files]. Discovered odrive and am now trying to use it as a sync solution between these machines. Using Amazon S3, ACD, and Google Drive - trying to consolidate on S3, but things are taking a very long time to go up. I have symmetric gigabit fiber and I'm only seeing ~20-30mbps of network utilization at peak. Often far slower. How can I saturate my pipe?

Hi @aaron.caito,
At this point, depending on the storage source, you may not be able to saturate the connection. Since odrive will likely reduce down to one concurrent upload for files over 100MB, it becomes dependent on the storage source’s single-file upload capabilities. My own testing has shown that Google Drive’s is pretty good, but S3 … not so much.

Here is a sample of some upload testing I performed a while ago:

@Tony

I'm not familiar with odrive's architecture. Is it a wrapper for API requests per item in the sync list? Can I pass command-line arguments to the agent to tune concurrency, or create rules for concurrency based on file size? I do AWS consulting as my day gig and am rather busy to dig too deep for personal projects, but we could try to get a bit deeper on this.


http://docs.aws.amazon.com/AmazonS3/latest/dev/PerformanceOptimization.html
https://aws.amazon.com/about-aws/whats-new/2016/04/transfer-files-into-amazon-s3-up-to-300-percent-faster/

Testing on the following site: http://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-speed-comparsion.html
I was able to achieve this with transfer acceleration enabled: http://i.imgur.com/VTnvQjm.png
Results: add this key to the share-results link to see my output (I can only add 2 links as a new user on these forums): ?result=29175-9045-7411-3319&identityId=unknown

Even for regions close to me, with not much gain from acceleration, I'm still uploading to S3 at ~400Mbps via however they run these tests. Via odrive I rarely see anything above 10Mbps.

Hi @aaron.caito,
There are ways we can optimize our upload to S3 by performing chunked uploads to create concurrency, even for a single file. These are slated as enhancements to the S3 integration. Our next major version of odrive will also allow file-concurrency tuning, so you can set the number of simultaneous files yourself, overriding any defaults.
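To illustrate the kind of single-file concurrency being described here (outside of odrive, which does not expose this yet), the sketch below uses boto3 to upload one large file to S3 in concurrent chunks; the file name, bucket, key, and tuning numbers are placeholders only:

#!/usr/bin/env python
# Illustration only: multipart (chunked) upload of a single large file to S3
# with several parts in flight at once. Not an odrive setting.
import boto3
from boto3.s3.transfer import TransferConfig

config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # switch to multipart above 100 MB
    multipart_chunksize=16 * 1024 * 1024,   # 16 MB parts
    max_concurrency=8,                      # up to 8 parts uploading at once
)

s3 = boto3.client("s3")
s3.upload_file("movie.mkv", "my-example-bucket", "backups/movie.mkv", Config=config)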

Right now there really isn't anything that can be done to override the sync engine's natural concurrency limitation. The CLI tells the sync engine what to do, but it is fairly coarse-grained, and not granular enough to dictate concurrency.

Glad to hear concurrency management is on the roadmap! (Is there a public roadmap anywhere?) Subscribed today.

Hi @aaron.caito,
We do not have a public roadmap available, but there are a ton of things in the works and we are pushing hard to get the next huge release out.

We will announce any relevant news in the Announcements section, which you can subscribe to for updates: