I am using the odrive CLI on a Linux machine to sync files to my Google Drive account. It was easily handling everything I threw at it until I got to a 53GB file, at which point everything stopped.
Is there a file limit with odrive that prevents really large files from syncing?
Looking at the Google Drive specs, it says you can upload files as large as 5TB.
odrive doesn’t impose any limits, but I know Google has a daily upload limit of 750GB. Is it possible you hit that?
I don’t think so. It stopped at 135GB… two days later it started syncing again and got to 270GB… I expect tomorrow it’ll get to 405GB.
We’ll see. Checking Google, I see some limits, but they are all over the place and everyone’s answer seems to be different.
When you see it happening again you can send a diagnostic from the odrive menu and I can see if there are any strange exceptions that odrive is hitting when trying to upload. Just let me know.
I am using the Python CLI agent… does it have any diagnostics I can pull?
I see a log directory; there is nothing in the error file. The backup and agent files don’t show anything abnormal.
Sorry for the confusion. The agent doesn’t have diagnostic capabilities.
What do you see when you run the status command after syncing seems to stop/pause? Are there any items in any of the queues? If so, can you run the detailed status commands to see if there is any information listed?
To use the CLI commands:
Open a terminal session:
Copy and paste the following command into the terminal and hit enter:
python $(ls -d "$HOME/.odrive/bin/"*/ | tail -1)odrive.py status
This will return a summary of odrive status. To get more detailed you can add parameters to the status command like this:
python $(ls -d "$HOME/.odrive/bin/"*/ | tail -1)odrive.py status --uploads
python $(ls -d "$HOME/.odrive/bin/"*/ | tail -1)od…
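If you want to check several queues in one pass, the per-flag status calls above can be wrapped in a small loop. A sketch, assuming the same install path as the commands above; the exact set of status flags on your build can be confirmed with `odrive.py status --help`:

```shell
# Resolve the odrive.py path the same way as the commands above.
ODRIVE_PY="$(ls -d "$HOME/.odrive/bin/"*/ 2>/dev/null | tail -1)odrive.py"

# Print each requested status queue under a labeled header.
check_status() {
    for flag in "$@"; do
        echo "== status $flag =="
        python "$ODRIVE_PY" status "$flag" || true  # keep going even if one call fails
    done
}

# Example (uncomment to run against a live agent):
# check_status --uploads --waiting
```

This just labels each queue’s output so it’s easier to paste back into the thread when something looks stuck.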
Background Requests: 1
For the status of uploads, it shows 1 file 100% done, but it’s not showing in Google Drive. Checking the status in odrive, it says it’s still active.
For waiting, it shows the next file that will be uploaded.
My guess is there is some hangup on the file being uploaded, where it’s not marked “complete” even though it is.
What command can I use to stop that file from syncing and have it continue with others?
You can add a character like ~ to the beginning of the file name to categorize it as ignored (as per the ignored prefixes list here: https://docs.odrive.com/docs/sync-changes#section--ignore-list-). If it is actually in a bad processing state, it may not pick up this change properly, however.
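For example, a rename like the following (shown in a temp directory here; in practice you’d run the `mv` inside your odrive folder, and the file name is just a stand-in). Note the quotes around the ~, so the shell doesn’t expand it to your home directory:

```shell
# Demo in a temp dir; in practice, run the mv inside your odrive folder.
cd "$(mktemp -d)"
touch stuck-file.bin                 # stand-in for the stuck upload
mv stuck-file.bin '~stuck-file.bin'  # quote the ~ so the shell doesn't expand it
ls                                   # shows: ~stuck-file.bin
```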
If you send the shutdown command and then start the agent again, does it process normally after that?
How long does it take to get into this state?
If you wait long enough it ends up proceeding normally again?
If I shutdown and restart, it starts to process normally.
It’s random when it happens. Sometimes after 100GB, sometimes after 20GB.
Sometimes it proceeds normally, sometimes it does not.
It recently just uploaded my 50GB file, so I think I am good. I will just watch, and when it seems to get stuck I’ll shut it down and start again.
Is there a way to limit the uploads to one at a time? When I check, it’s doing 2 or 3 at a time; I’d like it to do them one at a time.
Thanks for the update.
There isn’t a way to tell odrive how many concurrent uploads to perform. It has some logic to decide that intelligently, although sometimes it doesn’t make the correct decisions. I have occasionally scripted a copy/move-then-sync flow for times when I needed to control it.
There is a basic script example for editing a file, waiting for it to sync, and then unsyncing it in this blog here:
You could adapt that to copy, wait for sync to finish, then copy the next file, etc… if you wanted to try to control the flow to guarantee only one file at a time.
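A minimal sketch of that flow, with the same assumed install path as the status commands earlier in the thread. The wait helper treats an empty `status --uploads` output as “queue drained” — if your build prints a header line even when the queue is empty, you’d want to grep for file entries instead. The staging and destination paths in the commented usage are hypothetical:

```shell
# Resolve the odrive.py path as in the earlier status commands.
ODRIVE_PY="$(ls -d "$HOME/.odrive/bin/"*/ 2>/dev/null | tail -1)odrive.py"

# Poll the given status command until it reports nothing left to upload.
# POLL_INTERVAL (seconds) defaults to 30.
wait_for_uploads() {
    while [ -n "$("$@" 2>/dev/null)" ]; do
        sleep "${POLL_INTERVAL:-30}"
    done
}

# Usage sketch (uncomment and adjust the paths for your setup):
# for f in /staging/*; do
#     cp "$f" "$HOME/odrive/Google Drive/backup/"
#     wait_for_uploads python "$ODRIVE_PY" status --uploads
# done
```

Copying one file in, waiting for the upload queue to drain, then copying the next guarantees only one file is in flight at a time, at the cost of losing any parallelism odrive would otherwise use.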
There are also some other examples in “Helpful Tips” here that could be useful:
Odrive Sync Agent: A CLI/scriptable interface for odrive’s Progressive Sync Engine for Linux, OS X, and Windows
Cool, thank you for the tips.
I have a new issue, but I will start a new topic for that.
Thanks for the information.