I am looking for a cloud client that automatically syncs with different types of cloud providers. I’ve tried rclone, but it has its limitations, as rclone mount is experimental. Then I stumbled upon odrive, which I had hoped would solve this problem. The documentation here is overly sparse, and the blog post here lacks clear examples and is overly complicated.
I started the odriveagent, which to me seems pointless to have, because whatever functionality it provides could be provided by the odrive client alone. Then there is this language about “placeholders” which doesn’t quite make sense to me. Then there is mount, which I had assumed was a regular mount point handled by the kernel, because there is no documentation that clearly explains what it does. I ran odrive.py mount /home/***/Cloud/OneDrive /OneDrive/. To my dismay, the folder is empty. Am I missing something here?
An odrive agent mount is essentially just a relationship that is defined between a folder on your local system and a folder in the remote storage. The remote storage needs to be linked to your odrive account.
What you wrote above should work, provided you have linked your OneDrive account to odrive. Listing the contents of /home/***/Cloud/OneDrive should show the contents of that remote folder. If it is empty, try issuing a “refresh” on the folder.
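In case it helps, that sequence can be sketched as shell commands. The agent install path below is an assumption (a common location for the Linux agent); adjust it, and the folder paths, to your setup:

```shell
# Assumed agent location; override ODRIVE if your install differs.
ODRIVE="${ODRIVE:-python $HOME/.odrive-agent/bin/odrive.py}"

# Define the local<->remote relationship, then ask the agent to rescan now.
mount_and_refresh() {
    $ODRIVE mount "$1" "$2"    # $1 = local folder, $2 = remote path
    $ODRIVE refresh "$1"       # immediate rescan so the contents appear
}

# usage: mount_and_refresh "$HOME/Cloud/OneDrive" /OneDrive
```

After that, listing the local folder should show the remote contents (as placeholders).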
Important Note
After getting the above to work, you may find that odrive still isn’t going to provide what you desire. Given that you originally wanted to use rclone mount, which uses FUSE to provide a virtual filesystem, I assume you are trying to perform more of a “direct access” to the cloud storage. odrive is a sync engine, so it will sync the specified content to the local system for access (and vice versa), instead of trying to stream it directly, for example. I know there are many folks who want to use Amazon Drive’s unlimited storage as a rudimentary streaming server for their media library. odrive can’t perform this type of realtime streaming functionality, at least not currently.
Sorry it took this long to respond to this thread. As frustratingly confusing as it was initially, I decided to simply run the odrive desktop client on my Windows Server and CIFS-mount that directory to keep things in sync. That is how I figured out the purpose of the opaque placeholders, and I now better understand odrive.py sync and odrive.py mount. However, it doesn’t have the desired behavior: placing new files or folders in the sync folder doesn’t automatically sync them with the cloud. This resulted in me losing data, because I had assumed the data was automatically synced, which I thought was the purpose of the odrive agent.
I get this: Unable to sync /home/hunter2/Cloud/OneDrive Already an odrive folder.
Technically, I just want files and folders to sync automatically when they are modified, to avoid data loss. It doesn’t necessarily have to be a FUSE mount.
Do I have to run this command every time I modify, add, or remove contents from the sync directory?
Thanks. Although I understand placeholders now after some experimentation, the documentation is still lacking.
EDIT
isActivated: True
hasSession: True
email: lutchy.horace@outlook.com
accountType: OneDrive
syncEnabled: True
version: prod 916
placeholderThreshold: neverDownload
autoUnsyncThreshold: never
downloadThrottlingThreshold: unlimited
uploadThrottlingThreshold: normal
autoTrashThreshold: never
Mounts: 1
xlThreshold: never
Backups: 0
Hi @lutchy.horace,
So it looks like you are mounting an odrive folder on a Windows machine, is that correct? You do not want to have two separate sync engines working on the same exact data. In this case you have odrive monitoring the folder on your Windows server, correct? If you try to sync to that same mount using the Linux agent, it’s going to error, because it can identify that the Windows server is already monitoring that same folder.
Since you have it set up this way, you actually should be able to just add files to the mount on Linux and the Windows server will pick it up, without having to run the odrive Linux agent at all.
What is the path to that folder, as it is seen by the Windows server?
Hi @lutchy.horace,
When you shut down the agent, you do not need to unmount first. When you start again it will mount what was mounted previously.
I think the sync scenario you described is just a matter of refresh. We do not have a method to monitor filesystem events on Linux, currently, so odrive will actively scan the defined mount occasionally, but not continually, since it can be expensive.
You had asked this above, but I was preoccupied with figuring out the setup. To answer your question, if you want something to sync up immediately, call refresh on the folder you placed the item into. That will tell odrive to look in that location for changes, immediately. odrive should pick it up then and sync. If you don’t do this, it should sync up on the next scan, but that could take up to an hour.
Then the question is, is the scan interval tunable? Interestingly, when I modify a file that has previously been synced, it appears to sync the file immediately, at least according to odrive.py syncstate [path]. I personally don’t mind the scan being taxing, and an hour is way too long. Then again, I assume it will sync when the agent is next started, in cases where I shut down the PC before the next scan? That would unfortunately leave the changes unsynced when I move to another device.
Perhaps I can create a systemd service unit with a timer that calls odrive refresh every 30 seconds or so. Since I don’t have many files that need to be scanned for changes, it’s not really ideal, but doable.
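A minimal sketch of that idea, for anyone following along. The unit names, agent path, and mount path below are assumptions; adjust them to your install:

```ini
# /etc/systemd/system/odrive-refresh.service
[Unit]
Description=Refresh the odrive agent mount

[Service]
Type=oneshot
User=hunter2
ExecStart=/usr/bin/python /home/hunter2/.odrive-agent/bin/odrive.py refresh /home/hunter2/Cloud/OneDrive
```

```ini
# /etc/systemd/system/odrive-refresh.timer
[Unit]
Description=Run odrive refresh every 30 seconds

[Timer]
OnBootSec=30
OnUnitActiveSec=30
# systemd's default timer accuracy is 1 minute; tighten it for a 30s interval
AccuracySec=1

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now odrive-refresh.timer.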
Back to rclone mount: I can’t open documents when the program needs to seek, which is another reason I am looking for an alternative.
Any plans to implement such a method, at least via inotify?
Hi @lutchy.horace,
There isn’t a way to override the odrive interval currently.
For scheduling refreshes, I have implemented something similar to perform a recursive refresh on a Linux odrive structure every five minutes. Just keep in mind that a refresh will check both local and remote, so it can get expensive if the structure is large. Also keep in mind that the CLI refresh is not recursive and only refreshes the folder specified, so you need to add the recursive logic yourself.
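A minimal sketch of such a recursive refresh, using find to walk the structure. The agent path is an assumption; override ODRIVE to match your install:

```shell
# Assumed agent location; override ODRIVE if your install differs.
ODRIVE="${ODRIVE:-python $HOME/.odrive-agent/bin/odrive.py}"

# Refresh a folder and every subfolder beneath it, since the CLI
# "refresh" command only touches the single folder it is given.
recursive_refresh() {
    find "$1" -type d | while IFS= read -r dir; do
        $ODRIVE refresh "$dir"
    done
}

# usage: recursive_refresh "$HOME/Cloud/OneDrive"
```

Run this from a timer (cron or systemd) at whatever interval suits the size of your structure.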
I’m not sure on this one. Is the file actually synced, and the new content is on the remote now? It’s possible that a remote event triggered a local refresh which picked up the change prior to the local scan. It could also be that it fell into the local scan window at that moment.
Also, to clarify what I said above, the normal, expected interval for picking up local changes is 15 minutes. Depending on what else is happening on the system and when the change is made in relation to agent startup, it can be longer (worst case between 30 and 60 minutes), but generally it should be a 15-minute interval.
Actually doing this is pretty easy with a simple find command. Here is an example from another user: