I am trying to sync a Google Drive subfolder called !_Ebooks to my laptop running Ubuntu Linux. I have already asked the odrive guys directly via Facebook how to do it, and I got a general one-liner that doesn't work, or that I can't figure out how to adapt to my specific scenario.
I have synced the main Google Drive folder and it's showing local placeholders with .cloud and .cloudf extensions, but how do I get the whole subfolder with all the files in it?
here is the one-liner:

exec 6>&1;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Dropbox/" -name ".cloud" -exec python "$HOME/.odrive-agent/bin/odrive.py" sync "{}" \;|tee /dev/fd/6); done
Hi @djxpace,
You will need to change the "$HOME/odrive-agent-mount/Dropbox/" part to the folder you want to sync. I'm guessing that is something like "$HOME/odrive-agent-mount/Google Drive/!_Ebooks"
Thank you for the reply. That's exactly what I tried, and I got an error; that's why I created this thread.
Here is the error, including the one-liner:

$ exec 6>&1;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Google Drive/!_Ebooks" -name ".cloud" -exec python "$HOME/.odrive-agent/bin/odrive.py" sync "{}" \;|tee /dev/fd/6); done
bash: !_Ebooks: event not found
The placeholder for !_Ebooks is there ("!_Ebooks.cloudf"), so I'm not sure what's happening.
The ! is used for history expansion in bash, so bash is going to barf on that character, thinking that you are referencing a historic command. You can turn off history expansion with: set +H
and then run your sync command. If you want to turn history expansion back on, it's set -H
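A quick sketch of how this behaves (the folder path is just an example). History expansion only applies to interactive shells, so it never bites inside a script; at the prompt, you can either toggle it off as above or single-quote the path, since single quotes also suppress the ! expansion:

```shell
set +H                                             # turn history expansion off
dir="$HOME/odrive-agent-mount/Google Drive/!_Ebooks"
echo "$dir"                                        # the ! now passes through literally
set -H                                             # turn it back on afterwards

# Alternative: single quotes protect ! even with history expansion enabled
echo '!_Ebooks'
```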
Much appreciated! I was away all this time, so I didn't get to try it until now. With the one-liner I now get no error message or any other output, but I had to use the full name !_Ebooks.cloudf, because without the .cloudf I get the same error that it can't be found. Not sure what I'm doing wrong, but obviously something is not playing nice.
Any other tips?
thanks a bunch
Ahh, okay. You will first need to sync the !_Ebooks.cloudf placeholder so that it is expanded into a "normal" folder, then run the sync command. So like this:
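The numbered steps from this reply are missing from the archive. Reconstructed as a sketch (assuming the default agent location in ~/.odrive-agent and the mount in ~/odrive-agent-mount), they were presumably:

```shell
# Step 1: sync the placeholder itself so it expands into a real folder
python "$HOME/.odrive-agent/bin/odrive.py" sync \
  "$HOME/odrive-agent-mount/Google Drive/!_Ebooks.cloudf"

# Step 2: confirm the expansion; the folder should now contain
# .cloud/.cloudf placeholders for its contents
ls "$HOME/odrive-agent-mount/Google Drive/!_Ebooks"

# Step 3: run the recursive sync loop over the expanded folder (as quoted
# earlier in the thread; note the -name ".cloud" pattern matches nothing,
# which is the missing-asterisk problem diagnosed further down the thread)
exec 6>&1;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Google Drive/!_Ebooks" -name ".cloud" -exec python "$HOME/.odrive-agent/bin/odrive.py" sync "{}" \;|tee /dev/fd/6); done
```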
Tony,
Excellent, thank you. I made some progress: the folder !_Ebooks looks like a regular folder now, but nothing inside it gets synced, even after step 3 that you provided. Step 3 actually produces no output whatsoever. I tried manually syncing one subfolder, but the files in it were not synced either. Inside !_Ebooks I have many subfolders with files in them. How do I sync all of that as a batch? Or do I need to sync each folder and file separately? Thanks again.
I have successfully synced the files in one subfolder one by one, but that's not what I'm after. I need to sync the whole !_Ebooks folder in one go, not file by file, as there are hundreds of files and many, many subfolders. Is there a way? Thank you.
I just realized what the problem was here. The asterisk got dropped somehow as we were copying and pasting these commands. So, this command will sync all files in a folder:
exec 6>&1;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Google Drive/!_Ebooks" -name "*.cloud" -exec python "$HOME/.odrive-agent/bin/odrive.py" sync "{}" \;|tee /dev/fd/6); done
This command will sync all files and folders in a specified folder (what you want to do):
exec 6>&1;output="go"; while [ "$output" ]; do output=$(find "$HOME/odrive-agent-mount/Google Drive/!_Ebooks" -name "*.cloud*" -exec python "$HOME/.odrive-agent/bin/odrive.py" sync "{}" \;|tee /dev/fd/6); done
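The difference between the two commands is only the find pattern, and it matters because of how odrive names its placeholders: files end in .cloud, folders in .cloudf. A small demonstration (using throwaway paths, no agent needed):

```shell
# "*.cloud"  matches only file placeholders;
# "*.cloud*" also matches .cloudf folder placeholders, so the while-loop
# expands subfolders too, then descends into them on the next pass.
mkdir -p demo
touch demo/book.pdf.cloud demo/subfolder.cloudf

find demo -name "*.cloud"    # lists only demo/book.pdf.cloud
find demo -name "*.cloud*"   # lists both placeholders
```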
You are the champ, thank you so much. I knew something was not right, and I started playing with the commands, but I wasn't sure what it was.
Last question: every time I upload, delete, or edit a file or folder, do I need to run this command again, or something else?
Thanks a lot
As long as the agent is running it will pick up and sync any local changes. On Linux the pick-up time is not as fast as on Windows and Mac. You can speed up detection by issuing a "refresh" command on the folder where the changes were made. So, if you made a change in !_Ebooks you would use this command:
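The command itself is cut off at this point in the archive. Based on the agent CLI used throughout the thread, it was presumably the refresh verb applied to the changed folder, along these lines (a hypothetical reconstruction, not the original post):

```shell
# Hypothetical reconstruction: ask the agent to re-scan this folder now
# rather than waiting for the periodic change detection
python "$HOME/.odrive-agent/bin/odrive.py" refresh \
  "$HOME/odrive-agent-mount/Google Drive/!_Ebooks"
```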