Method to automate data ripping/scraping

So I’ve recently detailed a couple of ways of going about a data hoarding project.

It’s essentially an attempt to archive every Instagram account I follow into a massive folder. I use a program that does this for any URL I input.

The current setup I have is contained in a VM.
Right now it’s “Virtual Machine” > “Desktop” > “Instagram Ripping” > “Instagram User blah blah”

I’m trying to find a solution to offload all of the local images into the cloud. I can’t simply unsync the folders (to offload them into the cloud) because I’ll eventually need to re-sync all of them so the program has the correct directory to save to — I can’t point it back at a .cloudf file.

I thought auto-unsync could work, but from what I understand the folder would eventually turn into a .cloudf file.

I’m trying to eliminate as many manual steps as possible, because eventually there will be hundreds of folders, and managing those by hand would take an unmanageable amount of time. I wonder if there is a way to get odrive to take whatever data is downloaded (at a specific folder-structure level), move it straight to the cloud, and then convert all of the contents to .cloud files.

I foresee eventually needing this for a separate YouTube scraping project that will reach terabytes, and I don’t have that kind of local storage.

Any ideas would be appreciated. If what I am trying to achieve sounds confusing I can re-explain.

Hi @christianorpinell,
Auto-unsync will only unsync files, so it could work for your purposes, but it is applied to all folders, globally.

Another idea to target specific folders is to use a script, running at certain intervals with a scheduled task. Take a look at this thread for more information:
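As a rough illustration of the scheduled-script idea, here is a minimal sketch in Python. It assumes the odrive CLI agent is installed and that an `odrive` command with an `unsync` subcommand is reachable on PATH (the CLI name and invocation vary by platform, so check your install). `RIP_ROOT` and `MIN_AGE_SECONDS` are hypothetical values you would adjust for your own setup; the age check is just one way to avoid unsyncing a folder the ripper is still writing into.

```python
import os
import subprocess
import time

RIP_ROOT = "/path/to/odrive/Desktop/Instagram Ripping"  # hypothetical path
MIN_AGE_SECONDS = 3600  # leave a folder alone until it has been quiet an hour


def folders_to_unsync(root, min_age, now=None):
    """Return subfolders of root that still contain real (non-placeholder)
    files and whose newest file hasn't changed for at least min_age seconds."""
    now = time.time() if now is None else now
    picks = []
    for name in sorted(os.listdir(root)):
        folder = os.path.join(root, name)
        if not os.path.isdir(folder):
            continue
        newest = None
        for dirpath, _dirnames, filenames in os.walk(folder):
            for fn in filenames:
                # .cloud/.cloudf entries are already placeholders; skip them
                if fn.endswith((".cloud", ".cloudf")):
                    continue
                mtime = os.path.getmtime(os.path.join(dirpath, fn))
                newest = mtime if newest is None else max(newest, mtime)
        if newest is not None and (now - newest) >= min_age:
            picks.append(folder)
    return picks


def unsync_folders(folders):
    """Hand each quiet folder to the odrive CLI so its contents are offloaded
    to the cloud and replaced with .cloud placeholder files."""
    for folder in folders:
        subprocess.run(["odrive", "unsync", folder], check=True)


if __name__ == "__main__" and os.path.isdir(RIP_ROOT):
    unsync_folders(folders_to_unsync(RIP_ROOT, MIN_AGE_SECONDS))
```

You would then run this on an interval from Windows Task Scheduler (or cron), so new rips are offloaded without any manual steps.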