Put Cloud Mapping + Info INTO Placeholder Folder/Files

Presently it appears that the oDrive PC/desktop app maintains cloud-mapping info for unsynced placeholder folders/files (I’ll call these “objects”) in a database/list (JSON?) located somewhere within its program space on a person’s PC desktop environment.

(I reach this simple conclusion from two observations down here in userland: 1.) the placeholder objects are zero-sized, so they contain no information of their own; and 2.) it’s not possible to move these placeholder objects - doing so breaks their ability to re-sync back to their referenced cloud-resident objects. Thus, the mapping for each placeholder object must be created and kept in a static DB/list used by the app somewhere in the PC/desktop environment.)

Placing each placeholder object’s cloud-mapping info INSIDE the placeholder file itself (thus making it larger than zero bytes), either in addition to or instead of the present DB/list approach, could enable a load of kool new capabilities presently absent in oDrive (a rough sketch of what such an embedded payload might look like follows the list below):

1.) placeholder objects could be moved and re-synced to ANY new/different location anywhere within a user’s PC desktop environment (and/or also onto any attached/mapped ext HD);

2.) original file-size info (and other useful original-file attributes) could be embedded within the placeholder object (in addition to the cloud-mapping/remote-directory-tree info) enabling reporting of this useful search meta-info back to userland;

3.) embedding oDrive UserID info could enable a user to move or copy any of their placeholder objects, on any platform, over to any of their other PC / server / tablet / smartphone devices also running oDrive. (When the oDrive app on any of these devices receives a request to re-sync a placeholder file, this embedded UserID info can help map the request back to the cloud account authenticated during initial setup.)
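
For illustration only, here’s a rough sketch (in Python) of what such an embedded payload might look like if it were stored as JSON inside the placeholder file - every field name and path below is hypothetical, not an actual oDrive format:

```python
import json

# Hypothetical payload a non-zero-byte placeholder could carry.
# None of these field names come from oDrive; they only illustrate the idea.
placeholder_payload = {
    "cloud_path": "amazon-cloud-drive:/videos/movies/example_movie.mp4",  # cloud-mapping / remote-tree info
    "original_size_bytes": 1473582080,                                    # original file-size info (item 2)
    "original_modified_utc": "2016-11-02T18:27:44Z",                      # another useful original-file attribute
    "odrive_user_id": "user-1234",                                        # embedded UserID info (item 3)
}

blob = json.dumps(placeholder_payload).encode("utf-8")
print(len(blob), "bytes")  # comfortably under a 512-byte target size
```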

If such capabilities were ever to make it into oDrive, the maximum placeholder object size should be kept below the smallest cluster (allocation unit) size supported by the userland desktop OS… in Windows environments, that’s 1 KB or even 512 bytes. The benefit of keeping placeholder objects under this size is that it encourages creating and using HD partitions optimized specifically for placeholder storage.

Today my media files are stored on local HD partitions all formatted with the maximum 64 KB cluster size. For these (usually) very large files, that large allocation unit makes the best (most storage-efficient) sense. As my use of oDrive placeholders on these HDs grows, however, that 64 KB cluster size becomes ridiculously onerous: once placeholders carry embedded info, each tiny one will consume a full 64 KB cluster, and I’ll lose a large amount of storage space to these effectively “zero-sized” objects!

Since these small placeholder folders/files are so special and important, they warrant their own dedicated HD partition (perhaps local/internal HD space I’ll designate as my “D” drive). So, rather than wasting 64 KB on each (empty or very small) placeholder object, I’d spend only 512 or 1,024 bytes on each… and as the count of these placeholder objects grows, I’d save a very large amount of local HD space by storing them only in this special partition.
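
To put rough numbers on that (the placeholder count here is purely invented):

```python
# Back-of-the-envelope only; 200,000 placeholders is an invented count.
placeholders = 200_000
cluster_media = 64 * 1024   # 64 KB allocation unit on my media partitions
cluster_small = 512         # 512 B allocation unit on a dedicated "D" partition

waste_media = placeholders * cluster_media   # ~12.2 GiB eaten by tiny placeholders
waste_small = placeholders * cluster_small   # ~98 MiB on the dedicated partition

print(f"64 KB clusters: {waste_media / 2**30:.1f} GiB")
print(f"512 B clusters: {waste_small / 2**20:.1f} MiB")
```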

If anyone’s concerned about forgetting where in their expansive local storage space these placeholder objects were originally created (to represent an un-synced relation to an external cloud-stored object), they can either interrogate a placeholder object’s embedded metadata, or replicate their expansive local directory tree structure inside the special “Placeholder Object Drive D” space. Once they want to re-sync any placeholder object, users would either have to manually move it back to its original (or a new/different) directory location (which will be using much larger / more efficient cluster sizes than the “D” partition), or perhaps the oDrive process could somehow help out with these details and make the process feel a bit more “automatic” for the user…

I think this enhancement in the oDrive system design could bring an explosion of new cloud-management capabilities down to users’ desktops… and to other (more often offline) devices like phones and tablets as well.

I see where you’re going with this, @amazon1, but I don’t think it will work quite as smoothly as you envision.

odrive works on a principle of mapping a local folder structure to a corresponding remote folder structure.

Take this simplistic use case for example:

I have my odrive ACD folder full of .cloud placeholder files:

~/odrive/ACD/ebooks/fiction/hamlet.pdf.cloud
~/odrive/ACD/ebooks/nonfiction/dictionary.pdf.cloud
~/odrive/ACD/videos/tvshows/pilot_episode.mp4.cloud
~/odrive/ACD/videos/movies/breakfast_club.mp4.cloud
~/odrive/ACD/shakespeare/

This corresponds with the “root” of my ACD folder structure in the cloud:

{amazon cloud}/ebooks/fiction/hamlet.pdf
{amazon cloud}/ebooks/nonfiction/dictionary.pdf
{amazon cloud}/videos/tvshows/pilot_episode.mp4
{amazon cloud}/videos/movies/breakfast_club.mp4
{amazon cloud}/shakespeare/

We then move the local ~/odrive/ACD/ebooks/fiction/hamlet.pdf.cloud file to be ~/odrive/ACD/shakespeare/hamlet.pdf.cloud … that’ll work just fine. The corresponding ACD “cloud” location exists.

However, what if I move ~/odrive/ACD/ebooks/fiction/hamlet.pdf.cloud to ~/Desktop/hamlet.pdf.cloud ?

odrive only knows what to do with .cloud files inside the local ~/odrive/ACD/ structure. ~/Desktop is outside of that structure. What does that correspond to? It doesn’t have any matching equivalent in ACD cloud space. Embedding the ACD cloud location data in the placeholder file still wouldn’t help, because that doesn’t address what to do with the cloud copy of the file. At best the odrive client would just note the move to non-mapped local space and pretend it didn’t happen. More likely it would see “oh, there used to be a file named ~/odrive/ACD/ebooks/fiction/hamlet.pdf.cloud and now it’s not there, I should delete it from the corresponding {amazon cloud}/ebooks/fiction/hamlet.pdf location.”

Good insight @thealanberman.

@amazon1, placeholders are definitely an interesting concept, and the ideas surrounding them can incite some very creative and unconventional thinking about what’s possible. I understand the ideas you are bringing up, and they are rooted in some good logic.

It goes without saying that there is a fair amount of complexity in creating a universal sync system. Explicit boundaries and a necessary predictability need to exist for odrive to make the right decisions. Sync is about making disparate targets consistent with each other and with an authoritative source. The flow has to work in all directions and the relationships need to be strictly maintained. Every object being synced, in each of the separate sources, needs to have an understood relationship with the authoritative source, at all times.

Because of these requirements, handling arbitrary relocation of an object is very challenging. The relationships have to be maintained so that the sync engine can understand where the object went, what it looks like now, and how it corresponds to the data it was previously in a relationship with. Not only that, but it also has to understand its relationship to its parents. What will it have to do if you move it around again, or move its parent structure around? What if you rename it? What if you sync it locally and then change it, move it around, or delete it?

Let’s take the example of storing the metadata in the file itself and using that to determine where it is supposed to exist in the sync universe. It’s possible this could work, to a degree, until you start thinking about how to continue tracking this file through its life:
The sync engine would need to scan the entire filesystem, all the time, to pick up possible, arbitrarily located placeholders. Then it has to open up each placeholder to read the content and determine what it relates to. Possible, although really inefficient. Then what happens when you rename or move the file on the remote side? How do you reflect that change on the local side? How do you even know what this “new file” that just showed up actually corresponds to?
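
Just to illustrate the kind of brute-force pass that implies, here’s a hypothetical sketch in Python (not anything odrive actually does):

```python
import json
import os

# Hypothetical: continually walk the whole filesystem looking for arbitrarily
# placed placeholders, then open each one to read its embedded mapping info.
def scan_for_placeholders(root="/"):
    found = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith((".cloud", ".cloudf")):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "r") as fh:
                        meta = json.load(fh)              # read embedded metadata
                    found[path] = meta.get("cloud_path")  # hypothetical field name
                except (OSError, ValueError):
                    continue  # unreadable, or not actually a placeholder
    return found
```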

To take it further, what happens when you sync that file locally? There is now no more metadata anchor, since you have wiped it out and replaced it with the actual file. If you now edit it, what should the sync engine do? What if you move it, rename it, or delete it? There is nothing to anchor it into the sync relationship anymore. At this point you would need an alternate, co-existing method to track this file, independent of the placeholder file metadata. Now you have two independent tracking systems that are needed for sync to work… really complex. With that second tracking system you will still need to be monitoring the entire filesystem to figure out if anything moves anywhere.

Even further, what do you do inside the odrive folder now? Do you continue to represent the file as it looks in the cloud? It would be strange not to, as you do want an official view of that locally. Now you can have at least two placeholder files pointing to a single remote object. Copy the placeholder and now you have three, and so on. Each one needs to be tracked against the remote object. Anytime a remote operation is done to the file, it needs to be reflected locally, to all of the local instances. Change a local instance, and it needs to be reflected to the other locals and the remote… and I think my head just exploded :scream:

Even leaving all of this aside, the UX implications are really ugly. The user now has to remember individual file relationships, since they have no path similarities or any solid remote<->local path relationship that makes visual sense. You will have cloud-backed files mixed with pure-local files and then need to remember which is which. We think that the concept of placeholders is complex enough… :slight_smile:

Hi thealanberman:

Your use-case example greatly aids my understanding of the way oDrive works… ahead of my playing with it to discover the same. The big design philosophy you opened my eyes to is that the oDrive “placeholder” object actually transcends that name. With a “normal” placeholder object (like a soft link back to a real object sitting on a Linux/Win cloud server/host), I don’t expect that anything I do with the object in my userland space will have any effect on the real object back in the cloud… in particular, I don’t expect to have the ability to move the real object within its real cloud/server directory tree space by moving the placeholder object around my desktop directory tree.

With oDrive enabling this kind of object movement-linking, its client-side desktop “placeholders” are really something more than placeholders. I’m not sure what to call these… I guess there are “mere desktop placeholder objects” and “oDrive Movement-Enhanced Desktop Cloud Placeholder Objects ™”… ha-ha!

In my idealization of “mere placeholders”, I never expect the ability to move real objects out on the cloud/server… At some common user level, the particular location of any cloud/server real object is an abstraction layer to me… I want my desktop placeholder object to isolate me from the real object’s directory tree location out on the cloud/server. The oDrive placeholder objects, in comparison, give me the additional strength to physically move the remote cloud/server real objects around out there in their real environment… but like El Joral (or some other comic-book Super Hero) once said - “With Strength… Comes Responsibility” - and likewise with this oDrive capability, I’m now forced to start paying attention to the cloud/server object directory tree location, that is, if I want my oDrive placeholder objects to continue working.

If I had the ability to move my desktop placeholder files around my desktop directory tree willy-nilly (and even keep multiple copies of the same placeholder object in multiple/different locations in my desktop environment), I could fluidly/quickly build a myriad of different desktop views onto my cloud storage. The way oDrive works, it looks like I can build the same myriad of desktop views… but not very fluidly: at each desktop location where I want a new view, it can take a significant amount of processing for oDrive to build the new placeholder objects back-linking to the real cloud objects… and then each desktop oDrive placeholder object is FIXED in one spot… if you want to move it to a new location in your desktop directory tree, you essentially have to discard all that previous oDrive processing work and start the onerous re-mapping process all over again.

Maybe these thoughts of mine are a bit too old-hat, and I have to come around to accepting the cloud-era reality that objects on the server/backend might be “moved around” (via oDrive, but also the many “remote cloud mounts” popping up in new software everywhere today) as readily as objects in the desktop environment. I want the flexibility to move my desktop placeholders around my desktop environment willy-nilly… but… I have to admit that it’s actually more of a priority that they simply continue to work.

Hmmm… a new idea: instead of my initial thought to embed the cloud object mapping info into the desktop placeholder… what if we instead embed an index# value that references a mapping record/entry in a locally maintained oDrive DB/list? Where a user conducts their desktop-object and cloud-object “movement activities” under oDrive’s oversight, these location changes are duly recorded and immediately (or at least quickly) available for reference/inspection by placeholder objects. Would a change like this give oDrive users the greater flexibility of moving desktop placeholder objects around willy-nilly (as I’m clamouring for) while also preserving oDrive’s present ability to let users move cloud objects around via their desktop’s regular OS file-movement tools?

Even with this kind of enhancement, however, the scheme is still subject to breaking when a user employs some other tool to move a cloud object in a way the resident oDrive process doesn’t see and isn’t aware of. Maybe this challenge could be solved by providing a facility for oDrive to scan user-indicated important cloud storage areas (NOT the entire cloud environment!) looking for such surreptitious object movements?

Hi Tony:

Does my reply to thealanberman above clarify my understanding on most of the points you’re raising? Essentially, my proposal to embed the mapping within the oDrive desktop placeholder itself happened before I had a better/correct understanding of oDrive enabling the desktop to move objects out on the cloud/server. Now that I understand that, I see how my proposal as originally stated doesn’t hold water.

But… I still desire the ability/flexibility to move my oDrive placeholder objects around in my desktop environment willy-nilly… and my new idea on how to achieve this is mentioned toward the end of my reply to thealanberman…

Instead of embedding a hard/fixed mapping within the desktop placeholder object, we embed an index# (created by oDrive) that points to a mapping entry in a desktop-local DB/list maintaining the link back to the desired cloud object. In this scheme, the location of our desktop placeholders is not important… they can live anywhere in the desktop environment, any number of times. When a desktop placeholder object is called, it determines the latest location of its associated cloud object by presenting its embedded index# to the local oDrive DB/list that maintains the up-to-date mapping info: the latest mapping is looked up in the local DB and returned to the requesting/calling desktop placeholder object.
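
Roughly, the lookup I’m imagining would go something like this (a sketch only - the table and field names are made up, and this isn’t anything oDrive actually does):

```python
import sqlite3

# Hypothetical desktop-local mapping DB maintained by the oDrive process.
# The placeholder file itself would contain nothing but an index# (e.g. 4217).
conn = sqlite3.connect("odrive_local_map.db")
conn.execute("""CREATE TABLE IF NOT EXISTS cloud_map (
                    idx INTEGER PRIMARY KEY,
                    cloud_path TEXT NOT NULL)""")
conn.execute("INSERT OR REPLACE INTO cloud_map VALUES (4217, 'ACD:/goodbot/vids/vob/example.vob')")
conn.commit()

def resolve_placeholder(index_number):
    """Return the latest cloud location for the object a placeholder points at."""
    row = conn.execute("SELECT cloud_path FROM cloud_map WHERE idx = ?",
                       (index_number,)).fetchone()
    if row is None:
        raise LookupError("stale index# - the cloud object may have moved outside oDrive's view")
    return row[0]

print(resolve_placeholder(4217))
```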

This scheme seemingly gives me the willy-nilly flexibility (to move placeholder objects around in my desktop env) I’m looking for… while also preserving (at least) the present (cloud object movement via the desktop) capabilities of oDrive. Of course, cloud object movements that happen outside of the watchful eye of oDrive are again likely to break things here… so oDrive might need some sort of “periodic scanning of cloud object locations” to refresh its local object mapping DB…

Another very kool capability this scheme might buy us is the ability to rename placeholders on the desktop (without necessarily renaming the corresponding object out on the cloud). If the relation between the desktop placeholder and the cloud object hinges only on the embedded local-mapping-DB index#, then the local name of the desktop placeholder object becomes unimportant (for background mapping purposes)… I can change it to anything that makes better sense to me… and if I have multiple copies of the same desktop placeholder object in various locations in my local directory tree, in each place it appears I can give it a different name (one more meaningful to me in its new/different context/location).

Now THIS is the kind of willy-nilly desktop placeholder flexibility I’m clamouring for !

Your thoughts?

Okay, I agree the word ‘placeholder’ may be a poor choice of term. Let’s call the .cloud and .cloudf files Cloud Representational Objects (CROs).

While I think I understand the technical aspects of what you’re asking for… DB reference numbers inside each CRO, rather than the userland file name and location, being the key value that oDrive looks at… I’m having a hard time understanding why this would be useful. What is the workflow situation where it would be useful to move or rename a bunch of 0-byte CROs such that they no longer match their corresponding cloud-based file and folder structures?

Tony:

You questioned:

In my latest proposed scheme, oDrive doesn’t need to know about the existence of any of these many newly copied/created desktop placeholders… UNTIL… one of them first calls in with a mapping index# request. At that point oDrive needs to ascertain and record the local directory tree location of the calling desktop placeholder object, and create an entry in its local DB of desktop objects it needs to watch for sync purposes. This process probably wouldn’t be much different from what you’re doing today… somehow oDrive monitors the local object for any change (like a file that was just locally “saved” with a few new edits), and once this happens, it kicks off a background sync with the corresponding object out in the cloud…
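
Loosely sketched, that first-contact registration step might look something like this (all names here are invented by me):

```python
# Hypothetical: the first time a relocated/copied placeholder "calls in" with its
# index#, the local oDrive process records where it now lives and starts watching it.
watched_paths = {}  # index# -> set of local paths to monitor for changes

def register_on_first_contact(index_number, local_path):
    watched_paths.setdefault(index_number, set()).add(local_path)
    # From here, oDrive would monitor local_path for edits/saves and kick off a
    # background sync with the corresponding cloud object when a change appears.
```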

My knowledge of the oDrive universe is still incomplete, and I’m having a hard time understanding the seemingly hard linkage oDrive enforces between file objects residing in similarly structured desktop and cloud directory trees. In many cases (in ALL cases??) it seems that oDrive is striving to eliminate any abstraction between the two.

My local desktop env and my cloud environment (traditionally) represent two different worlds, and I never felt an overwhelming need to keep the two disparate directory tree structures in perfect unison. If oDrive forces this rigid reality, that might rule out my (willy-nilly flexibility) proposals…

I’m still learning over here…

thealanberman:

Thanks for the “CRO’s”. I’ll run with this…

I sense (quickly) two prospective uses for a CRO local renaming capability:

1.) simple (very simple?) cloud object security: rather than offering prospective cloud peepers my real object names like “mypasswordfile.txt”, “myinvestmentaccountnumbers.xls”, “myangrynotetomyparents.doc”, I’d like to store my files on the cloud under lengthy, meaningless strings (with no extension), things looking like MD5 hash values - maybe just sometimes, maybe not all the time. Back in my desktop env, I’d rename the hash-looking CRO back to “mypasswordfile.txt”, etc. (a tiny sketch of this idea follows after this list). I’m not looking for heavy-duty security here… just something very simple and very fast to thwart prospective (non-state?) peepers. Obviously, anyone using serious forensic tools will quickly break this kind of “security”, but I’m not looking to defend myself against the Big Boys here;

2.) I like the idea of being able to use local filenames as effective “tags”… so the same underlying CRO can be given different “tag values” willy-nilly in my desktop env. If I have files related to a project with several people working on it, the ability to give the same CRO different (more useful/meaningful) names would help me find it down the road when I’m searching through thousands of similar files…
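
For use #1, I’m picturing something as simple as this (purely a sketch of my own idea, not a feature oDrive offers):

```python
import hashlib

# Hypothetical: store the file in the cloud under a meaningless, hash-looking name
# (no extension), and keep the human-readable name only on my local desktop CRO.
def obfuscated_cloud_name(real_name, salt="my-private-salt"):
    return hashlib.md5((salt + real_name).encode("utf-8")).hexdigest()

print(obfuscated_cloud_name("mypasswordfile.txt"))
# -> a 32-character hex string; locally I'd rename the CRO back to "mypasswordfile.txt"
```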

@amazon1, From what I can tell, it sounds like you aren’t looking for sync, you’re looking for something more like cloud backup.

The problem with other sync clients (Google Drive, Dropbox, etc) is that if you have 2TB of data in your cloud storage, you have to have a corresponding 2TB drive locally. This is the problem oDrive is specifically trying to address. How do you sync an infinite amount of cloud data from a laptop with only 100GB of drive space?

The answer is, you don’t sync everything all the time, you sync only the stuff you need at any given moment. Hence the “unsync” concept that oDrive has introduced.

oDrive’s strength is as a sync client. To borrow from Allie Brosh, it syncs “all the things”. If you move something locally, that change is sync’d remotely. If you want a completely different folder structure in the cloud than what you have locally, you’re probably better off using a one-way uploader, rather than synchronization. (See Amazon’s in-house uploader application.)

With regard to your desire to obfuscate the data you have in your cloud storage while still maintaining whatever naming schemes you wish locally, it sounds like you’re looking for encryption. I believe oDrive is working on (maybe already available?) an encryption feature. Alternatively, I believe products like Boxcryptor have been doing this sort of thing for some time.

thealanberman:

The reason I forked over $ for a one year pass to this party is that I got really excited when I first experienced the oDrive placeholder objects (and their corresponding sync/unsync “compression” behaviors) in my desktop env.

I eventually figured out that I can work these syncs in both directions (this was a revelation to me!), either to or from the cloud, to create local CROs. This let me create “C”-resident CROs for all the data migrating off my stack of ~12 different HDs onto ACD via oDrive (the uplink to the cloud via my residential Comcast/Xfinity net service is relatively slow… so this will actually take a few months to complete).

I’m actually very happy with the way this is working. My $ to join this club for 1 year was very well spent for this excellent functionality.

With oDrive’s present ability to recreate CROs anywhere in my local directory tree pointing back to anywhere in my ACD directory tree… my suggestions to change or enhance the local CRO implementation (such as my oft-requested “willy-nilly flexibilities”) may be practically superfluous.

I don’t want to encrypt my cloud content… I don’t want the storage overhead, and I’m not looking to attract undue attention from our benevolent BBs (like most other formats, encrypted container files carry header info that essentially identifies what they are). Simple obfuscation of the file name and extension is the way to go for me… but I’ll give this up (for now) to preserve the working functionality of oDrive… maybe I can hack a personal solution for this down the line…

I’m still a bit confused by your remark here (and similar remarks by others elsewhere) that if I [quote=“thealanberman, post:9, topic:912”]
want a completely different folder structure in the cloud than what you have locally, you’re probably better off using a one-way uploader, rather than synchronization
[/quote]

I create a new folder anyplace I want in my local directory tree, right-click it, select “Sync with oDrive”, indicate the path to my ACD content, and the magic happens. I don’t see any necessary commonality between these two separate (local vs. cloud) directory tree structures… and oDrive handles this desired mapping correctly without any problems. My local path may be something like “C:/Users/BB/MyVideos/Vobs/” and my cloud path something like “ACD:/goodbot/vids/vob”. So these directory tree paths are materially different from one another, yet the oDrive mapping works correctly as expected.

So given this simple example, why is it said that the local and cloud directory tree structures have to be the same? Maybe the two separate tree structures only need to match if I wish to (more fluidly) move CROs around on my desktop? Perhaps I’m swapping that flexibility for my desired ability to put the oDrive CROs anywhere on my desktop. It’s like: “OK… you want to put the CROs ANYWHERE? OK… you can do that… BUT then you have to KEEP them there… no moving these CROs around!”

I’m getting this. So if I DO eventually want to move the CROs someplace else in my local directory tree, I essentially have to delete the previous CROs (remove the sync) and recreate new CROs at the newly desired local location. That’s not asking too much, and so far this process has been reasonably quick for me. I can live with this… it’s really fine the way it is.

Appreciate the discussion from everyone here. The intricacies of the sync model and our current reluctance to muddy the UX waters with even greater remote<->local abstractions mean that we won’t be pursuing this type of unconstrained model. However, taking a step back, I think this type of conversation lends itself to an over-arching idea (or philosophy, if you will) of “living in the cloud”.

With the introduction of cheap, theoretically unlimited storage, the lines can be blurred almost completely between “local” storage and “remote” storage. For example, what if all of your non-application, non-OS data existed in the cloud? If all of the data you cared about was in Amazon Drive, you could actually move things around to any location you wanted. The difference, of course, would be that the local structure moves would correspond to remote structure moves (and I could maintain my sanity :slight_smile: ).

As an example, I already have my default OS folders all mapped to corresponding locations in Amazon Drive. So, my Desktop, Documents, Downloads, Photos, Videos, Music local folders are all mapped to corresponding folders in Amazon Drive. Generally speaking, most of the files I work with are in these locations. This has essentially transformed my Neanderthal-ish system into an evolved “cloud” system, without me having to change anything. Everything I touch is automatically synced up to the remote storage and down to my other systems. I am “living in the cloud”, or at least starting to.

With this type of setup I can move any placeholders to any other locations that are mapped. So I can move a placeholder from my Downloads folder to my Documents folder or to my Desktop. This move is reflected in the cloud and across all of my devices without any data needing to be moved. I plan to extend the mapped folders even further, to encompass all of the user data I have locally. When that is done, I can move anything to any other location and have instant reflection on every device and on the authoritative storage source (Amazon Drive).

With odrive I can attain a sort of “storage transcendence” while still functioning within the organizational constraints and defaults of today’s Operating Systems. To me it was a real ah-ha moment when I saw how seamless and natural my cloud storage use had become. To me this obviates the need to try to mingle cloud-backed files with local-only files and facilitate unconstrained (i.e. willy-nilly) remote <-> local relationships.
