Detection of SharePoint/Procore File Changes

Hello,

I’m just going through the process of trying to implement odrive as a sync solution on our Windows Server 2019 file server. Our objective is to sync project-related files to projects in Procore and overhead/administrative files to SharePoint.

Using the “sync to odrive” feature, I’ve been testing a few folders over the last few days. The odrive client is extremely quick to pick up local changes and replicate them to the linked cloud location. Changes made in the cloud location are another story: sometimes they are picked up within a few minutes, and sometimes they take a few hours (if they are picked up at all). Changing the same file, or a different file in the same folder, locally (on the file server) seems to ‘refresh’ the folder and sync anything that hadn’t already come down. I’m running the odrive client as a service without a user logged in, but the same behavior was present even when running it normally with a logged-in user.

Are there any tips or tricks for getting the cloud storage locations to better communicate changes to odrive? Are there any solutions on the odrive side, other than performing a frequent ‘full scan’? Would using the Agent/CLI provide additional options? My IT knowledge is limited, so I’ve steered clear of that deployment option so far.

Thank you.

Hi @mgcadmin,
Thanks for reaching out about this. What you are seeing is expected behavior, but it can be confusing because different services have different capabilities that odrive can utilize.

There are a couple of methods that can be used to increase responsiveness, but I want to make sure I explain why things are the way they are, so you have a good understanding of the trade-offs:


Some services offer an API that can be used to ask for recent changes. When available, odrive will query this type of API at frequent intervals, allowing it to see remote changes fairly quickly.

For services that do not offer an API for requesting changes (like Procore), or that offer one but are very restrictive in how often it can be used (like SharePoint), odrive instead needs to periodically scan the storage service for changes. This can be very expensive, since odrive needs to traverse each remote folder, look at the current contents, and then determine if anything has changed since the last time it looked. Doing this too often can drastically increase local overhead on the system, and it can also trigger the storage service to severely throttle API requests, or even start refusing them entirely.

By default, odrive will perform a remote scan when it starts and then every 14 hours after that. As you have seen, local changes will trigger remote refreshes in those specific folders. odrive will also remotely refresh a folder when the user navigates into it via Explorer or Finder, and a user can trigger a remote refresh with a right-click->refresh. Because of these “on-demand” refreshes, the latency for picking up remote changes on the services that cannot be regularly queried is generally not too noticeable during everyday use. Since your use case is on a server, you are not likely to be interacting with the local system much, the way a typical end-user would, so the delay is more noticeable.


With that out of the way, here are the ways you can improve responsiveness:

  1. We have an advanced configuration option that will allow you to decrease the time between remote scan intervals. You can take a look at our documentation here for more information on that setting: https://docs.odrive.com/docs/advanced-client-options#remotescanintervalmins
  2. You can use the CLI to create a simple script to send manual refresh commands for the structure(s) you are interested in refreshing.

Method #1 is easy to do, but it can have consequences (as noted above) if you have very large structures that odrive will need to scan through, so you just need to take that into account.
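
For example, the 14-hour default works out to 840 minutes, so dropping the interval to 2 hours would mean a value of 120. As a minimal sketch (please verify the exact file name and placement of the settings file against the advanced-options documentation linked above), the setting goes into the JSON configuration file described there:

```json
{
    "remoteScanIntervalMins": 120
}
```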

Method #2 gives you more precision and flexibility: you can target only certain sections of the data structure (to reduce overhead and load on the remote storage), refresh certain sections more frequently than others, vary the frequency based on the time of day/week/month, etc…
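
To give you an idea of what Method #2 could look like, here is a minimal sketch that scripts the CLI’s refresh command. The CLI location, helper name, and folder paths are just placeholders for illustration, and it assumes the folders you want to refresh are already synced locally, so they show up as real directories that can be walked:

```python
# refresh_tree.py - minimal sketch for scripting odrive CLI refresh commands.
# Assumes the odrive client is running and the odrive CLI (odrive.py) has been
# downloaded per the odrive CLI documentation. Paths below are hypothetical.
import os
import subprocess
import sys

ODRIVE_CLI = r"C:\tools\odrive\odrive.py"  # hypothetical CLI location

def refresh_tree(root):
    """Walk a locally synced folder tree and send a refresh for each folder."""
    for dirpath, _dirs, _files in os.walk(root):
        subprocess.run(
            [sys.executable, ODRIVE_CLI, "refresh", dirpath],
            check=False,  # keep going even if one refresh fails
        )

if __name__ == "__main__":
    # Example: refresh one project's structure on the file server.
    refresh_tree(r"E:\odrive\Procore\Projects\Project-A")  # hypothetical path
```

You could then run a script like this from Task Scheduler on whatever cadence makes sense for each part of the structure.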

Sorry for the wall of text here. Just let me know if you would like any more details on the above information, or if there is anything else I can help with.

Tony,

I appreciate the quick and detailed response!

I’m going to give the CLI/manual refresh command method a shot. That way I can refresh the folders that are frequently changed via cloud storage more often, and skip the ones that are typically only accessed via cloud storage for reading.

How does Procore respond to frequent refresh requests, given that there’s no changes API? Our project files are the ones accessed/written to most frequently via cloud storage. For example, what if we refreshed the folder structure for a given project (approximately 1,400 folders and 5,200 files) every 6 hours? We could stagger the requests to refresh a different project every hour.

Thank you.

Hi @mgcadmin,
Glad to help!

For Procore, that frequency with that many folders shouldn’t cause any problems. Do you have an idea of how many projects/folders you would need to monitor, in total? I’m just trying to get a sense of the scale.
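
If it helps, the staggering you described could be as simple as something like this, building on the hypothetical refresh_tree() helper from the earlier sketch (project paths are placeholders):

```python
# stagger_refresh.py - minimal sketch of "refresh a different project each hour".
import time

from refresh_tree import refresh_tree  # hypothetical helper from the earlier sketch

# Hypothetical local paths for the active projects.
PROJECTS = [
    r"E:\odrive\Procore\Projects\Project-A",
    r"E:\odrive\Procore\Projects\Project-B",
    r"E:\odrive\Procore\Projects\Project-C",
    r"E:\odrive\Procore\Projects\Project-D",
]

while True:
    for project in PROJECTS:
        refresh_tree(project)  # refresh this project's folder tree...
        time.sleep(60 * 60)    # ...then wait an hour before the next one
```

Scheduling each project as its own recurring task in Task Scheduler would work just as well, if you would rather not keep a script running continuously.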

Tony,

We have 4 active projects at the moment, totaling 4,500 folders and 15,600 files. This is pretty typical for us. I would say that ‘peak’ potential usage for us is 5 projects at roughly the single-project numbers listed above, so 5 projects totaling 7,000 folders and 26,000 files.

Thank you.


Thanks @mgcadmin!

Just let me know if you need any assistance as you start looking at using the CLI. If there are details you would rather leave out of the forum here, you can message me directly (click on my name and select “Message”) or e-mail us at support@odrive.com.
