Unsyncing and resyncing worked like a charm. I haven’t had any issues running the script while oDrive was closed, and right now oDrive and Amazon are chugging away, syncing the folders I’ve renamed, with no apparent problems. So thanks!!
As for whether any of the files had been relocated: no, not for this folder. All of its files’ names were well-behaved, and the holding folder didn’t receive anything new. Since that kind of conditional might be relevant, I’ll mention that there was one more: each file name is also checked to see whether it already follows the new naming pattern, and if so the file is skipped.
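In case the details matter, the check is roughly this (the real pattern is specific to my naming scheme, so the prefix below is just a stand-in):

```python
import re

# Stand-in pattern for the new naming scheme -- my actual pattern is
# specific to my files, so this prefix is only for illustration.
NEW_NAME_PATTERN = re.compile(r"^renamed_")

def already_renamed(filename: str) -> bool:
    """True if the file name already follows the new naming pattern."""
    return bool(NEW_NAME_PATTERN.match(filename))
```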
For thoroughness, here are the script’s other idiosyncrasies. I can’t see why they’d be relevant to why the glitch occurred, but then I’m also a newb, so I can’t see why my not being able to see it should be particularly relevant either:
(1) The full script was set up to loop through all subfolders of a particular folder; it’s the files of each subfolder that get renamed. This outer loop worked well at first, but for reasons I’ve stopped trying to fathom for now, once it reached the folders with massive numbers of files, it started terminating itself after finishing a folder instead of moving on to the next one.
(2) When I first started the project, having noticed that oDrive occasionally got confused for a while after bulk renames done with a third-party app, I added a delay after each rename or file-move action. The hypothesis was that breaking the changes up would help oDrive keep track of them. I was also definitely operating under the false hypothesis that oDrive would respond to the changes most efficiently if it was active and connected to the web while they were being made. Later I started experimenting with shorter delays to see how well the syncing kept pace. Once the issues on Amazon’s end that started this thread had been resolved, things were going smoothly enough that I removed the delay entirely to see what would happen. The first run went fine: it took a while for all the syncs to carry through to the web client, but there were no issues. The glitch we’ve been discussing the past few days happened the second time I ran the script without a delay. Well, not quite without a delay: the script still contained the delay command, but its duration was set to zero.
(3) Given that I was experimenting with delays and working over a lot of files, each folder was appended to a list in a .txt file once it was complete. That let me stop the script as needed and restart it later: the outer loop checks each subfolder against the list and skips it if it’s already been processed. (I’ve sketched the whole loop below.)
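Putting (1) through (3) together, the script works roughly like this. The file name, the 0.5-second delay, and the renaming rule are all placeholders rather than my actual values, and it reuses the already_renamed check from the earlier snippet:

```python
import os
import time

PROCESSED_LIST = "processed_folders.txt"  # placeholder name for my checkpoint file
RENAME_DELAY = 0.5  # seconds; this is the value I kept dialing down, eventually to 0

def load_processed() -> set:
    """Read the checkpoint file so finished subfolders can be skipped on restart."""
    if not os.path.exists(PROCESSED_LIST):
        return set()
    with open(PROCESSED_LIST) as f:
        return {line.strip() for line in f if line.strip()}

def mark_processed(folder: str) -> None:
    """Append a finished subfolder to the checkpoint file."""
    with open(PROCESSED_LIST, "a") as f:
        f.write(folder + "\n")

def make_new_name(name: str) -> str:
    """Placeholder for my actual renaming rule."""
    return "renamed_" + name

def rename_all(root: str) -> None:
    processed = load_processed()
    for entry in sorted(os.listdir(root)):
        subfolder = os.path.join(root, entry)
        if not os.path.isdir(subfolder) or subfolder in processed:
            continue  # not a folder, or already done on a previous run
        for name in os.listdir(subfolder):
            if already_renamed(name):  # the check from the earlier snippet
                continue  # already follows the new pattern, so skip it
            os.rename(os.path.join(subfolder, name),
                      os.path.join(subfolder, make_new_name(name)))
            time.sleep(RENAME_DELAY)  # pause so oDrive can keep up with the changes
        mark_processed(subfolder)
```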
I don’t know if any of that is helpful. If you’ve read through it all and concluded that it isn’t, sorry for making you wade through it…
And, once again, thanks a lot for the advice! It saved the day.