It's a misleading message to an extent.
What has happened is that the internal data structures of the workspace have become corrupt.
This ends up as the code (in the `tf` command, Visual Studio, et al.) that loads those data structures failing to read them from the relevant files, which surfaces as an error about a schema version problem.
In the case that I experienced, this was because the machine hosting the workspace ran out of disc space while doing various operations upon the workspace (check-outs, check-ins, adding pending changes; it was actually a bunch of workspaces being used by TFS 2017 build agents, with multiple active builds).
This corrupted parts of the data held in the files under the hidden `$tf` subdirectory (a workspace on a TFS 2017 build agent always being a local workspace), because source control wasn't able to rewrite/extend those files.
Other answers here discuss partly retaining some of the files, based upon more specific knowledge of what has not been corrupted (such as preserving the internal files storing pending changes if one wasn't creating any pending changes), but the basic idea is that one needs to reset all of the stuff in `$tf` to a sane state of some kind.
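Whatever the cause, before resetting anything it is worth confirming which workspace actually owns the broken directory. A couple of read-only commands for that, run from somewhere that the `tf` command is available (the local path and collection URL here are placeholders matching my build-agent example further down):

    # Show the working folder mapping that covers the broken local path
    tf vc workfold 'X:\Agents\07\_work\1138\s'
    # List the workspaces registered for this machine against the collection
    tf vc workspaces /computer:* /owner:* /collection:http://tfs:8080/tfs/DefaultCollection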
In my case, I had the disadvantage of multiple potential causes and no firm knowledge of which parts of `$tf` were corrupted, but I conversely had some advantages:
- It being a TFS build, arranged to build from the build agent's `s` (source) directory into its `a` (artifact staging) and `b` (binaries) directories, there were not masses of non-source-controlled object and other files in the workspace (which is the `s` directory) that would have ended up as pending additions.
- There were not any pending changes (to actual source files) worth preserving. I could afford to lose all information about source files, and indeed all current locally-stored information about the workspace, and simply run the build again with a fresh, sane, and largely unpopulated workspace. I did not even need to restore source files and directories for the whole workspace, as the first task in any TFS ("vNext") build is a "Get Sources" task that uses (variously) `tf vc scorch`, `tf vc undo`, and `tf vc get` to check out the right source version (see the sketch just after this list).
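For reference, the undo-and-get part of what that task does is roughly what one would do by hand from the workspace root; a rough sketch (the changeset number is purely illustrative, and the task's exact arguments differ):

    cd 'X:\Agents\07\_work\1138\s'
    # Discard whatever (possibly bogus) pending changes are left in the workspace
    tf vc undo * /recursive
    # Then fetch the source tree at the version being built
    tf vc get /version:C12345 /recursive /overwrite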
So simply, in Developer PowerShell (Visual Studio being installed on the build machine):

    Remove-Item -Recurse -Force 'X:\Agents\07\_work\1138\s'
    tf vc get 'X:\Agents\07\_work\1138\s'
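(If deleting the whole of `s` also takes out the only record of the working folder mapping, the get may complain that the path is not mapped; re-establishing the mapping first is one way around that. A sketch, with a purely hypothetical server path and workspace name:)

    tf vc workfold /map '$/SomeProject' 'X:\Agents\07\_work\1138\s' /workspace:AGENT07-WS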
(Note that one can always get at the `tf` command in some way on a TFS build machine. Every build agent has a local helper copy of `tf.exe` and its ancillary DLLs in its VSTS "OM" subdirectory.)
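If a Developer PowerShell is not to hand, locating the agent's own copy is simple enough; for instance (the agent root here is just my example path):

    Get-ChildItem -Recurse -Filter tf.exe 'X:\Agents\07' | Select-Object -ExpandProperty FullName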
I possibly could have omitted the `tf vc get` step, but having had trouble with "Get Sources" in the past, I do not trust it to cope robustly with arbitrary manual external alterations, such as the `s` directory being missing when the build isn't configured to delete that entire directory outright itself (as it can be, but was not here).
For the same reason, Microsoft's own "agent maintenance" (another way to clean things up) is quite dodgy, and ends up leaking workspaces on the TFS server (which I have raised a bug with Microsoft about).
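(If you suspect that leakage, the workspaces that have piled up on the server can at least be enumerated, and dead ones removed, with the ordinary `tf` commands; a sketch, again with a placeholder collection URL, workspace name, and owner:)

    # List every workspace the collection knows about, in detail
    tf vc workspaces /owner:* /computer:* /format:detailed /collection:http://tfs:8080/tfs/DefaultCollection
    # Remove one that no longer corresponds to anything on disc
    tf vc workspace /delete 'AGENT07-WS;DOMAIN\builduser' /collection:http://tfs:8080/tfs/DefaultCollection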