The accepted answer fails to answer a secondary aspect of the question, specifically:
> If so, isn't it absolutely essential to get anything on git to work? (Minimally, how else could the "ordering" of the log history be established? Worse case, couldn't I have a binary file corrupted by simultaneous writes?)
So I'll address just that portion.
There are no simultaneous writes in git, and the ordering of history is part of the reason why!
When you push a commit to a remote git repository, it verifies that the commit you are pushing is a descendant of what the repository already has as the head of that branch.
If not, the commit is rejected.
If it is, then git will send over a set of blobs (file data), tree objects, and commits.
It then updates the branch head to point to your new commit.
If the head changed in the meantime, it will once again reject your new commit.
If rejected, you have to pull in the newer changes from the remote repository, and either merge them with your changes, or rebase your changes on top of the new ones (e.g. git pull -r).
Either way, you create a new local commit that is a descendant of what the repository has.
You can then push this new commit to the repository. It is possible for this new commit to be rejected in turn, forcing you to repeat the process.
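Concretely, that cycle looks something like this (the remote and branch names origin and main are assumed, and the server output is abbreviated):

```
$ git push origin main
 ! [rejected]        main -> main (fetch first)
error: failed to push some refs to 'origin'

$ git pull --rebase origin main   # fetch the new head, replay your commits on top
$ git push origin main            # now a simple fast-forward for the server
```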
Files are never overwritten. To git, the file "mybigfile.mpg" is just a blob with a unique ID based on a hash of the file contents (SHA-1, or SHA-256 in repositories that opt into the newer object format). If you change the file, that's a new blob with a new ID.
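You can see this for yourself with git hash-object, which computes a blob ID without even needing a commit; identical content always produces the identical ID, and any change produces a completely different one (the IDs below are what a SHA-1-format repository prints):

```
$ echo 'hello' | git hash-object --stdin
ce013625030ba8dba906f756967f9e9ca394464a

$ echo 'hello world' | git hash-object --stdin
3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```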
The filename is just an entry in a tree object. Trees also have IDs based on a hash of their contents.
If you rename a file (or add one, remove one, etc.), that's a new tree object, with its own ID.
Because these objects are part of the history (a commit includes the ID of the top-level tree being committed, as well as the IDs of its parents), all of these objects (blobs, trees, commits, signed tags) are only ever added to the repository, never modified.
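You can inspect that chain with git cat-file. The output shape below is what git prints, but the IDs are invented placeholders (and abbreviated; real output shows full hashes):

```
$ git cat-file -p HEAD              # the commit object
tree 8f2c1a0e...                    # ID of the top-level tree for this commit
parent 3b9d0e4c...                  # ID(s) of the parent commit(s)
author Alice <alice@example.com> 1700000000 +0000
committer Alice <alice@example.com> 1700000000 +0000

Replace the intro video

$ git cat-file -p 'HEAD^{tree}'     # the tree that commit points to
100644 blob 9d41b8c2...    README.md
100644 blob 5e2a7c1f...    mybigfile.mpg
```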
Git history is always ordered, because it is a linked list, with links pointing to the parents: zero parents for the initial commit, two or more for merge commits, and one otherwise.
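A git log --graph view makes those parent links visible (hashes invented, annotations mine):

```
$ git log --oneline --graph
*   f3a9b21 Merge branch 'feature'    <- merge commit: two parents
|\
| * 7c4d0ea Add the feature
* | 2b81f5c Unrelated fix on main
|/
* 9e0a1d4 Initial commit              <- root commit: zero parents
```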
And these additions don't require any explicit locking, because git checks for conflicts before making a change.
Only the file that contains the ID of the head commit on a branch needs to be locked, and then only for a brief moment between checking for changes and updating it.
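On disk, that file is tiny. As a sketch, assuming the loose-ref layout (refs can also be consolidated into .git/packed-refs):

```
$ cat .git/refs/heads/main
2b81f5c9d4e7a10f3b6c8d2e5f7a9b1c3d5e7f90

# To update it, git writes the new ID into a transient
# .git/refs/heads/main.lock file, re-checks the old value,
# and renames the lock file into place.
```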
Locks in git-lfs address a very different problem. Binary assets usually cannot be merged, and often involve large amounts of work in changing them.
This is especially true of large assets.
If two developers start making changes to the same file, one set of changes will have to be discarded and then recreated, using the other change as a base.
git-lfs locking prevents this from happening by accident. If you encounter a lock, you either wait until later to make your changes, or you go talk to the person who has that lock.
Either they can make the requested change, or they can release the lock and allow you to make your change on top of their changes so far. Then when you're done, you can push your change and release the lock, allowing them to continue with your change.
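That workflow maps onto the git lfs lock, git lfs locks, and git lfs unlock commands (output approximate; it requires an LFS server that supports the file-locking API, and typically the path being marked lockable in .gitattributes):

```
$ git lfs lock mybigfile.mpg        # take the lock before you start editing
Locked mybigfile.mpg

$ git lfs locks                     # see who currently holds which locks
mybigfile.mpg   alice   ID:42

$ git lfs unlock mybigfile.mpg      # release it once your change is pushed
```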
Thus, git-lfs locking lets changes to specific files (the entire development process, not just the file write) be serialized, rather than following the parallel-then-merge paradigm used for textual source files.