The git worktree method described in the comments will work on a Unix/Linux system, but probably not on Windows if your different users have different accounts (which they should, for sanity if nothing else). It also has some drawbacks: in particular, while each working tree gets its own index, all the working trees share one single underlying repository, which means that Git commands that must write to the repository have to wait whenever someone else has the repository databases busy. How disruptive this is in practice depends on how your users would actually use the setup.
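For concreteness, here is a minimal sketch of the worktree arrangement under discussion, using illustrative temporary paths rather than real home directories: one repository, with one extra working tree per additional user, all sharing the same object database.

```shell
# Illustrative sketch only: one repo, multiple working trees.
repo=$(mktemp -d)
git init "$repo/main"
git -C "$repo/main" -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "initial commit"

# Each added working tree gets its own index and checked-out branch,
# but everything still lives in the single repository under main/.git:
git -C "$repo/main" worktree add "$repo/user2" -b user2-branch
git -C "$repo/main" worktree list
```

Note that the second working tree's `.git` is just a small file pointing back at the main repository, which is exactly where the shared-database contention comes from.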
It's usually a much better idea to give each user their own full clone, even on a shared server, and even if it's a Unix/Linux system. They can then push to, and fetch from, one more clone on that shared server, which you designate as the "source of truth". There is only one drawback to this method of sharing, which is that each clone occupies its own extra disk space. However, this tends to be minor: when cloning a local repository locally, using a file-path-style "URL":
git clone /path/to/source-of-truth.git work/my-clone
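To sketch the whole arrangement end to end (with illustrative temporary paths standing in for real shared-server paths, and `alice` and `bob` as hypothetical users): one bare repository is the designated source of truth, each user takes a full clone of it from the local path, and all exchange of commits goes through the bare repository.

```shell
# Illustrative sketch only; real paths and names will differ.
shared=$(mktemp -d)

# One bare repository is designated the "source of truth":
git init --bare "$shared/source-of-truth.git"

# Each user makes a full clone via a local file path, so Git can
# hard-link object files instead of copying them:
git clone "$shared/source-of-truth.git" "$shared/alice"
git clone "$shared/source-of-truth.git" "$shared/bob"

# Users then exchange work only through the shared bare repository:
git -C "$shared/alice" -c user.name=Alice -c user.email=a@example.com \
    commit --allow-empty -m "Alice's first commit"
git -C "$shared/alice" push origin HEAD
git -C "$shared/bob" fetch origin    # Bob picks up Alice's commit
```

Because no one ever works directly in the bare repository, the usual caveats about pushing into a non-bare checkout never arise.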
Git will, to the extent possible, use hard links to files to save space. These hard links work quite well, although they "break apart" over time (as files get updated), so the clones gradually take more and more space. This means that every once in a while—say, once a month or once a year, depending on activity—it may be helpful for space purposes to have your users on the shared server delete their clones and re-clone at a time that's convenient for them. This will re-establish complete sharing.
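The periodic refresh might look something like this sketch (again with temporary stand-in paths, and a throwaway commit just to give the repository some content): push everything up, delete the clone, and re-clone so that the new clone hard-links the object files again.

```shell
# Illustrative sketch of the periodic re-clone; paths are stand-ins.
root=$(mktemp -d)

# Stand-in for the shared source of truth, plus a clone with one commit:
git init --bare "$root/source-of-truth.git"
git clone "$root/source-of-truth.git" "$root/my-clone"
git -C "$root/my-clone" -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "initial commit"

# The refresh itself: make sure nothing exists only in the local clone,
# then delete it and re-clone to restore hard-link sharing.
git -C "$root/my-clone" push origin HEAD
rm -rf "$root/my-clone"
git clone "$root/source-of-truth.git" "$root/my-clone"
```

The only real discipline required is the push-before-delete step: anything not pushed to the source of truth is gone once the old clone is removed.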
(Of course, disk space is cheap these days, and the right way to handle this on a modern Linux server is probably to set up a ZFS pool with, say, a bunch of 8 or 12 TB drives in a RAIDZ1 or RAIDZ2 configuration, giving many terabytes of usable storage at around $30 per terabyte, even counting overhead and the cost of cabling and so on. You'll probably pay more for some high-end Intel CPU than for all the drives put together.)