I like this passage from Wikipedia:
They may or may not be generated from random (or pseudo-random) numbers. GUIDs generated from random numbers normally contain 6 fixed bits (these indicate that the GUID is random) and 122 random bits; the total number of unique such GUIDs is 2^122 (approximately 5.3×10^36). This number is so large that the probability of the same number being generated randomly twice is negligible; however other GUID versions have different uniqueness properties and probabilities, ranging from guaranteed uniqueness to likely duplicates. Assuming uniform probability for simplicity, the probability of one duplicate would be about 50% if every person on earth as of 2014 owned 600 million GUIDs.
https://en.wikipedia.org/wiki/Globally_unique_identifier
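To see where a 50% figure like that comes from, here is a quick birthday-problem sanity check in Python. It's just a sketch: the 7.2 billion world-population figure is my own assumption for "every person on earth as of 2014", and the result lands in the same ballpark as the quoted number.

```python
import math

# Birthday bound: with N possible values, you need roughly
# sqrt(2 * N * ln 2) random draws for a ~50% chance of a duplicate.
N = 2 ** 122                                 # distinct random (version-4) GUIDs
n_half = math.sqrt(2 * N * math.log(2))

print(f"total GUIDs:          {N:.3e}")      # ~5.3e36
print(f"~50% duplicate after: {n_half:.3e}") # ~2.7e18 GUIDs

# Spread over ~7.2 billion people, that's a few hundred million
# GUIDs per person -- the same order of magnitude as the quote.
print(f"per person (7.2e9):   {n_half / 7.2e9:.3e}")
```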
If you are truly concerned, you always have the option of adding collision detection: if you detect that a GUID is already in use, assign a new random GUID and repeat until no duplicate is found. It's much like collision resolution in a hash table. There is a performance penalty, but the problem is solvable; a sketch follows below.
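A minimal sketch of that retry loop in Python, assuming an `existing_ids` container stands in for whatever store holds the IDs already in use (in a real system this check would be a unique-index lookup in the database):

```python
import uuid

def new_unique_guid(existing_ids):
    """Generate a random GUID, retrying until it isn't already in use.

    `existing_ids` is any container supporting `in` (here a set);
    with 122 random bits the loop body almost never repeats.
    """
    guid = uuid.uuid4()
    while guid in existing_ids:  # astronomically unlikely to iterate
        guid = uuid.uuid4()
    return guid

# Example usage:
seen = {uuid.uuid4() for _ in range(1000)}
guid = new_unique_guid(seen)
seen.add(guid)
```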
Update
I can understand your concern about randomness, but keep in mind that some GUID versions (such as version 1, which is time-based) are generated by a structured algorithm and come out in roughly sequential, chronological order. That concern is minimal; I would be more concerned with the performance degradation of using a 128-bit value as a primary key.
Also from Wikipedia:
GUIDs are commonly used as the primary key of database tables, and with that, often the table has a clustered index on that attribute. This presents a performance issue when inserting records because a fully random GUID means the record may need to be inserted anywhere within the table rather than merely appended near the end of it.
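If that clustered-index scatter is the worry, one common mitigation is a time-prefixed ("COMB"-style) GUID, where the most significant bits come from a timestamp so later values sort later and inserts stay near the end of the index. A minimal sketch, with an illustrative layout loosely following the newer UUIDv7 idea rather than any classic GUID version (databases often ship their own variant, e.g. NEWSEQUENTIALID() in SQL Server):

```python
import os
import time
import uuid

def sequential_guid():
    """COMB-style GUID: a 48-bit millisecond timestamp in the high
    bits followed by 80 random bits, so values generated later sort
    later and new rows land near the end of a clustered index."""
    millis = int(time.time() * 1000) & ((1 << 48) - 1)  # 48-bit timestamp
    rand = os.urandom(10)                               # 80 random bits
    return uuid.UUID(bytes=millis.to_bytes(6, "big") + rand)

# GUIDs created later compare greater, so inserts append rather
# than scatter across the index:
a, b = sequential_guid(), sequential_guid()
print(a < b or a.bytes[:6] == b.bytes[:6])  # True (ordered, or same ms)
```

The trade-off is that you give up some randomness (the timestamp bits are guessable) in exchange for append-friendly insert behavior.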