I do not think that the default behavior of SQL Server is exactly "pessimistic concurrency control". Let us consider the following simple example, which runs under the default isolation level, READ COMMITTED:
-- Connection one
BEGIN TRANSACTION;
SELECT * FROM Schedule
WHERE ScheduledTime BETWEEN '20110624 06:30:00'
AND '20110624 11:30:00' ;
-- Connection two
UPDATE Schedule
SET Priority = 'High'
WHERE ScheduledTime = '20110624 08:45:00' ;
-- nothing prevents this update from completing,
-- so this is not exactly pessimistic
-- Connection one
DELETE FROM Schedule
WHERE ScheduledTime = '20110624 08:45:00' ;
COMMIT ;
-- nothing prevents us from deleting
-- the modified row
Regarding the following statement from the link posted by gbn, "Historically, the concurrency control model in SQL Server at the server level has been pessimistic and based on locking.": my understanding is that prior to 2005 SQL Server provided only the tools to implement pessimistic concurrency. We still had to raise the isolation level to actually get pessimistic behavior, as sketched below; it did not, and does not, happen by default.
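For contrast, here is a minimal sketch of what opting into pessimistic behavior looks like, using the same Schedule table as above; it assumes plain locking, with no snapshot options turned on. Under REPEATABLE READ the shared locks taken by the SELECT are held until the transaction ends, so connection two's UPDATE blocks instead of completing:
-- Connection one
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- (SERIALIZABLE would additionally take range locks, blocking inserts too)
BEGIN TRANSACTION;
SELECT * FROM Schedule
WHERE ScheduledTime BETWEEN '20110624 06:30:00'
AND '20110624 11:30:00' ;
-- the shared locks on the rows just read are now held until COMMIT
-- Connection two
UPDATE Schedule
SET Priority = 'High'
WHERE ScheduledTime = '20110624 08:45:00' ;
-- this UPDATE now waits for connection one's shared locks
-- Connection one
COMMIT ;
-- only now can the UPDATE in connection two proceed
An UPDLOCK or HOLDLOCK hint on the SELECT gives a similar effect for a single statement without changing the session's isolation level.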
I might be wrong, of course. I have sent Kalen Delaney, the author of that MSDN article, a link to this thread. Hopefully she can find a few minutes to comment.
Edit: here is the MSDN definition: "Pessimistic concurrency control locks resources as they are required, for the duration of a transaction. Unless deadlocks occur, a transaction is assured of successful completion." Clearly this is not what happens by default: as my example shows, under READ COMMITTED the shared lock is released as soon as the row has been read, not held for the duration of the transaction.
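One way to see this, assuming SQL Server 2005 or later, is to query sys.dm_tran_locks from connection one while its transaction is still open, immediately after the SELECT:
-- Connection one, transaction still open after the SELECT
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID ;
-- under READ COMMITTED no KEY/RID shared locks are listed;
-- they were released as soon as each row was read.
-- Run the same check under REPEATABLE READ and the row locks are still there.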