This system uses virtual memory per process, so every memory access requires the virtual address to be translated into a physical address. The translation entries are stored in the same physical memory using some variant of a "page table" (https://en.wikipedia.org/wiki/Page_table).
In the variant without a TLB (a TLB of zero entries), every memory access made by the program needs to read a translation entry from the page table before the real access can be made. So the effective (average) memory access time is equal to 2 * main memory access time:
eff_time(TLB_size_0) = 2 * main_memory_access_time
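For example, a tiny Python sketch of that formula (the 100 ns main memory latency here is just an assumed illustrative value, not a number from the question):

```python
# No-TLB case: every access pays one page-table read plus the real access.
MAIN_MEMORY_ACCESS_NS = 100  # assumed illustrative latency

eff_time_tlb_size_0 = 2 * MAIN_MEMORY_ACCESS_NS
print(eff_time_tlb_size_0)  # 200 ns
```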
The TLB is an optimization (see the http://ostep.org book for more details on real-world TLBs) which caches several recently used translations (every translation entry describes one page). In the ideal case all virtual addresses used by the program hit the TLB, and only the TLB latency is added to the memory access time. With a 35-entry TLB this holds for programs (or periods of time) in which no more than 35 pages are accessed.
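In that ideal all-hit case the formula is simply

eff_time(all_TLB_hits) = TLB_latency + main_memory_access_time

and TLB_latency is normally far smaller than a main memory access.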
But when the program makes uniformly distributed memory accesses and uses more pages (is bigger, in page count) than can be stored in the TLB, some accesses need a "page table walk" (for a 1-level page table, 1 additional memory access) to refill a TLB entry. If 1/5 of the program's memory accesses miss the TLB (and 4/5 do not), the mean effective access time is
eff_time(TLB_miss_rate_of_1_over_5) = (1 - 1/5) * full_access_time_with_TLB_hit + 1/5 * full_access_time_with_TLB_miss
where full_access_time_with_TLB_hit is the time of a successful TLB lookup plus 1 main memory access, and full_access_time_with_TLB_miss is the time of an unsuccessful TLB lookup, an access to the page table (a read from main memory), possibly a TLB refill and a retried lookup if your MMU is not optimized, and then the memory access that the application actually requested.
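A small Python sketch of this weighted mean, with assumed illustrative latencies (1 ns TLB lookup, 100 ns main memory access) and the 1/5 miss rate from above:

```python
# Assumed illustrative latencies, not taken from the text above.
TLB_LATENCY_NS = 1           # one TLB lookup (hit or miss)
MAIN_MEMORY_ACCESS_NS = 100  # one main memory access
TLB_MISS_RATE = 1 / 5        # miss rate used in the example

# Hit: TLB lookup + the application's own memory access.
full_access_time_with_tlb_hit = TLB_LATENCY_NS + MAIN_MEMORY_ACCESS_NS

# Miss: failed TLB lookup + one page-table read (1-level page table)
# + the application's own memory access.
full_access_time_with_tlb_miss = (TLB_LATENCY_NS
                                  + MAIN_MEMORY_ACCESS_NS   # page table walk
                                  + MAIN_MEMORY_ACCESS_NS)  # real access

eff_time = ((1 - TLB_MISS_RATE) * full_access_time_with_tlb_hit
            + TLB_MISS_RATE * full_access_time_with_tlb_miss)
print(eff_time)  # 121.0 ns with these assumed numbers
```

So even a 20% miss rate keeps the average close to the hit-case latency, because each miss adds only one extra memory access for the page table walk (with a 1-level page table).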