
I am reading Operating System Concepts (9th edition), and on page 443 I found interesting considerations about demand paging and how it can affect a process's timing. Here is what it says:

Assume that pages are 128 words in size. Consider a C program whose function is to initialize to 0 each element of a 128-by-128 array. The following code is typical:

int i, j;
int data[128][128];

for (j = 0; j < 128; j++)
    for (i = 0; i < 128; i++)
        data[i][j] = 0;

For pages of 128 words, each row takes one page. Thus, the preceding code zeros one word in each page, then another word in each page, and so on. If the operating system allocates fewer than 128 frames to the entire program, then its execution will result in 128 × 128 = 16,384 page faults. In contrast, suppose we change the code to

int i, j;
int data[128][128]; 

for (i = 0; i < 128; i++) 
    for (j = 0; j < 128; j++) 
        data[i][j] = 0;

This code zeros all the words on one page before starting the next page, reducing the number of page faults to 128.

So I tried to reproduce this on my OS, just to measure the difference in time between the two approaches. Let me start by saying that I am working on Windows 10, and the page size I have set for this test is 16 MB, which is also the minimum allowed in my case. Unfortunately, the first piece of code is problematic:

#include <stdio.h>

int main(){
    const int PAGE_SIZE = 16; //MB
    const int ROW_SIZE = 127;
    const int COL_SIZE = 1024 / sizeof(int) * 1024 * PAGE_SIZE;
    int table[ROW_SIZE][COL_SIZE];

    printf("test");

    return 0;
}

It compiles (gcc) and does not throw any exception even at runtime, but it does not print test. I thought this was really weird, because it is supposed to print something at least. So I tried to debug it and found out that it does not even reach main: the debugger reports Program received signal SIGSEGV, Segmentation fault. I am pretty sure this behaviour is tied to the reduction of the page size, but I can't explain why. The error code returned is -1073741571. I am not an expert on Windows, and at first I thought error parsing was similar to Linux's (my readings kind of confirm that), so I took the 16 least significant bits, but the parsed error code is 253, which does not correspond to any valid error code.

  • You are asking for an automatic variable of `532 676 608` bytes when `int` is 4 bytes. That is a lot to ask – Stargateur Jun 30 '18 at 17:43
  • Do not *ever* try to allocate huge data structures on the stack. The stack is limited in size; in Windows the default is (or was) 1 MiB. And the default page size in Windows is 4 KiB. Also, operating systems distinguish between soft and hard page faults; there is a great difference between "this program wants a new page" (which most operating systems will do in an instant) and "this program wants to get access to one of its old pages which got swapped out to disk" (which requires actually reading it back from disk). – AlexP Jun 30 '18 at 17:43
  • @Stargateur That was exactly the point. I was trying to split `table` between multiple pages. The mistake I made is probably in `PAGE_SIZE`. – Marco Luzzara Jun 30 '18 at 17:50
  • @AlexP Thank you for the additional information. I will dig deeper into it. – Marco Luzzara Jun 30 '18 at 17:52
  • You cannot change the size of a virtual memory page. It is what it is, and it's determined by the actual hardware. Some modern processors and some operating systems allow both "normal" 4 KiB pages and "huge pages" intended to be used for memory-intensive applications such as database management systems; allocating "huge pages" (if possible at all) requires some extra work, you cannot do it with plain `malloc()`. Also, consider that operating systems will go to great lengths to *avoid* swapping active pages out to disk; to see any measurable effect you must make the system run out of RAM. – AlexP Jun 30 '18 at 18:03

0 Answers