
As far as I know, near pointers (as opposed to far pointers) in C/C++ hold an address value that is much smaller than the actual address in RAM. So if I keep incrementing a pointer (an int pointer, or a pointer to any object type), then at some value it will roll over. My question is: after it rolls over, is the value it points to valid or not (assuming I have a large amount of data in memory)?

I know this is a strange question to ask, but I am in a situation where I am continuously allocating and deallocating memory, and I am finding that at some point the binary crashes due to an invalid address value like 0x20, 0x45 or 0x10101.

I was wondering whether the issue is due to the pointer value rolling over: the pointer ends up holding an invalid address because of the rollover, and the program crashes when that address is accessed.

I hope the situation I am describing matches the question I am asking. Even if they differ, I would like answers to both. I tried searching for "continuously incrementing pointers" but didn't find an answer.

EDIT: This is new code compiled with G++ 4.1.2 20080704 (Red Hat 4.1.2-48) on Red Hat Linux.

The code is too large to share, but I can summarize it in words. There are 3 threads:

  1. First thread: allocates an Alert class object and pushes it into the queue.
  2. Second thread: reads an Alert from the queue and processes it.
  3. Third thread: releases the memory allocated to Alert objects after 20-30 minutes of processing.

I have already verified that the 3rd thread does not deallocate an object before it has been processed by the 2nd thread.

But since the Alerts are generated continuously (thousands per second), I was suspecting the issue mentioned in the main question. Points to note in my implementation: I am using a Linux pipe as the queue between threads. I push only the address value of the object from the sender side, and I make sure not to delete the object there. Is this a possible source of corruption? Following is the code for this particular task:

    Alert* l_alert = new Alert(ADD_ACTION,
                               l_vehicleType,
                               l_vehicleNo,
                               l_projPolyline,
                               l_speed,
                               l_slotId);
    m_ResultHandler->SendToWorker(&l_alert);

Implementation of queue functions:

    S32 SendToWorker(queueDataType *p_instPtr)
    {
            S32 ret_val=SUCCESS;

            QueueObj.Lock();
            ret_val = QueueObj.Signal();
            QueueObj.push(*p_instPtr);
            QueueObj.UnLock();

            return ret_val;
    }

    S32 GetFromReceiver(queueDataType *p_instPtr)
    {
            QueueObj.Lock();
            while(QueueObj.size() == 0)
                    QueueObj.Wait();

            *p_instPtr = QueueObj.front();
            QueueObj.pop();
            QueueObj.UnLock();

            return SUCCESS;
    }

Receiver End:

m_alertQueue->GetFromReceiver(&l_alert)
  • I don't think pointer overflow behavior is defined in any useful way. Also, pointers past an allocated object aren't valid. See also http://stackoverflow.com/q/5037125/20270 – Hasturkun Sep 09 '13 at 10:36
  • I don't know what you mean by "near" or "far" pointers, as I thought all that nonsense died with MS-DOS, but you will need to provide some code. You cannot just use whatever address you like and expect it to work as not all virtual memory will mapped to a physical page. – trojanfoe Sep 09 '13 at 10:38
  • Are you compiling some very old code, or cross-compiling for an embedded system? The distinction you mention between near and far pointers hasn't been relevant since the introduction of the large memory model with the 80386. – RobH Sep 09 '13 at 10:39
  • If you continuously increment a pointer without doing anything else to it, you will eventually have it pointing to an invalid location (unless you consider the entire address space to be valid, which is highly unlikely). – mah Sep 09 '13 at 10:43
  • What makes you even think that it might *not* be invalid? Of course it will be. The question doesn’t even make sense to me. – Konrad Rudolph Sep 09 '13 at 10:44
  • @RobH: This braindamage is still to be seen in really small 8 and 16-bit embedded uCs with internal SRAM that can be indexed with a 16-bit register. `near` pointers therefore saved several instructions needed for 32-bit address arithmetic. – marko Sep 09 '13 at 10:45
  • With small numbers like that after "continuously allocating and deallocating memory", it feels more like allocation failed and gave you a null pointer (which is typically represented by address zero). How are you allocating memory, and how are you checking that the allocation succeeded? And is this C, C++, or some ghastly hybrid of the two? – Mike Seymour Sep 09 '13 at 10:52
  • @MikeSeymour: it is purely C++ code. The allocated pointer has never been NULL, so I am never informed whether it was properly allocated or not. I have also been monitoring the RAM usage in the system; it is underutilized, so I see no reason the allocation would fail. – Vivek Agrawal Sep 09 '13 at 11:16
  • @VivekAgrawal: OK, so `new` shouldn't give you a null pointer unless you're building with weird compiler settings. But it certainly does sound like you're getting a null pointer from somewhere; I guess you'll just have to look at the backtrace in your debugger when it crashes. (Certainly, there's no concept of "near" pointers on a 32- or 64-bit platform, and if you do have a runaway pointer it will crash at some high-valued invalid address long before wrapping to zero, so I doubt the problem is anything like that.) – Mike Seymour Sep 09 '13 at 11:29
  • @MikeSeymour: Every time the binary crashes, I open the core dump with gdb and backtrace it to find the cause: it is always due to accessing the Alert pointer (which was pointing to addresses like those mentioned above). Another point I noticed: the crash timing is not consistent; it may come after 2 days or on the same day. But I think discussing this issue in detail in this question might be irrelevant now, because the issue I was suspecting (i.e. rollover of the address) is ruled out as far as modern systems are concerned. Thanks everyone for the help. – Vivek Agrawal Sep 09 '13 at 12:04

2 Answers


What is the OS? Are you using virtual memory? The C standard says that a pointer is allowed to point one address past the end of an array (but must not be dereferenced).

Pointing anywhere else is undefined behaviour.


The concept "near" and "far" pointers is a concept that mainly exists in compilers for x86 in 16-bit mode, where a "near" pointer is a 16-bit offset to a default segment of some sort, and a "far" pointer has a segment and offset value in the pointer itself. In 32- and 64-bit OS's, pointers are (generally) just an offset within a flat memory model (all segments are based at address zero).

According to the C standard, a pointer may point "to a single object or an array of elements, and one past that". Anything else is undefined behaviour. One reason for this rule is to support segmented memory, where pointers may not be easy to compare between different segments. In particular, if code has no easy access to a segment's base address (as in OS/2 1.x, which used 16-bit protected mode), and segments CAN overlap, it is impossible to tell whether base address A + offset A refers to the same location as base address B + offset B.

What actually happens when a pointer doesn't satisfy these criteria is, as stated, undefined. In a real-mode x86 environment the practical answer is that it won't crash, and nothing bad will happen if you merely read the memory; but if you write to memory that isn't "yours", something bad could happen. Exactly what depends on which memory you overwrite and what that memory is used for; it's impossible to say without knowing exactly what the memory holds and what value is written to it.

In a modern 32- or 64-bit OS, accessing invalid memory will almost certainly cause the program to crash, because modern OSes have memory protection that prevents wild memory accesses.

  • I think this is a case of wild memory access, but I have never intentionally tried to access memory that is not allocated to me. – Vivek Agrawal Sep 09 '13 at 11:20