
I set a lot of breakpoints in lldb for a C application installed on my macOS machine, mostly in the same function. The next day, when I went back to the application and started setting breakpoints in that same function again, the break no longer occurred inside the application function but in one of the application's underlying libraries, and it keeps doing this every time I try to break in the function (i.e. it stops in the underlying library). I'm also unable to reach the desired function by stepping; every step just moves forward inside the underlying library.

Update:

The function I am setting the breakpoint in is called from within a signal handler. For example, when I send a SIGINT, the signal handler calls some functions that clean up the application, and I am setting the breakpoint on one of those cleanup functions. Sometimes lldb stops in the function where I set the breakpoint (with stop reason = breakpoint 1.1); other times it stops in the underlying/included event-handling library with stop reason = signal SIGSTOP. In the latter case, if I press "c" (hoping to continue out of the event-handling library and on to the breakpoint in the application), it only sometimes reaches the desired breakpoint; other times it just prints "Process 41524 resuming" and I never get there.
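
To make the setup concrete, here is a minimal sketch of the kind of code involved (the names cleanup_resources and handle_sigint are made up for illustration; they are not the real names from the application):

#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t cleaned_up = 0;

/* the breakpoint is set on a cleanup function like this one */
static void cleanup_resources(void)
{
    cleaned_up = 1;        /* stand-in for the real cleanup work */
}

/* SIGINT handler: runs the cleanup, then exits */
static void handle_sigint(int signo)
{
    (void)signo;
    cleanup_resources();
    _exit(0);
}

int main(void)
{
    signal(SIGINT, handle_sigint);
    for (;;)
        pause();           /* block until a signal arrives, as an event loop would */
}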

Leahcim
  • Are you using Xcode to set the breakpoints or command line lldb? If Xcode, it does cache breakpoints, but you can disable them in the Breakpoints Navigator. – Jim Ingham Mar 02 '17 at 02:43
  • If this is command line lldb, it does no caching from run to run of lldb itself, though it does keep the breakpoints you have set active and reset them every time you re-run the program you are debugging. – Jim Ingham Mar 02 '17 at 02:44
  • If you can give the output of the "break list" command and the breakpoint number you're unexpectedly hitting, maybe we can see something funny? – Jim Ingham Mar 02 '17 at 02:45
  • I updated the title of the question to address the problem I had once it was figured out that it wasn't a cache related issue. – Leahcim Mar 02 '17 at 18:17
  • @JimIngham The solution I posted in my answer works some of the time, but then it reverts to the behavior described in the OP: it's stopping in the underlying library again. Any suggestions as to what the problem might be? – Leahcim Mar 02 '17 at 19:22

2 Answers


Ah, then I don't think the problem was with breakpoints, but with whether your signal handler was actually getting called.

Most debuggers have some way to control what happens when a signal is received. In lldb this is done through the process handle command. For instance:

(lldb) process handle SIGSTOP
NAME         PASS   STOP   NOTIFY
===========  =====  =====  ======
SIGSTOP      false  true   true 

That means lldb will stop when your process is given a SIGSTOP, and will notify you about the SIGSTOP, but will NOT pass the SIGSTOP on to the program you are debugging (so your handler will not get called for SIGSTOP). Running process handle with no arguments will list the behaviors for all signals.
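
The three columns correspond to the -p (pass), -s (stop), and -n (notify) options, so you can set all of them in one command. For instance, this sketch (the option syntax is real, the output shown is illustrative) would hand a SIGINT straight to the process without stopping or notifying:

(lldb) process handle SIGINT -p true -s false -n false
NAME         PASS   STOP   NOTIFY
===========  =====  =====  ======
SIGINT       true   false  false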

We don't pass SIGSTOP by default because the debugger uses it for its own purposes, so you might get calls to your handler that didn't come from "real" SIGSTOPs. The same is true, for the same reason, of SIGINT:

(lldb) process handle SIGINT
NAME         PASS   STOP   NOTIFY
===========  =====  =====  ======
SIGINT       false  true   true 

You can easily change this behavior, for instance for SIGINT:

(lldb) process handle SIGINT -p true
NAME         PASS   STOP   NOTIFY
===========  =====  =====  ======
SIGINT       true   true   true 

Then the debugger will pass the SIGINT on to the process, and it will stop in your handler.
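
As a concrete way to try this out (a sketch only; 41524 is the process id from your transcript, and the lldb output is abbreviated): with the process running under lldb, deliver a real SIGINT from a second terminal, then continue past the stop that reports the signal so the handler runs and hits your breakpoint.

# second terminal: send a real SIGINT to the process being debugged
kill -INT 41524

# lldb session: the first stop just reports the signal's arrival
Process 41524 stopped
* thread #1, stop reason = signal SIGINT
# continuing now delivers the SIGINT, the handler runs, and the
# breakpoint set on the cleanup function is hit
(lldb) c
Process 41524 resuming
Process 41524 stopped
* thread #1, stop reason = breakpoint 1.1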

Jim Ingham
  • Ok, thank you. I assume nothing has changed (i.e. it can't be configured before starting a process) since your 2013 answer to http://stackoverflow.com/questions/16989988/disable-signals-at-lldb-initialization (I ran `help process handle` and didn't find any more info). – Leahcim Mar 07 '17 at 16:37
  • Yes, that hasn't gotten fixed yet. – Jim Ingham Mar 07 '17 at 18:30

As mentioned in the LLDB troubleshooting guide, adding the target.inline-breakpoint-strategy setting to the .lldbinit file seemed to fix the problem:

 "settings set target.inline-breakpoint-strategy always" >> ~/.lldbinit

Update: the problem was not fixed (see the OP), so this is not a good solution (AFAIK).

Leahcim
  • Adding this line to `.lldbinit` worked for a little while, but then it reverted to the behavior described in the OP. – Leahcim Mar 02 '17 at 19:26
  • What do you mean by "stopping at a non-existent breakpoint"? When you stop, is lldb's stop reason "breakpoint x.x" or is it EXC_BREAKPOINT? If it is the latter, then this is not a breakpoint lldb set, but some system library using the same trap in an assert. Search StackOverflow for EXC_BREAKPOINT for lots of examples of this happening. – Jim Ingham Mar 03 '17 at 18:22