Directory: /external/valgrind/main/coregrind/m_gdbserver
Name                             Date         Size
----                             ----         ----
32bit-core-valgrind-s1.xml       10-Jul-2012  2.8K
32bit-core-valgrind-s2.xml       10-Jul-2012  2.8K
32bit-core.xml                   10-Jul-2012  2.7K
32bit-linux-valgrind-s1.xml      10-Jul-2012  442
32bit-linux-valgrind-s2.xml      10-Jul-2012  443
32bit-linux.xml                  10-Jul-2012  428
32bit-sse-valgrind-s1.xml        10-Jul-2012  2K
32bit-sse-valgrind-s2.xml        10-Jul-2012  2K
32bit-sse.xml                    10-Jul-2012  2K
64bit-core-valgrind-s1.xml       10-Jul-2012  3.1K
64bit-core-valgrind-s2.xml       10-Jul-2012  3.1K
64bit-core.xml                   10-Jul-2012  3K
64bit-linux-valgrind-s1.xml      10-Jul-2012  443
64bit-linux-valgrind-s2.xml      10-Jul-2012  443
64bit-linux.xml                  10-Jul-2012  428
64bit-sse-valgrind-s1.xml        10-Jul-2012  2.5K
64bit-sse-valgrind-s2.xml        10-Jul-2012  2.5K
64bit-sse.xml                    10-Jul-2012  2.4K
amd64-coresse-valgrind.xml       10-Jul-2012  683
amd64-linux-valgrind.xml         10-Jul-2012  877
arm-core-valgrind-s1.xml         10-Jul-2012  1.2K
arm-core-valgrind-s2.xml         10-Jul-2012  1.2K
arm-core.xml                     10-Jul-2012  1.1K
arm-vfpv3-valgrind-s1.xml        10-Jul-2012  2.1K
arm-vfpv3-valgrind-s2.xml        10-Jul-2012  2.1K
arm-vfpv3.xml                    10-Jul-2012  2K
arm-with-vfpv3-valgrind.xml      10-Jul-2012  607
arm-with-vfpv3.xml               10-Jul-2012  413
gdb/                             10-Jul-2012  -
i386-coresse-valgrind.xml        10-Jul-2012  667
i386-linux-valgrind.xml          10-Jul-2012  878
inferiors.c                      10-Jul-2012  5.5K
m_gdbserver.c                    10-Jul-2012  44.1K
power-altivec-valgrind-s1.xml    10-Jul-2012  2.5K
power-altivec-valgrind-s2.xml    10-Jul-2012  2.5K
power-altivec.xml                10-Jul-2012  2.4K
power-core.xml                   10-Jul-2012  2.1K
power-fpu-valgrind-s1.xml        10-Jul-2012  2.1K
power-fpu-valgrind-s2.xml        10-Jul-2012  2.1K
power-fpu.xml                    10-Jul-2012  2.1K
power-linux-valgrind-s1.xml      10-Jul-2012  485
power-linux-valgrind-s2.xml      10-Jul-2012  485
power-linux.xml                  10-Jul-2012  469
power64-core-valgrind-s1.xml     10-Jul-2012  2.2K
power64-core-valgrind-s2.xml     10-Jul-2012  2.2K
power64-core.xml                 10-Jul-2012  2.1K
power64-linux-valgrind-s1.xml    10-Jul-2012  485
power64-linux-valgrind-s2.xml    10-Jul-2012  485
power64-linux.xml                10-Jul-2012  469
powerpc-altivec32l-valgrind.xml  10-Jul-2012  1.1K
powerpc-altivec32l.xml           10-Jul-2012  734
powerpc-altivec64l-valgrind.xml  10-Jul-2012  1.1K
powerpc-altivec64l.xml           10-Jul-2012  741
README_DEVELOPERS                10-Jul-2012  20.1K
regcache.c                       10-Jul-2012  6.9K
regcache.h                       10-Jul-2012  2.5K
regdef.h                         10-Jul-2012  1.6K
remote-utils.c                   10-Jul-2012  30.5K
server.c                         10-Jul-2012  33.6K
server.h                         10-Jul-2012  14.3K
signals.c                        10-Jul-2012  21K
target.c                         10-Jul-2012  3.6K
target.h                         10-Jul-2012  5.4K
utils.c                          10-Jul-2012  2.5K
valgrind-low-amd64.c             10-Jul-2012  11K
valgrind-low-arm.c               10-Jul-2012  10.2K
valgrind-low-ppc32.c             10-Jul-2012  13.9K
valgrind-low-ppc64.c             10-Jul-2012  13.8K
valgrind-low-s390x.c             10-Jul-2012  7.7K
valgrind-low-x86.c               10-Jul-2012  9.3K
valgrind-low.c                   10-Jul-2012  19K
valgrind_low.h                   10-Jul-2012  3.4K
version.c                        10-Jul-2012  92

README_DEVELOPERS

This file contains various notes/ideas/history/... related
to gdbserver in valgrind.

How to use the Valgrind gdbserver?
----------------------------------
This is described in the Valgrind user manual.
Before reading further, you should read the user manual first.

What is gdbserver?
------------------
The gdb debugger is typically used to debug a process running
on the same machine: gdb uses system calls (such as ptrace)
to fetch data from the process being debugged, to change data
in the process, to interrupt the process, and so on.

gdb can also debug processes running on a different computer
(e.g. it can debug a process running on a small real-time
board).

gdb does this by sending commands (e.g. using tcp/ip) to a piece
of code running on the remote computer. This piece of code (called a
gdb stub on small boards, or gdbserver when the remote computer runs
an OS such as GNU/Linux) provides a set of commands allowing gdb
to remotely debug the process.  Examples of commands are: "get the
registers", "get the list of running threads", "read xxx bytes at
address yyyyyyyy", etc.  The definition of all these commands and the
associated replies is the gdb remote serial protocol, which is
documented in Appendix D of the gdb user manual.

The standard gdb distribution has a standalone gdbserver (a small
executable) which implements this protocol and the needed system
calls to allow gdb to remotely debug processes running on Linux,
MacOS, etc.

Activation of gdbserver code inside valgrind
--------------------------------------------
The gdbserver code (from gdb 6.6, GPL2+) has been modified so as to
link it with valgrind and allow the valgrind guest process to be
debugged by a gdb speaking to this gdbserver embedded in valgrind.
The ptrace system calls inside gdbserver have been replaced by reads
of the state of the guest.

The gdbserver functionality is activated with valgrind command line
options. If gdbserver is not enabled, the impact on the valgrind
runtime is minimal: at startup, the command line options are checked
and nothing more is done for gdbserver; there is an "if gdbserver is
active" check in the translate function of translate.c and an "if"
in the valgrind scheduler.
If the valgrind gdbserver is activated (--vgdb=yes), the impact is
still small (from time to time, the valgrind scheduler checks a
counter in memory). The option --vgdb-poll=yyyyy controls how often
the scheduler does a (somewhat) heavier check to see if gdbserver
needs to stop execution of the guest to allow debugging.
If the valgrind gdbserver is activated with --vgdb=full, then each
instruction is instrumented with an additional call to a dirty
helper.

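A typical way to use this (a sketch; the user manual is the
authoritative reference, and myprog is of course a placeholder):

   valgrind --vgdb=yes --vgdb-error=0 ./myprog    (in a first shell)
   gdb ./myprog                                   (in a second shell)
   (gdb) target remote | vgdb

--vgdb-error=0 makes valgrind wait for a gdb connection before
executing the first instruction, rather than only calling gdbserver
when an error is reported.
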
How does the gdbserver code interact with valgrind?
----------------------------------------------------
When an error is reported, the gdbserver code is called.  It reads
commands from gdb using the read system call on a FIFO (e.g. a command
such as "get the registers").  It executes the command (e.g. fetches
the registers from the guest state) and writes the reply (e.g. a
packet containing the register data).  When gdb instructs gdbserver to
"continue", control is returned to valgrind, which then continues to
execute guest code.  The FIFOs used to communicate between valgrind
and gdb are created at startup if gdbserver is activated according
to the --vgdb=no/yes/full command line option.

How are signals "handled"?
--------------------------
When a signal is to be given to the guest, the valgrind core first
calls gdbserver (if a gdb is currently connected to valgrind;
otherwise the signal is delivered immediately). If gdb instructs
gdbserver to give the signal to the process, the signal is delivered
to the guest.  Otherwise, the signal is ignored (not given to the
guest). With gdb, the user can further decide to pass (or not pass)
the signal. Note that some (fatal) signals cannot be ignored.

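The pass/ignore decision is controlled from gdb with its standard
signal-handling commands, e.g.:

   (gdb) handle SIGUSR1 nostop noprint pass
   (gdb) handle SIGSEGV stop print nopass
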
How are "break/step/stepi/next/..." implemented?
------------------------------------------------
When gdb puts a break on an instruction, a command is sent to the
gdbserver in valgrind. This causes the basic block containing this
instruction to be discarded and then re-instrumented so as to insert
calls to a dirty helper which calls the gdbserver code.  When a block
is instrumented for gdbserver, all the "jump targets" of this block
are invalidated, so as to allow step/stepi/next to work properly:
these blocks will themselves automatically be re-instrumented for
gdbserver if they are jumped to.
The valgrind gdbserver remembers which blocks have been instrumented
due to this "lazy 'jump targets' debugging instrumentation" so as to
discard these "debugging translations" when gdb instructs to continue
the execution normally.
The blocks in which the user has put an explicit break are kept
instrumented for gdbserver.
(Note, however, that by default gdb removes all breaks when the
process is stopped, and re-inserts all breaks when the process is
continued. This behaviour can be changed using the gdb command
'set breakpoint always-inserted'.)

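In VEX terms, the re-instrumentation boils down to inserting a dirty
helper call at each instruction mark. A minimal sketch of the idea
(the real code lives in m_gdbserver.c / translate.c; the helper name
and the surrounding loop here are simplified assumptions):

   /* Assumed helper that can hand control to gdbserver. */
   static void gdbserver_helper ( Addr iaddr );

   /* Inside an instrumentation function, copying statements from
      sb_in to sb_out: before each guest instruction (Ist_IMark),
      insert a call to the helper. */
   for (i = 0; i < sb_in->stmts_used; i++) {
      IRStmt* st = sb_in->stmts[i];
      if (st->tag == Ist_IMark) {
         IRDirty* di = unsafeIRDirty_0_N(
            0 /* regparms */, "gdbserver_helper",
            VG_(fnptr_to_fnentry)(&gdbserver_helper),
            mkIRExprVec_1(mkIRExpr_HWord(st->Ist.IMark.addr)));
         addStmtToIRSB(sb_out, IRStmt_Dirty(di));
      }
      addStmtToIRSB(sb_out, st);
   }
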
How are watchpoints implemented?
--------------------------------
Watchpoints imply support from the tool to detect that a location is
read and/or written. Currently, only memcheck supports this: when a
watchpoint is placed, memcheck changes the addressability bits of the
watched memory zone to make it inaccessible. Before an access,
memcheck then detects an error, but sees that this error is due to a
watchpoint and gives control back to gdb.
Stopping on the exact instruction for a write watchpoint requires
--vgdb=full. This is because the error is detected by memcheck before
the value is modified. gdb checks that the value has not changed and
so "does not believe" the information that the write watchpoint was
triggered, and continues the execution. At the next watchpoint
occurrence, gdb sees the value has changed, but the watchpoints are
all reported "off by one". To avoid this, the Valgrind gdbserver must
complete the current instruction before reporting the write
watchpoint. Completing precisely the current instruction requires all
the instructions of the block to have been instrumented for
gdbserver, even if there is no break in this block. This is ensured
by --vgdb=full.
See Bool VG_(is_watched) in m_gdbserver.c, where watchpoint handling
is implemented.

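For example (variable name made up; the program runs under
valgrind --vgdb=full with gdb connected through vgdb):

   (gdb) watch global_counter
   (gdb) continue

gdb then stops right after the instruction that modified
global_counter, with the new value visible, instead of one
watchpoint occurrence too late as described above for --vgdb=yes.
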
How does the Valgrind gdbserver receive commands/packets from gdb?
------------------------------------------------------------------
The embedded gdbserver reads gdb commands on a named pipe having
(by default) the name   /tmp/vgdb-pipe-from-vgdb-to-PID-by-USER-on-HOST
where PID, USER, and HOST are replaced by the actual pid, the user id,
and the host name, respectively.
The embedded gdbserver replies to gdb commands on a named pipe
/tmp/vgdb-pipe-to-vgdb-from-PID-by-USER-on-HOST

gdb does not speak directly with the gdbserver in valgrind: a relay
application called vgdb is needed between gdb and the valgrind-ified
process. gdb writes commands to the stdin of vgdb. vgdb reads these
commands and writes them to the FIFO
/tmp/vgdb-pipe-from-vgdb-to-PID-by-USER-on-HOST.
vgdb reads replies from the FIFO
/tmp/vgdb-pipe-to-vgdb-from-PID-by-USER-on-HOST
and writes them to its stdout.

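From gdb's point of view, vgdb is simply the pipe transport of a
remote target (pid 12345 below is of course an example):

   (gdb) target remote | vgdb
   (gdb) target remote | vgdb --pid=12345   (if several are running)
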
Note: named pipes were preferred to tcp/ip connections as they allow
discovering which valgrind-ified processes are ready to accept
commands, by looking at files starting with the /tmp/vgdb-pipe-
prefix (changeable by a command line option).
Also, the usual unix protections protect the valgrind process
against other users sending commands.
The relay process also takes care of waking up the valgrind process
in case all threads are blocked in a system call.
The relay process can also be used from a shell to send commands
without a gdb (this gives a standard mechanism to control valgrind
tools from the command line, rather than specialized mechanisms
e.g. in callgrind).

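For example, monitor commands can be sent from a shell like this
(the commands shown are the core's v.info and memcheck's leak_check;
use the "help" monitor command for the current list):

   vgdb --pid=12345 v.info n_errs_found
   vgdb --pid=12345 leak_check full
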
How is gdbserver activated if all Valgrind threads are blocked in a syscall?
-----------------------------------------------------------------------------
vgdb relays characters from gdb to valgrind. The scheduler will from
time to time check whether gdbserver has to handle incoming
characters. (The check is efficient, i.e. most of the time it
consists of checking a counter in (shared) memory.)

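Conceptually, the cheap check done by the scheduler looks like this
(a sketch; the real code is in scheduler.c, and the counter name
below is illustrative):

   /* Executed from time to time by the valgrind scheduler. */
   if (--vgdb_poll_counter <= 0) {
      vgdb_poll_counter = VG_(clo_vgdb_poll);
      /* Heavier check: did vgdb write new characters on the FIFO? */
      if (VG_(gdbserver_activity) (tid))
         VG_(gdbserver) (tid);
   }
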
However, it might be that all the threads in the valgrind process are
blocked in a system call. In such a case, no polling will be done by
the valgrind scheduler (as no activity takes place).  By default, vgdb
will check after 100ms whether the characters it has written have been
read by valgrind. If not, vgdb will force the invocation of the
gdbserver code inside the valgrind process.

This forced invocation is implemented using the ptrace system call:
using ptrace, vgdb will cause the valgrind process to call the
gdbserver code.

This wake up is *not* done using signals, as that would require
implementing syscall restart logic in valgrind for all system
calls. When using ptrace as above, the linux kernel is responsible
for restarting the system call.

This wakeup is also *not* implemented by having a "system thread"
started by valgrind, as this would transform all non-threaded
programs into threaded programs when running under valgrind. Also,
such a 'system thread' for gdbserver was tried by Greg Parker in the
early MacOS port, and was unreliable.

So, the ptrace based solution was chosen instead.

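In outline, the forced invocation works like the sketch below
(heavily simplified, amd64 flavour; the real logic is in vgdb.c, and
force_gdbserver_invoke / invoker_addr are illustrative names, not
the real ones):

   #include <sys/ptrace.h>
   #include <sys/user.h>
   #include <sys/wait.h>

   /* Make the valgrind process "call" the gdbserver code. */
   static void force_gdbserver_invoke(pid_t pid, unsigned long invoker_addr)
   {
      struct user_regs_struct regs, saved;
      ptrace(PTRACE_ATTACH, pid, NULL, NULL);    /* stops the process  */
      waitpid(pid, NULL, 0);
      ptrace(PTRACE_GETREGS, pid, NULL, &regs);  /* save current state */
      saved = regs;
      regs.rip = invoker_addr;  /* point the PC at the invocation code;
                                   a real implementation also pushes a
                                   return address and sets up arguments
                                   according to the ABI */
      ptrace(PTRACE_SETREGS, pid, NULL, &regs);
      ptrace(PTRACE_CONT, pid, NULL, NULL);
      /* ... later: stop it again, restore 'saved', and detach, after
         which the kernel restarts the interrupted system call. */
   }
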
There used to be some bugs in the kernel when using ptrace on a
process blocked in a system call: the symptom is that the system
call fails with an unknown errno 512. This typically happens with a
64-bit vgdb ptrace-ing a 32-bit process. A bypass for old kernels
has been integrated in vgdb.c (sign extend register rax).

At least on Fedora Core 12 (kernel 2.6.32), syscall restart of read
and select works ok, and on Red Hat 5.3 (an old kernel), everything
works properly.

It remains to be investigated whether darwin and/or AIX can
similarly do syscall restart with ptrace.

The vgdb argument --max-invoke-ms=xxx controls the number of
milliseconds after which vgdb will force the invocation of gdbserver
code.  If xxx is 0, the forced invocation is disabled.
Disabling this ptrace mechanism is also necessary when you are
debugging the valgrind code at the same time as debugging the guest
process using gdbserver.

Do not kill -9 vgdb while it has interrupted the valgrind process;
otherwise the valgrind process will very probably stay stopped or
die.

Implementation is based on the gdbserver code from gdb 6.6
----------------------------------------------------------
The gdbserver implementation is derived from the gdbserver included
in the gdb distribution.
The files originating from gdb are: inferiors.c, regcache.[ch],
regdef.h, remote-utils.c, server.[ch], signals.c, target.[ch],
utils.c, version.c.
The valgrind-low-* files are inspired by gdb files.

This code had to be changed to integrate properly within valgrind
(e.g. no libc usage).  Some of these changes were made by using the
preprocessor to replace calls with their valgrind equivalents,
e.g. #define memcpy(...) VG_(memcpy) (...).

Some "control flow" changes are due to the fact that gdbserver
inside valgrind must return control to valgrind when the 'debugged'
process has to run, while in a classical gdbserver usage, the
gdbserver process waits for a debugged process to stop on a break or
similar.  This implied adding some variables to remember the state
of gdbserver before returning to valgrind (search for
resume_packet_needed in server.c) and a "goto" to the place where
gdbserver expects a stopped process to return control to gdbserver.

How does a tool need to be changed to be "debuggable"?
------------------------------------------------------
There is no need to modify a tool to make it "debuggable" via
gdbserver: e.g. reports of errors, breaks, etc. work "out of the
box".  If interactive usage of tool client requests or similar is
desired for a tool, simple code can be written for that via the
specific client request code VG_USERREQ__GDB_MONITOR_COMMAND. The
tool function "handle_client_request" must then parse the string
received as argument and call the expected valgrind or tool code.
See e.g. ms_handle_client_request in massif as an example.

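A minimal sketch of such a handler (the tool name and its "stats"
command are made up; VG_USERREQ__GDB_MONITOR_COMMAND and the VG_
helpers are real, but check the current headers for exact usage):

   static Bool tl_handle_client_request(ThreadId tid, UWord* arg, UWord* ret)
   {
      if (arg[0] != VG_USERREQ__GDB_MONITOR_COMMAND)
         return False;                    /* not for us */
      HChar* cmd = (HChar*) arg[1];       /* the monitor command string */
      if (VG_(strcmp) (cmd, "mytool.stats") == 0) {
         VG_(gdb_printf) ("mytool: nothing interesting yet\n");
         *ret = 1;                        /* command recognised */
      } else {
         *ret = 0;                        /* unknown monitor command */
      }
      return True;
   }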

Automatic regression tests:
---------------------------
Automatic Valgrind gdbserver tests are in the directory
$(top_srcdir)/gdbserver_tests.
Read $(top_srcdir)/gdbserver_tests/README_DEVELOPPERS for more
info about testing.

How to integrate support for a new architecture xxx?
----------------------------------------------------
Let's imagine a new architecture hal9000 has to be supported.

Mandatory:
The main thing to do is to write a file valgrind-low-hal9000.c.
Start from an existing file (e.g. valgrind-low-x86.c).
The data structures 'struct reg regs' and 'const char
*expedite_regs' are built from files in the gdb sources, e.g. for a
new arch hal9000:
   cd gdb/regformats
   ./regdat.sh reg-hal9000.dat hal9000

From the generated file hal9000, copy/paste into
valgrind-low-hal9000.c the two needed data structures and rename
them to 'regs' and 'expedite_regs'.

Then adapt the set of functions needed to initialize the structure
'static struct valgrind_target_ops low_target'.

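The resulting file then has roughly this shape (a sketch only; the
register list is a placeholder for the regdat.sh output, and the
exact fields of valgrind_target_ops are listed in valgrind_low.h):

   /* valgrind-low-hal9000.c (sketch) */
   static struct reg regs[] = {
      /* name, offset in bits, size in bits -- pasted from the
         regdat.sh output, then renamed to 'regs' */
      { "r0", 0, 32 },
      { "r1", 32, 32 },
      /* ... */
      { "pc", 512, 32 },
   };
   static const char *expedite_regs[] = { "sp", "pc", 0 };

   static struct valgrind_target_ops low_target = {
      /* number of regs, the regs array, stack pointer regno,
         transfer_register, get_pc, set_pc, arch string,
         target xml descriptions, ... -- see valgrind_low.h */
   };
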
Optional but highly recommended:
To have a proper wake up of a Valgrind process with all threads
blocked in a system call, some architecture specific code has to be
written in vgdb.c: search for the PTRACEINVOKER preprocessor symbol
to see what has to be completed.

For Linux based platforms, all the ptrace calls should be ok.
The only thing needed is the code to "push a dummy call" on the
stack, i.e. assign the relevant registers in the struct
user_regs_struct, and push values on the stack according to the ABI.

For other platforms (e.g. MacOS), more work is needed, as the ptrace
calls on MacOS are different and/or incomplete (and so 'Mach'
specific things are needed, e.g. to attach to threads).
A courageous Mac aficionado is welcome on this aspect.

Optional:
To let gdb see the Valgrind shadow registers, xml description files
have to be provided, and valgrind-low-hal9000.c has to give the top
xml file.
Start from the xml files found in the gdb distribution directory
gdb/features. You need to duplicate and modify these files to
provide the shadow1 and shadow2 register set descriptions.

Modify coregrind/Makefile.am:
    add valgrind-low-hal9000.c
    If you have target xml descriptions, also add them to pkglib_DATA

A comment given by Julian at FOSDEM, not handled yet
----------------------------------------------------
* The check for vgdb-poll in scheduler.c could/should be moved
  elsewhere: instead of having it in run_thread_for_a_while, the
  vgdb poll check could be in VG_(scheduler).
  (It is not clear to me why one would be better than the other.)

TODO and/or additional nice things to have
------------------------------------------
* Many options can be changed on-line without problems.
  => It would be nice to have a v.option command that would evaluate
  its arguments like the startup options of m_main.c and tool clo
  processing.

* Have a memcheck monitor command
  who_points_at <address> | <loss_record_nr>
  that would describe the addresses where a pointer to address (or
  to the address leaked at loss_record_nr) is found.
  This would allow interactively searching for what is "keeping" a
  piece of memory.

* Some GDBTD remain in the code
  (GDBTD = GDB To Do = something still to look at and/or a question).

* All architectures and platforms are done, but there are still some
  "GDBTD" to convert between gdb registers and VEX registers:
  e.g. some registers in x86 or amd64 that I could not translate to
  VEX registers. Someone with a good knowledge of these
  architectures might complete this (see the GDBTD in
  valgrind-low-*.c).

    333 * "hardware" watchpoint (read/write/access watchpoints) are implemented 
    334   but can't persuade gdb to insert a hw watchpoint of what valgrind
    335   supports (i.e. of whatever length).
    336   The reason why gdb does not accept a hardware watch of let's say
    337   10 bytes is:
    338 default_region_ok_for_hw_watchpoint (addr=134520360, len=10) at target.c:2738
    339 2738	  return (len <= gdbarch_ptr_bit (target_gdbarch) / TARGET_CHAR_BIT);
    340 #0  default_region_ok_for_hw_watchpoint (addr=134520360, len=10)
    341     at target.c:2738
    342 2738	  return (len <= gdbarch_ptr_bit (target_gdbarch) / TARGET_CHAR_BIT);
    343 #1  0x08132e65 in can_use_hardware_watchpoint (v=0x85a8ef0)
    344     at breakpoint.c:8300
    345 8300		  if (!target_region_ok_for_hw_watchpoint (vaddr, len))
    346 #2  0x0813bd17 in watch_command_1 (arg=0x84169f0 "", accessflag=2, 
    347     from_tty=<value optimized out>) at breakpoint.c:8140
    348   A small patch in gdb remote.c allowed to control the remote target watchpoint
    349   length limit. This patch is to be submitted.
    350 
* Currently, at least on recent linux kernels, vgdb can properly
  wake up a valgrind process which is blocked in system calls. Maybe
  we need to determine up to which kernel version the ptrace +
  syscall restart is broken, and put the default value of
  --max-invoke-ms to 0 in this case.

* More client requests can be programmed in various tools.
  Currently, only a few standard valgrind or memcheck client
  requests are implemented.
  v.suppression [generate|add|delete] might be an interesting
  command:
     generate would output a suppression, add/delete would add a
     suppression in memory for the last (or selected?) error.
  v.break on fn calls/entry/exit + commands associated to it
    (such as search leaks)?

* Currently, jump(s) and inferior call(s) are somewhat dangerous
  when called from a block not yet instrumented: instead of
  continuing till the next Imark, where there will be a debugger
  call that can properly jump at an instruction boundary, the
  jump/call will leave from the "middle" of an instruction.
  We could detect whether the current block is instrumented by a
  trick like this:
     /* Each time helperc_CallDebugger is called, we will store
        the address from which it is called and the nr of bbs_done
        when called. This allows detecting that gdbserver is called
        from a block which is instrumented. */
     static HWord CallDebugger_addr;
     static ULong CallDebugger_bbs_done;

     Bool VG_(gdbserver_current_IP_instrumented) (ThreadId tid)
     {
        if (VG_(get_IP) (tid) != CallDebugger_addr
            || CallDebugger_bbs_done != VG_(bbs_done)())
           return False;
        return True;
     }

  Alternatively, we could ensure we can re-instrument the current
  block for gdbserver while executing it. Something like: keep the
  current block till the end of the current instruction, then go
  back to the scheduler. Unsure if and how this is do-able.

* Ensure that all non-static symbols of gdbserver files are #defined
  as xxxxx VG_(xxxxx) ???? Is this really needed? I have tried to
  put variables and functions with the same names as valgrind stuff
  in a test program, and everything seems to be ok.
  I see that all exported symbols in valgrind have a unique prefix
  created with VG_ or MC_ or ...
  This is not done for the "gdb gdbserver code", where I have kept
  the original names. Is this a problem? I could not create a
  "symbol" collision between a user symbol and a valgrind core
  gdbserver symbol.

* Currently, gdbserver can only stop/continue the whole process. It
  might be interesting to have fine-grained thread control (vCont
  packet), maybe for tools such as helgrind and drd.  This would
  allow the user to stop/resume specific threads.  Also, maybe this
  would solve the following problem: wait for a breakpoint to be
  encountered, switch thread, next. This sometimes causes an
  internal error in gdb, probably because gdb believes the current
  thread will be continued?

* It would be nice to have some more tests.

* Better valgrind target support in gdb (see comments of Tom
  Tromey).

-------- Description of how gdb invokes a function in the inferior.
To call a function in the inferior (below is for x86):
gdb writes ESP and EBP to get some more stack space,
pushes a return address equal to  0x8048390 <_start>,
puts a break                at    0x8048390,
puts the address of the function to call (e.g. hello_world,
0x8048444) in EIP, and continues.
Break encountered at 0x8048391 (0x8048390 after decrement)
  => report stop to gdb
  => gdb restores esp/ebp/eip to what they were (e.g. 0x804848C)
  => gdb "s" => causes the EIP to go to the new EIP (i.e. 0x804848C)
     gdbserver reports "resuming from 0x804848c"
                       "stop pc is 0x8048491" => informs gdb of this