<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><title>Npcap internals</title><meta name="generator" content="DocBook XSL Stylesheets V1.79.2"><meta name="description" content="Describes the internal structure and interfaces of Npcap: the NPF driver and Packet.dll"><link rel="home" href="index.html" title="Npcap Reference Guide"><link rel="up" href="index.html" title="Npcap Reference Guide"><link rel="prev" href="npcap-tutorial.html" title="Npcap Development Tutorial"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="navheader"><table width="100%" summary="Navigation header"><tr><th colspan="3" align="center">Npcap internals</th></tr><tr><td width="20%" align="left"><a accesskey="p" href="npcap-tutorial.html">Prev</a> </td><th width="60%" align="center"> </th><td width="20%" align="right"> </td></tr></table><hr></div><div class="sect1"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="npcap-internals"></a>Npcap internals</h2></div><div><div class="abstract"><p class="title"><b>Abstract</b></p>
<p>Describes the internal structure and
interfaces of Npcap: the NPF
driver and Packet.dll</p>
</div></div></div></div>
<p>This portion of the manual describes the internal structure and
interfaces of Npcap, starting from the lowest-level module. It is aimed
at people who need to extend or modify this software, or who are
interested in how it works. Developers who just want to use
Npcap in their software do not need to read it.</p>
<div class="sect2"><div class="titlepage"><div><div><h3 class="title"><a name="npcap-structure"></a>Npcap structure</h3></div></div></div>
<p>Npcap is an architecture for packet capture and network analysis on
Win32 platforms. It includes a kernel-level packet filter, a
low-level dynamic link library (packet.dll), and a high-level,
system-independent library (wpcap.dll).</p>
<p>Why do we use the term <em class="wordasword">architecture</em> rather
than <em class="wordasword">library</em>? Because packet capture is a
low-level mechanism that requires tight interaction with the network
adapter and with the operating system, in particular with its networking
implementation, so a simple library is not sufficient.</p>
<div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="id582422"></a>Main components of Npcap</h4></div></div></div>
<p>First, a capture system needs to bypass the operating system's
protocol stack in order to access the raw data transiting on the
network. This requires a portion running inside the OS kernel,
interacting directly with the network interface drivers. This portion
is very system dependent, and in our solution it is realized as a
device driver called Netgroup Packet Filter (NPF). This driver offers
basic features like packet capture and injection, as well as more
advanced ones like a programmable filtering system and a monitoring
engine. The filtering system can be used to restrict a capture session
to a subset of the network traffic (e.g. it is possible to capture only
the FTP traffic generated by a particular host); the monitoring engine
provides a powerful but simple-to-use mechanism to obtain statistics on
the traffic (e.g. it is possible to obtain the network load or the
amount of data exchanged between two hosts).</p>
<p>Second, the capture system must export an interface that user-level
applications will use to take advantage of the features provided by the
kernel driver. Npcap provides two different libraries:
<code class="filename">packet.dll</code> and
<code class="filename">wpcap.dll</code>.</p>
<p>Packet.dll offers a low-level API that can be used to directly
access the functions of the driver, with a programming interface
independent of the Microsoft OS.</p>
<p>Wpcap.dll exports a more powerful set of high-level capture
primitives that are compatible with libpcap, the well-known Unix
capture library. These functions enable packet capture in a manner that
is independent of the underlying network hardware and operating
system.</p>
</div>
</div>
<div class="sect2"><div class="titlepage"><div><div><h3 class="title"><a name="npcap-internals-driver"></a>Npcap driver internals</h3></div></div></div>
<p>This section documents the internals of the Netgroup Packet Filter
(NPF), the kernel portion of Npcap. Normal users are probably interested
in how to use Npcap, not in its internal structure, so the
information here is intended mainly for Npcap developers
and maintainers, or for people interested in how the driver works. In
particular, a good knowledge of operating systems, networking, Windows
kernel programming and device driver development is required to read
this section profitably.</p>
<p>NPF is the Npcap component that does the hard work, processing the
packets that transit on the network and exporting capture, injection and
analysis capabilities to user level.</p>
<p>The following paragraphs describe the interaction of NPF with
the OS and its basic structure.</p>
<div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="npcap-internals-driver-ndis"></a>NPF and NDIS</h4></div></div></div>
<p>NDIS (Network Driver Interface Specification) is a standard that
defines the communication between a network adapter (or, better, the
driver that manages it) and the protocol drivers (which implement, for
example, TCP/IP). The main purpose of NDIS is to act as a wrapper that allows
protocol drivers to send and receive packets on a network (LAN or
WAN) without caring about either the particular adapter or the particular
Win32 operating system.</p>
<p>NDIS supports four types of network drivers:</p>
<div class="orderedlist"><ol class="orderedlist" type="1"><li class="listitem">
<p><span class="emphasis"><em>Miniport drivers</em></span>. Miniport drivers
directly manage network interface cards, referred to as NICs. The
miniport drivers interface directly to the hardware at their lower
edge and at their upper edge present an interface that allows upper
layers to send packets on the network, handle interrupts,
reset the NIC, halt the NIC, and query and set the operational
characteristics of the driver.</p>
<p>Miniport drivers implement only the hardware-specific
operations necessary to manage a NIC, including sending and
receiving data on the NIC. Operations common to all lowest-level
NIC drivers, such as synchronization, are provided by NDIS.
Miniports do not call operating system routines directly; their
interface to the operating system is NDIS.</p>
<p>A miniport does not keep track of bindings. It merely passes
packets up to NDIS, and NDIS makes sure that these packets are
passed to the correct protocols.</p>
</li><li class="listitem">
<p><span class="emphasis"><em>Intermediate drivers</em></span>. Intermediate drivers
interface between an upper-level driver such as a protocol driver
and a miniport. To the upper-level driver, an intermediate driver
looks like a miniport. To a miniport, the intermediate driver looks
like a protocol driver. An intermediate protocol driver can layer
on top of another intermediate driver, although such layering could
have a negative effect on system performance. A typical reason for
developing an intermediate driver is to perform media translation
between an existing legacy protocol driver and a miniport that
manages a NIC for a new media type unknown to the protocol driver.
For instance, an intermediate driver could translate from a LAN
protocol to the ATM protocol. An intermediate driver cannot communicate
with user-mode applications, but only with other NDIS drivers.</p>
</li><li class="listitem">
<p><span class="emphasis"><em>Filter drivers</em></span>. Filter drivers can monitor
and modify traffic between protocol drivers and miniport drivers
like an intermediate driver, but are much simpler. They have less
processing overhead than intermediate drivers.</p>
</li><li class="listitem">
<p><span class="emphasis"><em>Transport drivers or protocol drivers</em></span>. A
protocol driver implements a network protocol stack such as IPX/SPX
or TCP/IP, offering its services over one or more network interface
cards. A protocol driver services application-layer clients at its
upper edge and connects to one or more NIC driver(s) or
intermediate NDIS driver(s) at its lower edge.</p>
</li></ol></div>
<p>NPF is implemented as a filter driver. In order to provide complete
access to the raw traffic and allow injection of packets, it is
registered as a modifying filter driver in the compression
<code class="literal">FilterClass</code>.</p>
<p>Notice that the various Windows operating systems have different
versions of NDIS: NPF is NDIS 6.0 compliant, and so requires a Windows
OS that supports NDIS 6.0: Windows Vista or later.</p>
</div>
<div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="npcap-internals-structure"></a>NPF structure basics</h4></div></div></div>
<p>NPF is able to perform a number of different operations: packet capture,
network monitoring, and packet injection. The following paragraphs
briefly describe each of these operations.</p>
<div class="sect4"><div class="titlepage"><div><div><h5 class="title"><a name="npcap-internals-capture"></a>Packet Capture</h5></div></div></div>
<p>The most important operation of NPF is packet capture. During a
capture, the driver sniffs the packets using a network interface and
delivers them intact to the user-level applications.</p>
<p>The capture process relies on two main components:</p>
<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem"><p>A packet filter that decides if an incoming packet
has to be accepted and copied to the listening application. Most
applications using NPF reject far more packets than they
accept, so a versatile and efficient packet filter is
critical for good overall performance. A packet filter is a
function with boolean output that is applied to a packet. If the
value of the function is true the capture driver copies the
packet to the application; if it is false the packet is
discarded. The NPF packet filter is a bit more complex, because it
determines not only whether the packet should be kept, but also the
number of bytes to keep. The filtering system adopted by NPF
derives from the <span class="emphasis"><em>BSD Packet Filter</em></span> (BPF), a
virtual processor able to execute filtering programs expressed in
a pseudo-assembly language and created at user level. The application
takes a user-defined filter (e.g. <span class="quote">&#8220;<span class="quote">pick up all UDP
packets</span>&#8221;</span>) and, using wpcap.dll, compiles it into a BPF
program (e.g. <span class="quote">&#8220;<span class="quote">if the packet is IP and the
<code class="literal">protocol type</code> field is equal to 17, then
return true</span>&#8221;</span>). Then, the application uses the
<code class="literal">BIOCSETF</code> IOCTL to inject the filter into the
kernel. At this point, the program is executed for every incoming
packet, and only the conformant packets are accepted. Unlike
traditional solutions, NPF does not
<span class="emphasis"><em>interpret</em></span> the filters, but
<span class="emphasis"><em>executes</em></span> them. For performance reasons,
before using the filter NPF feeds it to a JIT compiler that
translates it into a native 80x86 function. When a packet is
captured, NPF calls this native function instead of invoking the
filter interpreter, and this makes the process very fast. The
concept behind this optimization is very similar to that of
Java just-in-time compilers.</p>
</li><li class="listitem">
<p>A circular buffer to store the packets and avoid loss. A
packet is stored in the buffer with a header that maintains
information like the timestamp and the size of the packet.
Moreover, alignment padding is inserted between the packets in
order to speed up access to their data by the applications.
Groups of packets can be copied with a single operation from the
NPF buffer to the applications. This improves performance
because it minimizes the number of reads. If the buffer is full
when a new packet arrives, the packet is discarded and hence
lost. Both the kernel and the user buffer can be resized at runtime for
maximum versatility: packet.dll and wpcap.dll provide functions
for this purpose.</p>
</li></ul></div>
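<p>The filter machinery just described can be illustrated with a toy
interpreter. The sketch below (illustrative Python, not NPF's real
instruction set or code) runs a small BPF-like program implementing the
example filter above: accept the packet if it is IPv4 and its protocol
field equals 17 (UDP), and return the number of bytes to keep.</p>

```python
import struct

def run_filter(program, packet):
    """Interpret a tiny BPF-like program against one packet.

    Returns the number of bytes to keep (0 means "discard"), mirroring
    the boolean-plus-snaplen semantics described above.
    """
    acc = 0                                  # accumulator register
    pc = 0                                   # program counter
    while pc < len(program):
        op = program[pc]
        if op[0] == "LDB":                   # load byte at absolute offset
            acc = packet[op[1]]
            pc += 1
        elif op[0] == "LDH":                 # load big-endian 16-bit word
            acc = struct.unpack_from(">H", packet, op[1])[0]
            pc += 1
        elif op[0] == "JEQ":                 # (JEQ, const, jmp-true, jmp-false)
            pc += op[2] if acc == op[1] else op[3]
        elif op[0] == "RET":                 # return snap length
            return op[1]
    return 0

# "if the packet is IP and the protocol type field is equal to 17,
# then return true", expressed as a filter program:
udp_filter = [
    ("LDH", 12),            # EtherType field of the Ethernet header
    ("JEQ", 0x0800, 1, 4),  # IPv4? continue : jump to the final RET 0
    ("LDB", 23),            # IP protocol field (14-byte Ethernet + offset 9)
    ("JEQ", 17, 1, 2),      # UDP?  continue : jump to RET 0
    ("RET", 65535),         # accept, keep up to 65535 bytes
    ("RET", 0),             # discard
]

# Hand-built sample frames (placeholder contents, headers only sketched):
udp_packet = bytes(12) + b"\x08\x00" + bytes(9) + b"\x11" + bytes(10)
tcp_packet = bytes(12) + b"\x08\x00" + bytes(9) + b"\x06" + bytes(10)
arp_packet = bytes(12) + b"\x08\x06" + bytes(20)
```

<p>The real NPF goes one step further: rather than interpreting such a
program for every packet, it feeds it to the JIT compiler and calls the
resulting native function directly.</p>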
<p>The size of the user buffer is very important because it determines
the <span class="emphasis"><em>maximum</em></span> amount of data that can be copied from
kernel space to user space within a single system call. On the other
hand, the <span class="emphasis"><em>minimum</em></span>
amount of data that can be copied in a single call is also extremely
important. With a large value for this variable, the kernel
waits for the arrival of several packets before copying the data to the
user. This guarantees a low number of system calls, i.e. low processor
usage, which is a good setting for applications like sniffers. With
a small value, the kernel will copy the packets
as soon as the application is ready to receive them. This is excellent
for real-time applications (for example, ARP redirectors or
bridges) that need the best responsiveness from the kernel. From
this point of view, NPF has a configurable behavior that allows users
to choose between best efficiency and best responsiveness (or any
intermediate situation).</p>
<p>The wpcap library includes a couple of calls that can be
used to set both the timeout after which a read expires and the minimum
amount of data that can be transferred to the application. By default,
the read timeout is 1 second, and the minimum amount of data copied
between the kernel and the application is 16K.</p>
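<p>The buffer layout described above can be modeled with a short sketch
(illustrative Python; the header fields and the 4-byte alignment are
assumptions for the example, not NPF's actual record format):</p>

```python
import struct
import time

ALIGN = 4  # assumed alignment boundary between stored packets

class CaptureRingBuffer:
    """Toy model of the NPF kernel buffer: each packet is stored behind a
    small header (timestamp, captured length) and padded so that the next
    record starts on an ALIGN-byte boundary."""

    HDR = struct.Struct("<dI")  # timestamp (double), caplen (uint32)

    def __init__(self, size):
        self.size = size        # total byte budget of the buffer
        self.used = 0
        self.records = []       # queued (header, packet) pairs
        self.dropped = 0        # packets lost because the buffer was full

    def put(self, packet, ts=None):
        hdr = self.HDR.pack(time.time() if ts is None else ts, len(packet))
        # round the record up to the next ALIGN boundary
        need = (len(hdr) + len(packet) + ALIGN - 1) // ALIGN * ALIGN
        if self.used + need > self.size:
            self.dropped += 1   # buffer full: the new packet is lost
            return False
        self.records.append((hdr, bytes(packet), need))
        self.used += need
        return True

    def read_all(self):
        """Hand every buffered packet to the application in one batch,
        mimicking NPF's single-operation kernel-to-user copy."""
        out, self.records, self.used = self.records, [], 0
        return [(self.HDR.unpack(h), p) for h, p, _ in out]
```

<p>Grouped reads are what make the batching profitable: one call drains
the whole buffer instead of paying one read per packet.</p>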
</div>
<div class="sect4"><div class="titlepage"><div><div><h5 class="title"><a name="npcap-internals-injection"></a>Packet injection</h5></div></div></div>
<p>NPF allows applications to write raw packets to the network. To send data, a
user-level application performs a WriteFile() system call on the NPF
device file. The data is sent to the network as is, without being
encapsulated in any protocol, so the application has to
build the various headers for each packet. The application usually
does not need to generate the FCS because it is calculated by the
network adapter hardware and attached automatically at the end of
a packet before sending it to the network.</p>
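<p>Since the driver transmits the buffer verbatim, the application must
assemble every header itself. As a sketch, a minimal Ethernet II frame
can be built like this (illustrative Python; the MAC addresses are
placeholders):</p>

```python
import struct

def ethernet_frame(dst_mac, src_mac, ethertype, payload):
    """Build a raw Ethernet II frame: 6-byte destination MAC, 6-byte
    source MAC, 2-byte EtherType, then the payload.  The trailing FCS is
    omitted because the NIC hardware appends it, as noted above."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, ethertype)
    return header + payload

frame = ethernet_frame(
    dst_mac=b"\xff\xff\xff\xff\xff\xff",  # broadcast (placeholder)
    src_mac=b"\x02\x00\x00\x00\x00\x01",  # locally administered (placeholder)
    ethertype=0x0800,                     # IPv4
    payload=b"\x00" * 46,                 # minimum Ethernet payload
)
```

<p>In a real program this buffer would then be handed to the driver's
device file with WriteFile(), with any higher-layer headers (IP, UDP, and
so on) built into the payload in the same manner.</p>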
<p>In normal situations, the rate at which packets are sent to the
network is not very high because a system call is needed for each
packet. For this reason, the possibility to send a single packet more
than once with a single write system call has been added. The
user-level application can set, with an IOCTL call
(<code class="literal">BIOCSWRITEREP</code>), the number of times a single packet
will be repeated: for example, if this value is set to 1000, every raw
packet written by the application on the driver's device file will be
sent 1000 times. This feature can be used to generate high-speed
traffic for testing purposes: the overhead of context switches is no
longer present, so performance is remarkably better.</p>
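<p>The effect of the repetition counter can be sketched with a small mock
of the driver's write path (illustrative Python; only the semantics of
<code class="literal">BIOCSWRITEREP</code> are modeled, not the real kernel
code):</p>

```python
class MockNpfDevice:
    """Toy write path: one write call transmits the packet `repeat` times,
    so the per-packet system-call cost is paid only once."""

    def __init__(self):
        self.repeat = 1       # value set by the BIOCSWRITEREP IOCTL
        self.writes = 0       # write system calls issued
        self.transmitted = 0  # frames actually sent on the wire

    def ioctl_set_write_rep(self, n):
        self.repeat = n

    def write(self, packet):
        self.writes += 1
        self.transmitted += self.repeat

dev = MockNpfDevice()
dev.ioctl_set_write_rep(1000)  # repeat each written packet 1000 times
dev.write(b"\x00" * 60)        # one write call, 1000 frames transmitted
```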
</div>
<div class="sect4"><div class="titlepage"><div><div><h5 class="title"><a name="npcap-internals-monitoring"></a>Network monitoring</h5></div></div></div>
<p>Npcap offers a kernel-level programmable monitoring module, able to
calculate simple statistics on the network traffic. Statistics can be
gathered without the need to copy the packets to the application, which
simply receives and displays the results obtained from the monitoring
engine. This avoids a great part of the capture overhead in
terms of memory and CPU cycles.</p>
<p>The monitoring engine is made of a <span class="emphasis"><em>classifier</em></span>
followed by a <span class="emphasis"><em>counter</em></span>. The packets are classified
using the filtering engine of NPF, which provides a configurable way to
select a subset of the traffic. The data that pass the filter go to the
counter, which keeps variables like the number of packets and the
amount of bytes accepted by the filter and updates them with the data
of the incoming packets. These variables are passed to the user-level
application at regular intervals whose period can be configured by the
user. No buffers are allocated at kernel or user level.</p>
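<p>The classifier-plus-counter pipeline can be sketched as follows
(illustrative Python; the class and method names are invented for the
example):</p>

```python
class MonitoringEngine:
    """Toy model of the monitoring engine: packets matching the classifier
    only update counters and are never copied into a capture buffer."""

    def __init__(self, classifier):
        self.classifier = classifier  # boolean packet filter
        self.packets = 0              # packets accepted by the filter
        self.nbytes = 0               # bytes accepted by the filter

    def process(self, packet):
        if self.classifier(packet):
            self.packets += 1
            self.nbytes += len(packet)

    def poll(self):
        """Return the counters, as the user-level application would at
        each configured interval."""
        return self.packets, self.nbytes

# Example classifier: count only broadcast frames (placeholder filter).
mon = MonitoringEngine(lambda p: p[:6] == b"\xff" * 6)
mon.process(b"\xff" * 6 + b"\x00" * 54)  # matches: counted
mon.process(b"\x02" + b"\x00" * 59)      # rejected by the classifier
```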
</div>
</div>
</div>
<div class="sect2"><div class="titlepage"><div><div><h3 class="title"><a name="npcap-internals-references"></a>Further reading</h3></div></div></div>
<p>The structure of NPF and its filtering engine derive directly from
that of the BSD Packet Filter (BPF), so if you are interested in the
subject you can read the following papers:</p>
<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem"><p>S. McCanne and V. Jacobson, <a class="ulink" href="ftp://ftp.ee.lbl.gov/papers/bpf-usenix93.ps.Z" target="_top">The BSD Packet
Filter: A New Architecture for User-level Packet Capture</a>.
Proceedings of the 1993 Winter USENIX Technical Conference (San
Diego, CA, Jan. 1993), USENIX.</p>
</li><li class="listitem"><p>A. Begel, S. McCanne, and S.L. Graham, <a class="ulink" href="http://www.acm.org/pubs/articles/proceedings/comm/316188/p123-begel/p123-begel.pdf" target="_top">BPF+: Exploiting
Global Data-flow Optimization in a Generalized Packet Filter
Architecture</a>. Proceedings of ACM SIGCOMM '99, pages 123-134,
Conference on Applications, Technologies, Architectures, and
Protocols for Computer Communications, August 30 - September 3, 1999,
Cambridge, USA.</p>
</li></ul></div>
</div>
</div><div class="navfooter"><hr><table width="100%" summary="Navigation footer"><tr><td width="40%" align="left"><a accesskey="p" href="npcap-tutorial.html">Prev</a> </td><td width="20%" align="center"> </td><td width="40%" align="right"> </td></tr><tr><td width="40%" align="left" valign="top">Npcap Development Tutorial </td><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td><td width="40%" align="right" valign="top"> </td></tr></table></div></body></html>