README
README.decnet

Here are a few quick points about DECnet support...

o iproute2 is the tool of choice for configuring the DECnet support for
  Linux. For many features, it is the only tool which can be used to
  configure them.

o No name resolution is available yet; all addresses must be
  entered numerically.

o Remember to set the hardware address of the interface using:

     ip link set ethX address xx:xx:xx:xx:xx:xx
     (where xx:xx:xx:xx:xx:xx is the MAC address for your DECnet node
      address)

  if your Ethernet card won't listen to more than one unicast
  MAC address at once. If the Linux DECnet stack doesn't talk to
  any other DECnet nodes, check this with tcpdump and, if it is
  a problem, change the MAC address (but do this _before_ starting
  any other network protocol on the interface).
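
  A DECnet node address maps onto an Ethernet MAC address by appending the
  16-bit value area*1024 + node, low byte first, to the prefix AA:00:04:00.
  For illustration only (the interface name eth0 and node address 1.2 are
  made-up examples, not recommended values), node 1.2 gives
  1*1024 + 2 = 1026 = 0x0402, so the command would be roughly:

     ip link set eth0 address aa:00:04:00:02:04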

o Whilst you can use ip addr add to add more than one DECnet address to an
  interface, don't expect addresses which are not the same as the
  kernel's node address to work properly with 2.4 kernels. This should
  be fine with 2.6 kernels, as the routing code has been extensively
  modified and improved.

o The DECnet support is currently self-contained. It does not depend on
  the libdnet library.

Steve Whitehouse <steve@chygwyn.com>


README.devel
Iproute2 development is closely tied to Linux kernel networking
development. Most new features require both a kernel and a utility
component.

Please submit both to the Linux networking mailing list
<netdev@vger.kernel.org>

The current source is in the git repository:
git://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git

The master branch contains the source corresponding to the current
code in the mainline Linux kernel (i.e. it follows Linus). The net-next
branch is a temporary branch that tracks the code intended for the
next release; it corresponds to the networking development branch of
the kernel.
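
To work against the development code, something along these lines should
do (the checkout directory name is simply whatever git picks by default):

    git clone git://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git
    cd iproute2
    git checkout net-next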


README.distribution
I. About the distribution tables

The table used for "synthesizing" the distribution is essentially a scaled,
translated inverse of the cumulative distribution function.

Here's how to think about it: Let F() be the cumulative distribution
function for a probability distribution X. We'll assume we've scaled
things so that X has mean 0 and standard deviation 1, though that's not
so important here. Then:

	F(x) = P(X <= x) = \int_{-inf}^x f

where f is the probability density function.

F is monotonically increasing, so it has an inverse function G, whose
domain is 0 to 1. Here, G(t) = the x such that P(X <= x) = t. (In
general, G may have singularities if X has point masses, i.e., points x
such that P(X = x) > 0.)

Now we create a tabular representation of G as follows: Choose some table
size N, and for the ith entry, put in G(i/N). Let's call this table T.

The claim now is that I can create a (discrete) random variable Y whose
distribution has the same approximate "shape" as X, simply by letting
Y = T(U), where U is a discrete uniform random variable with range 1 to N.
To see this, it's enough to show that Y's cumulative distribution function
(let's call it H) is a discrete approximation to F. But

	H(x) = P(Y <= x)
	     = (# of entries in T <= x) / N   -- as Y chosen uniformly from T
	     = i/N, where i is the largest integer such that G(i/N) <= x
	     = i/N, where i is the largest integer such that i/N <= F(x)
	            -- since G and F are inverse functions (and F is
	               increasing)
	     = floor(N*F(x))/N

as desired.
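
As a small sanity check on the claim (the numbers are purely illustrative):
with N = 4096, the draw U = 1024 yields Y = T(1024) = G(1024/4096) = G(0.25),
the first quartile of X. Since 1024 of the 4096 equally likely values of U
are <= 1024, P(Y <= G(0.25)) is approximately 0.25, which matches
floor(4096 * F(G(0.25)))/4096 = floor(4096 * 0.25)/4096 = 0.25.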

II. How to create distribution tables (in theory)

How can we create this table in practice? In some cases, F may have a
simple expression which allows evaluating its inverse directly. The
Pareto distribution is one example of this. In other cases, and
especially for matching an experimentally observed distribution, it's
easiest simply to create a table for F and "invert" it. Here, we give
a concrete example, namely how the new "experimental" distribution was
created.

1. Collect enough data points to characterize the distribution. Here, I
collected 25,000 "ping" roundtrip times to a "distant" point (time.nist.gov).
That's far more data than is really necessary, but it was fairly painless to
collect it, so...

2. Normalize the data so that it has mean 0 and standard deviation 1, i.e.
replace each value x with (x - mu)/sigma, where mu and sigma are the sample
mean and standard deviation.

3. Determine the cumulative distribution. The code I wrote creates a table
covering the range -10 to +10, with granularity .00005. Obviously, this
is absurdly over-precise, but since it's a one-time-only computation, I
figured it hardly mattered.

4. Invert the table: for each table entry F(x) = y, make the y*TABLESIZE-th
entry (TABLESIZE here is 4096) be x*TABLEFACTOR (TABLEFACTOR here is 8192).
This creates a table for the ("normalized") inverse of size TABLESIZE,
covering its domain 0 to 1 with granularity 1/TABLESIZE. Note that even
with the granularity used in creating the table for F, it's possible not
all the entries in the table for G will be filled in. So, make a pass
through the inverse's table, filling in any missing entries by linear
interpolation.
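
As a worked example of step 4 (with made-up numbers): if the table for F
records F(0.37) = 0.5, then entry floor(0.5 * 4096) = 2048 of the inverse
table gets the value round(0.37 * 8192) = 3031.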

III. How to create distribution tables (in practice)

If you want to do all this yourself, I've provided several tools to help:

1. maketable does steps 2-4 above, and then generates the appropriate
header file. So if you have your own time distribution, you can generate
the header simply by:

	maketable < time.values > header.h

2. As explained in the other README file, the somewhat sleazy way I have
of generating correlated values needs correction. You can generate your
own correction tables by compiling makesigtable and makemutable with
your header file. Check the Makefile to see how this is done.

3. Warning: maketable, makesigtable and especially makemutable do
enormous amounts of floating point arithmetic. Don't try running
these on an old 486. (NIST Net itself will run fine on such a
system, since in operation it just needs to do a few simple integer
calculations. But getting there takes some work.)

4. The tables produced are all normalized for mean 0 and standard
deviation 1. How do you know what values to use for real? Here, I've
provided a simple "stats" utility. Give it a series of floating point
values, and it will return their mean (mu), standard deviation (sigma),
and correlation coefficient (rho). You can then plug these values
directly into NIST Net.
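
   For instance, reusing the hypothetical time.values file from the
   maketable example above (and assuming stats, like maketable, reads the
   values from standard input):

	stats < time.values

   which prints mu, sigma and rho for the data set.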

README.iproute2+tc
iproute2+tc*

This is the first release of the Linux traffic control engine.


NOTES.
 * The csz scheduler is not operational at the moment, and will probably
   never be repaired but rather replaced with an h-pfq scheduler.
 * To use the "fw" classifier you will need the ipfwchains patch.
 * No manual is available. Ask me if you have problems (but first try to
   guess the answer yourself 8)).


Micro-manual: how to start it for the first time
------------------------------------------------

A. Attach CBQ to eth1:

tc qdisc add dev eth1 root handle 1: cbq bandwidth 10Mbit allot 1514 cell 8 \
   avpkt 1000 mpu 64

B. Add a root class:

tc class add dev eth1 parent 1:0 classid 1:1 cbq bandwidth 10Mbit rate 10Mbit \
   allot 1514 cell 8 weight 1Mbit prio 8 maxburst 20 avpkt 1000

C. Add a default interactive class:

tc class add dev eth1 parent 1:1 classid 1:2 cbq bandwidth 10Mbit rate 1Mbit \
   allot 1514 cell 8 weight 100Kbit prio 3 maxburst 20 avpkt 1000 split 1:0 \
   defmap c0

D. Add a default class:

tc class add dev eth1 parent 1:1 classid 1:3 cbq bandwidth 10Mbit rate 8Mbit \
   allot 1514 cell 8 weight 800Kbit prio 7 maxburst 20 avpkt 1000 split 1:0 \
   defmap 3f

etc. etc. etc. Well, that is enough to get started 8) The rest can be
guessed 8) Also look at a more elaborate example, ready to start rsvpd,
in rsvp/cbqinit.eth1.


Terminology and advice about setting CBQ parameters may be found in
Sally Floyd's papers.


Pairs X:Y are class handles; X:0 are qdisc handles.
weight should be proportional to rate for leaf classes
(I chose it ten times smaller, but that is not necessary).

defmap is a bitmap of the logical priorities served by this class.
(In the examples above, defmap c0 covers priorities 6 and 7, and
defmap 3f covers priorities 0-5.)

E. Other qdiscs are simpler. For example, let's attach TBF to class 1:2:

tc qdisc add dev eth1 parent 1:2 tbf rate 64Kbit buffer 5Kb/8 limit 10Kb

F. Look at all that we created:

tc qdisc ls dev eth1
tc class ls dev eth1

G. Install the "route" classifier on the cbq root and map destinations in
   realm 1 to class 1:2:

tc filter add dev eth1 parent 1:0 protocol ip prio 100 route to 1 classid 1:2

H. Assign routes to 10.11.12.0/24 to realm 1:

ip route add 10.11.12.0/24 dev eth1 via whatever realm 1

etc. The same thing can be done with rules; see the sketch below.
I have not yet tested ipchains, but they should work too.
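
(As a rough sketch of the rule-based variant, untested here and dependent
on your routing setup: the realms option of ip rule can tag traffic
selected by a rule, e.g.

   ip rule add to 10.11.12.0/24 table main realms 1

instead of attaching the realm to the route itself.)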

Setting up the rsvp and u32 classifiers is hairier.
If you read the RSVP specs, you will easily understand how the rsvp
classifier works. As for u32... here is an example:


#! /bin/sh

TC=/home/root/tc

# Set up the u32 classifier root on eth1's root qdisc (which is the cbq)
$TC filter add dev eth1 parent 1:0 prio 5 protocol ip u32

# Create a hash table of 256 slots with ID 1:
$TC filter add dev eth1 parent 1:0 prio 5 handle 1: u32 divisor 256

# Add to the 6th slot of hash table 1 a rule selecting tcp/telnet to
# 193.233.7.75; direct it to class 1:4 and let it fall back to best effort
# if the traffic violates TBF (32kbit, 5K)
$TC filter add dev eth1 parent 1:0 prio 5 u32 ht 1:6: \
	match ip dst 193.233.7.75 \
	match tcp dst 0x17 0xffff \
	flowid 1:4 \
	police rate 32kbit buffer 5kb/8 mpu 64 mtu 1514 index 1

# Add to the 1st slot of hash table 1 a rule selecting icmp to
# 193.233.7.75; direct it to class 1:4 and let it fall back to best effort
# if the traffic violates TBF (10kbit, 5K)
$TC filter add dev eth1 parent 1:0 prio 5 u32 ht 1:: \
	sample ip protocol 1 0xff \
	match ip dst 193.233.7.75 \
	flowid 1:4 \
	police rate 10kbit buffer 5kb/8 mpu 64 mtu 1514 index 2

# Look up the hash table if the frame is not fragmented;
# use the protocol as the hash key
$TC filter add dev eth1 parent 1:0 prio 5 handle ::1 u32 ht 800:: \
	match ip nofrag \
	offset mask 0x0F00 shift 6 \
	hashkey mask 0x00ff0000 at 8 \
	link 1:
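
# As in step F above, you can check what got installed; something like the
# following should list the filters attached to the cbq root:
$TC filter ls dev eth1 parent 1:0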

Alexey Kuznetsov
kuznet@ms2.inr.ac.ru

README.lnstat
lnstat - Linux networking statistics
(C) 2004 Harald Welte <laforge@gnumonks.org>
======================================================================

This tool is a generalized and more feature-complete replacement for the old
'rtstat' program.

In addition to routing cache statistics, it supports any kind of statistics
the Linux kernel exports via a file in /proc/net/stat. In a stock 2.6.9
kernel, these are:
	per-protocol neighbour cache statistics
		(ipv4, ipv6, atm, decnet)
	routing cache statistics
		(ipv4)
	connection tracking statistics
		(ipv4)
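
(Each of these files starts with a header line naming the fields, followed
by one row of hexadecimal counters per CPU; you can look at them directly,
e.g. with cat /proc/net/stat/arp_cache, but lnstat presents them far more
readably.)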

Please note that lnstat will adapt to any additional statistics that might
be added to the kernel at some later point.

I personally always like examples more than any reference documentation, so
I list the following examples. If somebody wants to write a manpage, feel
free to send me a patch :)

EXAMPLES:

In order to get a list of supported statistics files, you can run

	lnstat -d

It will display something like

	/proc/net/stat/arp_cache:
		 1: entries
		 2: allocs
		 3: destroys
		[...]
	/proc/net/stat/rt_cache:
		 1: entries
		 2: in_hit
		 3: in_slow_tot

You can now select the files/keys you are interested in by something like

	lnstat -k arp_cache:entries,rt_cache:in_hit,arp_cache:destroys

	arp_cach|rt_cache|arp_cach|
	 entries|  in_hit|destroys|
	       6|       6|       0|
	       6|       0|       0|
	       6|       2|       0|

You can specify the interval (e.g. 10 seconds) by:

	lnstat -i 10

You can specify to use only one particular statistics file:

	lnstat -f ip_conntrack

You can specify individual field widths:

	lnstat -k arp_cache:entries,rt_cache:entries -w 20,8

You can specify not to print a header at all:

	lnstat -s 0

You can specify to print a header only at the start of the program:

	lnstat -s 1

You can specify to print a header at the start and every 20 lines:

	lnstat -s 20

You can specify the number of samples you want to take (e.g. 5):

	lnstat -c 5


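These options can be combined. For example (an untested combination, but it
uses only the options described above), to take ten samples of the routing
cache statistics at two-second intervals, printing the header just once:

	lnstat -f rt_cache -i 2 -c 10 -s 1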