Vincent Tech Blog

Friday, October 13, 2017

Import existing source code to GitHub

https://stackoverflow.com/questions/4658606/import-existing-source-code-to-github

If you've got local source code that you want to add to a new remote Git repository without 'cloning' the remote first, do the following (I often do this: create your empty remote repository on Bitbucket/GitHub, then push up your source):
  1. Create the remote repository and get the URL, such as git@github.com:youruser/somename.git or https://github.com/youruser/somename.git
    If your local Git repo is already set up, skip steps 2 and 3

  2. Locally, at the root directory of your source, run git init
    2a. If you initialized the repository with a .gitignore and a README.md, do a git pull {url from step 1} first to make sure you don't commit files you meant to ignore ;)
  3. Locally, add and commit what you want in your initial commit (for everything: git add . then git commit -m 'initial commit comment')

  4. To attach your remote repo under the name 'origin' (as cloning would do):
    git remote add origin [URL from step 1]
  5. Execute git pull origin master to pull the remote branch so the two are in sync.
  6. To push up your master branch (change master to something else for a different branch):
    git push origin master

Friday, June 30, 2017

KVM vhost performance tuning to enhance ADC VE throughput

As global telecom companies adopt ADCs (Application Delivery Controllers, i.e. load balancers) in their OpenStack environments, achieving high throughput for ADC VE instances becomes important. Unlike an ADC hardware appliance, an ADC VE runs on commodity server hardware, typically with Red Hat or Ubuntu as the host OS and KVM as the hypervisor, so it pays to understand the underlying technologies and tune the hypervisor environment for best performance. Here is some hands-on experience tuning KVM vhost to achieve ideal throughput.

Lab equipment

Dell Poweredge R710 (16 cores) + Intel 82599 10G NIC + 72G RAM
Dell Poweredge R210 (8 cores) + Intel 82599 10G NIC + 32G RAM

Network setup:

 /external vlan|<------------->|eth1 <--->iperf client \
| Dell R710(ADC VE)           Dell R210               |
 \Internal vlan|<------------->|eth2 <--->iperf server /


Note: since I only have two physical servers, and the Dell R710 hosts the ADC VE, the Dell R210 must act as both iperf client and iperf server. I used Linux network namespaces to isolate the IP and routing tables, so that an iperf client packet egresses physical NIC eth1, is forwarded by the BIG-IP VE, and comes back into physical NIC eth2 to be processed by the iperf server. Here is a simple bash script to set up the network namespaces:



#!/usr/bin/env bash

set -x

NS1="ns1"
NS2="ns2"
DEV1="em1"
DEV2="em2"
IP1="10.1.72.62"
IP2="10.2.72.62"
NET1="10.1.0.0/16"
NET2="10.2.0.0/16"
GW1="10.1.72.1"
GW2="10.2.72.1"

if [[ $EUID -ne 0 ]]; then
    echo "You must be root to run this script"
    exit 1
fi

# Remove namespace if it exists.
ip netns del $NS1 &>/dev/null
ip netns del $NS2 &>/dev/null

# Create namespace
ip netns add $NS1
ip netns add $NS2

#add physical interface to namespace
ip link set dev $DEV1  netns $NS1
ip link set dev $DEV2  netns $NS2




# Setup namespace IP .
ip netns exec $NS1 ip addr add $IP1/16 dev $DEV1
ip netns exec $NS1 ip link set $DEV1 up
ip netns exec $NS1 ip link set lo up
ip netns exec $NS1 ip route add $NET2 via $GW1 dev $DEV1

ip netns exec $NS2 ip addr add $IP2/16 dev $DEV2
ip netns exec $NS2 ip link set $DEV2 up
ip netns exec $NS2 ip link set lo up
ip netns exec $NS2 ip route add $NET1 via $GW2 dev $DEV2

# Enable IP-forwarding.
echo 1 > /proc/sys/net/ipv4/ip_forward

# Get into namespace
#ip netns exec ${NS} /bin/bash --rcfile <(echo "PS1=\"${NS}> \"")
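After running the script, a quick sanity check (device names and IPs are the variables defined above):

```shell
# Both namespaces should exist, each owning its physical NIC:
ip netns list                          # expect: ns1, ns2
ip netns exec ns1 ip addr show em1     # em1 now lives in ns1 with 10.1.72.62/16

# End-to-end path check: the ping leaves em1 in ns1, gets forwarded by the
# ADC VE, and returns through em2 into ns2:
ip netns exec ns1 ping -c 3 10.2.72.62
```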

On the ADC VE I set up a simple forwarding virtual server that just forwards packets. This is the default throughput output without any performance tuning:

ns1> /home/dpdk/iperf -c 10.2.72.62 -l 1024 -P 64
...............
................
[ 25]  0.0-10.2 sec  46.0 MBytes  37.9 Mbits/sec
[SUM]  0.0-10.2 sec  3.22 GBytes  2.72 Gbits/sec <======= 2.72Gbits

Here is what the top output of the vhost dataplane kernel threads for the ADC VE looks like while passing traffic:

 PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                 P
23329 libvirt+  20   0 35.366g 0.030t  23396 S 262.5 43.4 153:31.10 qemu-system-x86_64 -enable-kvm -name bigip-virtio -S -machine pc-i440fx-trusty,accel=kvm,usb=off -m 31357 -realtime m+  1
23332 root      20   0       0      0      0 R  17.9  0.0   1:35.98 [vhost-23329]                                                                                                           1
23336 root      20   0       0      0      0 R  17.9  0.0   1:18.20 [vhost-23329]


As you can see, only two vhost kernel threads show up, each at 17.9% CPU, which indicates vhost is not being scheduled enough to pass data traffic for the guest machine. I have defined 4 tx/rx queue pairs for each macvtap on the physical 10G interfaces, and two macvtaps are assigned to the ADC VE for the external and internal VLANs; ideally, eight vhost kernel threads should show up in top, fully scheduled to pass traffic.
For example, the interface XML dump looks like this:

<interface type='bridge'>
      <mac address='52:54:00:55:47:05'/>
      <source bridge='br0'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='direct'>
      <mac address='52:54:00:f9:98:e9'/>
      <source dev='enp4s0f0' mode='vepa'/>
      <target dev='macvtap2'/>
      <model type='virtio'/>
      <driver name='vhost' queues='4'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <interface type='direct'>
      <mac address='52:54:00:4b:06:c4'/>
      <source dev='enp4s0f1' mode='vepa'/>
      <target dev='macvtap3'/>
      <model type='virtio'/>
      <driver name='vhost' queues='4'/>
      <alias name='net2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </interface>
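Given the two vhost-backed interfaces with 4 queue pairs each, here is one way to count the vhost worker threads the kernel actually spawned (the qemu process name matches my domain, bigip-virtio; adjust for yours):

```shell
# vhost worker threads are kernel threads named vhost-<qemu pid>.
QEMU_PID=$(pgrep -f 'qemu.*bigip-virtio' | head -n1)
pgrep -c "vhost-${QEMU_PID}"                          # expect 8 (2 interfaces x 4 queues)
ps -eo pid,psr,pcpu,comm | grep "vhost-${QEMU_PID}"   # which cores they run on
```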

vCPU pinning:

root@Dell710:~# virsh vcpupin bigip-virtio
VCPU: CPU Affinity
----------------------------------
   0: 0
   1: 2
   2: 4
   3: 6
   4: 8
   5: 10
   6: 12
   7: 14
   8: 2
   9: 4

vhost (emulator) CPU pinning:

~#  virsh emulatorpin bigip-virtio
emulator: CPU Affinity
----------------------------------
       *: 0,2,4,6,8,10,12,14

NUMA node:

# lscpu --parse=node,core,cpu
# The following is the parsable format, which can be fed to other
# programs. Each different item in every column has an unique ID
# starting from zero.
# Node,Core,CPU
0,0,0
1,1,1
0,2,2
1,3,3
0,4,4
1,5,5
0,6,6
1,7,7
0,0,8
1,1,9
0,2,10
1,3,11
0,4,12
1,5,13
0,6,14
1,7,15

So the odd-numbered CPUs are on NUMA node 1 and the even-numbered CPUs are on NUMA node 0. The guest vCPUs are pinned to NUMA node 0 and the vhost/emulator threads are pinned to NUMA node 0 as well, which should be good. So why the low throughput?

Let's try assigning the vhost threads to the NUMA node 1 CPUs:

# virsh emulatorpin bigip-virtio 1,3,5,7,9,11,13,15


#  virsh emulatorpin bigip-virtio
emulator: CPU Affinity
----------------------------------
       *: 1,3,5,7,9,11,13,15
Now run the test again:
[SUM]  0.0-10.1 sec  10.1 GBytes  8.58 Gbits/sec <=========8.58G, big difference!!!


  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND                                                                                                                  P
23344 libvirt+  20   0 35.350g 0.030t  23396 R 99.9 43.4  15:40.95 qemu-system-x86_64 -enable-kvm -name bigip-virtio -S -machine pc-i440fx-trusty,accel=kvm,usb=off -m 31357 -realtime ml+  6
23341 libvirt+  20   0 35.350g 0.030t  23396 R 99.9 43.4  17:39.58 qemu-system-x86_64 -enable-kvm -name bigip-virtio -S -machine pc-i440fx-trusty,accel=kvm,usb=off -m 31357 -realtime ml+  0
23346 libvirt+  20   0 35.350g 0.030t  23396 R 99.9 43.4  15:23.76 qemu-system-x86_64 -enable-kvm -name bigip-virtio -S -machine pc-i440fx-trusty,accel=kvm,usb=off -m 31357 -realtime ml+ 10
23347 libvirt+  20   0 35.350g 0.030t  23396 R 99.9 43.4  15:29.99 qemu-system-x86_64 -enable-kvm -name bigip-virtio -S -machine pc-i440fx-trusty,accel=kvm,usb=off -m 31357 -realtime ml+ 12
23345 libvirt+  20   0 35.350g 0.030t  23396 R 99.7 43.4  15:29.29 qemu-system-x86_64 -enable-kvm -name bigip-virtio -S -machine pc-i440fx-trusty,accel=kvm,usb=off -m 31357 -realtime ml+  8
23348 libvirt+  20   0 35.350g 0.030t  23396 R 99.7 43.4  15:42.95 qemu-system-x86_64 -enable-kvm -name bigip-virtio -S -machine pc-i440fx-trusty,accel=kvm,usb=off -m 31357 -realtime ml+ 14
23342 libvirt+  20   0 35.350g 0.030t  23396 R 98.7 43.4  14:58.66 qemu-system-x86_64 -enable-kvm -name bigip-virtio -S -machine pc-i440fx-trusty,accel=kvm,usb=off -m 31357 -realtime ml+  2
23343 libvirt+  20   0 35.350g 0.030t  23396 R 96.0 43.4  14:58.54 qemu-system-x86_64 -enable-kvm -name bigip-virtio -S -machine pc-i440fx-trusty,accel=kvm,usb=off -m 31357 -realtime ml+  4
23332 root      20   0       0      0      0 R 40.2  0.0   1:12.12 [vhost-23329]                                                                                                           15
23333 root      20   0       0      0      0 R 40.2  0.0   1:05.58 [vhost-23329]                                                                                                           13
23335 root      20   0       0      0      0 R 40.2  0.0   1:04.98 [vhost-23329]                                                                                                            3
23334 root      20   0       0      0      0 R 39.2  0.0   1:04.52 [vhost-23329]                                                                                                            1
23337 root      20   0       0      0      0 R 32.2  0.0   0:47.66 [vhost-23329]                                                                                                           11
23339 root      20   0       0      0      0 R 31.6  0.0   0:50.47 [vhost-23329]                                                                                                           15
23336 root      20   0       0      0      0 S 31.2  0.0   0:56.08 [vhost-23329]                                                                                                            5
23338 root      20   0       0      0      0 R 30.2  0.0   0:49.52 [vhost-23329] 


This tells us that something in the host kernel is using the NUMA node 0 CPUs, so the 8 vhost threads cannot get scheduled enough to process the data traffic. My theory is that the physical NIC IRQs are spread across the even cores on NUMA node 0 and softirq load runs high there, so the vhost kernel threads did not get enough time on the even cores; pinning vhost to the idle cores on NUMA node 1 gives it enough CPU cycles to process the data packets.
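One way to check this theory is the per-CPU NET_RX softirq counters in /proc/softirqs; if the NIC IRQs really land on NUMA node 0, the even-numbered CPUs should show much higher counts:

```shell
# Print per-CPU NET_RX softirq counts; compare even (node 0) vs odd (node 1) CPUs.
awk '/NET_RX/ { for (i = 2; i <= NF; i++) printf "CPU%d %s\n", i - 2, $i }' /proc/softirqs
```

The IRQ-to-core mapping itself can be read from /proc/interrupts (look for the rows matching the NIC, e.g. enp4s0f0/enp4s0f1) and /proc/irq/*/smp_affinity_list.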
      
 




Thursday, March 9, 2017

Intrusion Prevention System - Snort

A typical process flow:


 221         /* Not a completely ideal place for this since any entries added on the
 222          * PacketCallback -> ProcessPacket -> Preprocess trail will get
 223          * obliterated - right now there isn't anything adding entries there.
 224          * Really need it here for stream5 clean exit, since all of the
 225          * flushed, reassembled packets are going to be injected directly into
 226          * this function and there may be enough that the obfuscation entry
 227          * table will overflow if we don't reset it.  Putting it here does
 228          * have the advantage of fewer entries per logging cycle */




SnortMain -> PacketLoop -> PacketCallback -> ProcessPacket -> Preprocess ->Detect ->fpEvalPacket ->fpEvalHeaderTcp

Wednesday, May 18, 2016

Patch to make mTCP run in a VM environment and send packets between the VM and a physical server across switches

When running mTCP in a VM environment such as VMware ESXi or KVM, the source MAC is zero (see https://github.com/eunyoung14/mtcp/issues/51), which can result in packets being dropped. The following patch avoids having to add a dedicated DPDK PMD driver for mTCP, saving the porting effort.




 diff --git a/mtcp/src/io_module.c b/mtcp/src/io_module.c
 index ad3e01d..83e0893 100644
 --- a/mtcp/src/io_module.c
 +++ b/mtcp/src/io_module.c
 @@ -63,6 +63,22 @@ GetNumQueues()
     return queue_cnt;
  }
  /*----------------------------------------------------------------------------*/
 +
 +static int GetPortIndex(char *dev_name)
 +{
 +    char *p = dev_name;
 +    long val = -1;
 +    while (*p) { // While there are more characters to process...
 +        if (isdigit(*p)) { // Upon finding a digit, ...
 +            val = strtol(p, &p, 10); // Read a number, ...
 +        } else {
 +            p++;
 +        }
 +    }
 +    return (int)val;
 +}
 +
 +
  int
  SetInterfaceInfo(char* dev_name_list)
  {
 @@ -243,9 +259,10 @@ SetInterfaceInfo(char* dev_name_list)
                     CONFIG.eths[eidx].ip_addr = *(uint32_t *)&sin;
                 }
 -                if (ioctl(sock, SIOCGIFHWADDR, &ifr) == 0 ) {
 +                if(strstr(iter_if->ifa_name, "dpdk") != NULL) {
 +                    ret = GetPortIndex(iter_if->ifa_name);
                     for (j = 0; j < ETH_ALEN; j ++) {
 -                        CONFIG.eths[eidx].haddr[j] = ifr.ifr_addr.sa_data[j];
 +                        CONFIG.eths[eidx].haddr[j] = ports_eth_addr[ret].addr_bytes[j];
                     }
                 }
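To confirm the patch on the wire, capture a few frames on the receiving side; with -e, tcpdump prints the link-level header, and the source MAC should now match the DPDK port's address instead of 00:00:00:00:00:00 (the interface name here is hypothetical):

```shell
# -e prints link-level headers (src MAC > dst MAC); -n skips name resolution.
tcpdump -e -n -i eth1 -c 5 tcp
```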

DPDK pktgen to generate SYN flood

A hack patch to make pktgen do a SYN flood:



diff --git a/app/cmd-functions.c b/app/cmd-functions.c
 index b2fda7c..c348e73 100644
 --- a/app/cmd-functions.c
 +++ b/app/cmd-functions.c
 @@ -303,6 +303,8 @@ const char *help_info[] = {
     "pkt.size max <portlist> value   - Set pkt size maximum address",
     "pkt.size inc <portlist> value   - Set pkt size increment address",
     "range <portlist> <state>      - Enable or Disable the given portlist for sending a range of packets",
 +    "range.proto <portlist> [tcp|udp|icmp]",
 +    "                  - Set ip proto for sending a range of packets",
     "",
     "<<PageBreak>>",
     "    Flags: P---------------- - Promiscuous mode enabled",
 diff --git a/app/pktgen-tcp.c b/app/pktgen-tcp.c
 index 3c8a853..9d12a88 100644
 --- a/app/pktgen-tcp.c
 +++ b/app/pktgen-tcp.c
 @@ -69,6 +69,26 @@
  #include "pktgen-tcp.h"
 +uint64_t xor_seed[ 2 ];
 +
 +static inline uint64_t
 +xor_next(void) {
 +    uint64_t s1 = xor_seed[ 0 ];
 +    const uint64_t s0 = xor_seed[ 1 ];
 +
 +    xor_seed[ 0 ] = s0;
 +    s1 ^= s1 << 23;                 /* a */
 +    return ( xor_seed[ 1 ] = ( s1 ^ s0 ^ ( s1 >> 17 ) ^ ( s0 >> 26 ) ) ) +
 +        s0;               /* b, c */
 +}
 +
 +static __inline__ uint32_t
 +pktgen_default_rnd_func(void)
 +{
 +    return xor_next();
 +}
 +
 +
  /**************************************************************************//**
  *
  * pktgen_tcp_hdr_ctor - TCP header constructor routine.
 @@ -100,10 +120,10 @@ pktgen_tcp_hdr_ctor(pkt_seq_t *pkt, tcpip_t *tip, int type __rte_unused)
     tip->tcp.sport   = htons(pkt->sport);
     tip->tcp.dport   = htons(pkt->dport);
 -    tip->tcp.seq    = htonl(DEFAULT_PKT_NUMBER);
 -    tip->tcp.ack    = htonl(DEFAULT_ACK_NUMBER);
 +    tip->tcp.seq    = htonl(pktgen_default_rnd_func());
 +    tip->tcp.ack    = 0;
     tip->tcp.offset   = ((sizeof(tcpHdr_t) / sizeof(uint32_t)) << 4);   /* Offset in words */
 -    tip->tcp.flags   = ACK_FLAG;                     /* ACK */
 +    tip->tcp.flags   = SYN_FLAG;                     /* SYN */
     tip->tcp.window   = htons(DEFAULT_WND_SIZE);
     tip->tcp.urgent   = 0;


root@pktgen-template:/home/admin/pktgen-dpdk/dpdk/examples/pktgen-dpdk# ./app/app/x86_64-native-linuxapp-gcc/pktgen -c ff   -- -P -m "[0:0-7].0 "
 Copyright (c) <2010-2016>, Intel Corporation. All rights reserved.
   Pktgen created by: Keith Wiles -- >>> Powered by Intel® DPDK <<<

Lua 5.3.2  Copyright (C) 1994-2015 Lua.org, PUC-Rio
>>> Packet Burst 32, RX Desc 512, TX Desc 512, mbufs/port 4096, mbuf cache 512

=== port to lcore mapping table (# lcores 8) ===
   lcore:     0     1     2     3     4     5     6     7
port   0:  D: T  0: 1  0: 1  0: 1  0: 1  0: 1  0: 1  0: 1 =  1: 8
Total   :  1: 1  0: 1  0: 1  0: 1  0: 1  0: 1  0: 1  0: 1
    Display and Timer on lcore 0, rx:tx counts per port/lcore

Configuring 1 ports, MBUF Size 1920, MBUF Cache Size 512
Lcore:
    0, RX-TX
                RX( 1): ( 0: 0)
                TX( 1): ( 0: 0)
    1, TX-Only
                TX( 1): ( 0: 1)
    2, TX-Only
                TX( 1): ( 0: 2)
    3, TX-Only
                TX( 1): ( 0: 3)
    4, TX-Only
                TX( 1): ( 0: 4)
    5, TX-Only
                TX( 1): ( 0: 5)
    6, TX-Only
                TX( 1): ( 0: 6)
    7, TX-Only
                TX( 1): ( 0: 7)

Port :
    0, nb_lcores  8, private 0x8f0690, lcores:  0  1  2  3  4  5  6  7



** Dev Info (rte_vmxnet3_pmd:0) **
   max_vfs        :   0 min_rx_bufsize    :1646 max_rx_pktlen : 16384 max_rx_queues         :  16 max_tx_queues:   8
   max_mac_addrs  :   1 max_hash_mac_addrs:   0 max_vmdq_pools:     0
   rx_offload_capa:  13 tx_offload_capa   :  45 reta_size     :     0 flow_type_rss_offloads:0000000000000514
   vmdq_queue_base:   0 vmdq_queue_num    :   0 vmdq_pool_base:     0
** RX Conf **
   pthreash       :   0 hthresh          :   0 wthresh        :     0
   Free Thresh    :   0 Drop Enable      :   0 Deferred Start :     0
** TX Conf **
   pthreash       :   0 hthresh          :   0 wthresh        :     0
   Free Thresh    :   0 RS Thresh        :   0 Deferred Start :     0 TXQ Flags:00000200

Initialize Port 0 -- TxQ 8, RxQ 1,  Src MAC 00:50:56:86:10:76
Pktgen > load tcp.txt
Pktgen> start 0
Pktgen> stop 0
root@pktgen-template:/home/admin/pktgen-dpdk/dpdk/examples/pktgen-dpdk# cat tcp.txt
#
# Pktgen - Ver: 2.9.17 (DPDK 16.04.0-rc2)
# Copyright (c) <2010-2016>, Intel Corporation. All rights reserved., Powered by Intel® DPDK

# Command line arguments: (DPDK args are defaults)
# ./app/app/x86_64-native-linuxapp-gcc/pktgen -c ff -n 3 -m 512 --proc-type primary -- -P -m [0:1-7].0

#######################################################################
# Pktgen Configuration script information:
#   GUI socket is Not Enabled
#   Flags 00040004
#   Number of ports: 1
#   Number ports per page: 4
#   Number descriptors: RX 512 TX: 512
#   Promiscuous mode is Enabled


#######################################################################
# Global configuration:
geometry 132x44
mac_from_arp disable

######################### Port  0 ##################################
#
# Port:  0, Burst: 32, Rate:100%, Flags:c0000010, TX Count:Forever
#           SeqCnt:0, Prime:1 VLAN ID:0001, Link:
#
# Set up the primary port information:
set 0 count 0
set 0 size 64
set 0 rate 100
set 0 burst 32
set 0 sport 1234
set 0 dport 5678
set 0 prime 1
type ipv4 0
proto tcp 0
set ip dst 0 10.1.72.17
#set ip dst 0 10.1.72.8
set ip src 0 10.1.72.154/24
set mac 0 00:23:E9:63:5B:83
#set mac 0 00:50:56:86:84:90
vlanid 0 1

pattern 0 zero
user.pattern 0 0123456789abcdef

latency 0 disable
mpls 0 disable
mpls_entry 0 0
qinq 0 disable
qinqids 0 0 0
gre 0 disable
gre_eth 0 disable
gre_key 0 0
#
# Port flag values:
icmp.echo 0 disable
pcap 0 disable
range 0 enable
process 0 disable
capture 0 disable
rxtap 0 disable
txtap 0 disable
vlan 0 disable

#
# Range packet information:
src.mac start 0 00:50:56:86:10:76
src.mac min 0 00:00:00:00:00:00
src.mac max 0 00:00:00:00:00:00
src.mac inc 0 00:00:00:00:00:00
dst.mac start 0 00:23:E9:63:5B:83
#dst.mac start 0 00:50:56:86:84:90
dst.mac min 0 00:00:00:00:00:00
dst.mac max 0 00:00:00:00:00:00
dst.mac inc 0 00:00:00:00:00:00

src.ip start 0 10.1.72.154
src.ip min 0 10.1.72.154
src.ip max 0 10.1.72.254
src.ip inc 0 0.0.0.1

dst.ip start 0 10.1.72.17
dst.ip min 0 10.1.72.17
dst.ip max 0 10.1.72.17
dst.ip inc 0 0.0.0.1

#dst.ip start 0 10.1.72.8
#dst.ip min 0 10.1.72.8
#dst.ip max 0 10.1.72.8
#dst.ip inc 0 0.0.0.1

src.port start 0 1025
src.port min 0 1025
src.port max 0 65512
src.port inc 0 1

dst.port start 0 80
dst.port min 0 0
dst.port max 0 0
dst.port inc 0 0

vlan.id start 0 1
vlan.id min 0 1
vlan.id max 0 4095
vlan.id inc 0 0

pkt.size start 0 64
pkt.size min 0 64
pkt.size max 0 1518
pkt.size inc 0 0

#
# Set up the sequence data for the port.
set 0 seqCnt 0

################################ Done #################################

Wednesday, March 23, 2016

Patch draft to make mTCP+DPDK work in vlan tagged network

Here is a draft patch idea to make mTCP + DPDK work in a VLAN-tagged environment. The next step is to figure out how to run mTCP + DPDK in a VLAN-tagged VMware ESXi environment; it would be great to get mTCP + DPDK running in a VMware ESXi VM so the VM can easily be cloned for anyone who needs it.

 diff --git a/mtcp/src/dpdk_module.c b/mtcp/src/dpdk_module.c  

 index 33d349e..3c08e25 100644
 --- a/mtcp/src/dpdk_module.c
 +++ b/mtcp/src/dpdk_module.c
 @@ -66,7 +66,7 @@ static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
  /* packet memory pools for storing packet bufs */
  static struct rte_mempool *pktmbuf_pool[MAX_CPUS] = {NULL};
 -//#define DEBUG                1
 +#define DEBUG             1
  #ifdef DEBUG
  /* ethernet addresses of ports */
  static struct ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
 @@ -79,7 +79,8 @@ static struct rte_eth_conf port_conf = {
         .split_hdr_size =    0,
         .header_split  =    0, /**< Header Split disabled */
         .hw_ip_checksum =    1, /**< IP checksum offload enabled */
 -        .hw_vlan_filter =    0, /**< VLAN filtering disabled */
 +        .hw_vlan_filter =    1, /**< VLAN filtering enabled */
 +        .hw_vlan_strip =    1, /**< VLAN strip enabled */
         .jumbo_frame  =    0, /**< Jumbo Frame Support disabled */
         .hw_strip_crc  =    1, /**< CRC stripped by hardware */
     },
 @@ -127,6 +128,7 @@ static const struct rte_eth_txconf tx_conf = {
     .txq_flags =          0x0,
  };
 +
  struct mbuf_table {
     unsigned len; /* length of queued packets */
     struct rte_mbuf *m_table[MAX_PKT_BURST];
 @@ -266,6 +268,8 @@ dpdk_send_pkts(struct mtcp_thread_context *ctxt, int nif)
                       ctxt->cpu, i, nif);
                 exit(EXIT_FAILURE);
             }
 +            dpc->wmbufs[nif].m_table[i]->ol_flags = PKT_TX_VLAN_PKT;
 +            dpc->wmbufs[nif].m_table[i]->vlan_tci = 4094;
         }
         /* reset the len of mbufs var after flushing of packets */
         dpc->wmbufs[nif].len = 0;
 @@ -534,6 +538,12 @@ dpdk_load_module(void)
             if (ret < 0)
                 rte_exit(EXIT_FAILURE, "Cannot configure device: err=%d, port=%u\n",
                      ret, (unsigned) portid);
 +
 +            ret = rte_eth_dev_vlan_filter(portid, 4094, 1);
 +
 +            if (ret < 0)
 +                rte_exit(EXIT_FAILURE, "Cannot configure device: err=%d, port=%u\n",
 +                    ret, (unsigned) portid);
             /* init one RX queue per CPU */
             fflush(stdout);

Friday, March 18, 2016

Patch to make lighttpd run properly on multiple cores with mTCP, together with the configuration

diff --git a/apps/lighttpd-1.4.32/src/server.c b/apps/lighttpd-1.4.32/src/server.c
index 7c76fd7..f0dde58 100644
--- a/apps/lighttpd-1.4.32/src/server.c
+++ b/apps/lighttpd-1.4.32/src/server.c
@@ -1213,7 +1213,8 @@ int
 main(int argc, char **argv) {
 #ifdef MULTI_THREADED
        server **srv_states = NULL;
-       char *conf_file = NULL;
+       //char *conf_file = NULL;
+       char *conf_file = "/etc/mtcp/config/m-lighttpd.conf";
 #ifdef USE_MTCP
        struct mtcp_conf mcfg;
 #endif
@@ -1594,7 +1595,7 @@ main(int argc, char **argv) {
        mcfg.num_cores = cpus;
        mtcp_setconf(&mcfg);
        /* initialize the mtcp context */
-       if (mtcp_init("mtcp.conf")) {
+       if (mtcp_init("/etc/mtcp/config/lighttpd-mtcp.conf")) {
                fprintf(stderr, "Failed to initialize mtcp\n");
                goto clean_up;
        }

diff --git a/mtcp/src/config.c b/mtcp/src/config.c
index c4faea5..b4e24d0 100644
--- a/mtcp/src/config.c
+++ b/mtcp/src/config.c
@@ -23,8 +23,8 @@
 #define MAX_OPTLINE_LEN 1024
 #define ALL_STRING "all"

-static const char *route_file = "config/route.conf";
-static const char *arp_file = "config/arp.conf";
+static const char *route_file = "/etc/mtcp/config/route.conf";
+static const char *arp_file = "/etc/mtcp/config/arp.conf";


the configuration directory looks like:

root@pktgen:/home/pktgen/mtcp# ls -l /etc/mtcp/config/
total 48
-rw-r--r-- 1 root root   530 Mar  4 14:18 arp.conf
-rw-r--r-- 1 root root  1360 Nov 13 10:34 brute-shake.conf
drwxr-xr-x 2 root root  4096 Mar  4 14:43 conf.d
-rw-r--r-- 1 root root  1370 Nov 13 10:32 epwget.conf
-rw-r--r-- 1 root root  1237 Mar  4 14:15 lighttpd-mtcp.conf
-rw-r--r-- 1 root root 11857 Mar  4 14:40 m-lighttpd.conf
-rw-r--r-- 1 root root  3235 Mar  4 14:42 modules.conf
-rw-r--r-- 1 root root   646 Nov 12 20:18 mtcp.conf
-rw-r--r-- 1 root root   352 Mar  4 14:19 route.conf
-rw-r--r-- 1 root root  1366 Nov 13 10:38 synflood.conf


top output:


top - 14:14:15 up 18 days, 35 min,  4 users,  load average: 7.98, 5.51, 2.53
Threads: 304 total,   9 running, 295 sleeping,   0 stopped,   0 zombie

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND                                                                                                                                                                  P
15707 root      20   0 14.071g 0.010t   9680 R 99.9 14.9   5:44.92 lighttpd -n 8 -f /etc/mtcp/config/m-lighttpd.conf                                                                                                                        1
15730 root      20   0 14.071g 0.010t   9680 R 99.9 14.9   5:44.93 lighttpd -n 8 -f /etc/mtcp/config/m-lighttpd.conf                                                                                                                        0
15708 root      20   0 14.071g 0.010t   9680 R 99.7 14.9   5:44.95 lighttpd -n 8 -f /etc/mtcp/config/m-lighttpd.conf                                                                                                                        2
15709 root      20   0 14.071g 0.010t   9680 R 99.7 14.9   5:45.08 lighttpd -n 8 -f /etc/mtcp/config/m-lighttpd.conf                                                                                                                        3
15710 root      20   0 14.071g 0.010t   9680 R 99.7 14.9   5:44.99 lighttpd -n 8 -f /etc/mtcp/config/m-lighttpd.conf                                                                                                                        4
15711 root      20   0 14.071g 0.010t   9680 R 99.7 14.9   5:44.94 lighttpd -n 8 -f /etc/mtcp/config/m-lighttpd.conf                                                                                                                        5
15712 root      20   0 14.071g 0.010t   9680 R 99.7 14.9   5:44.89 lighttpd -n 8 -f /etc/mtcp/config/m-lighttpd.conf                                                                                                                        6
15713 root      20   0 14.071g 0.010t   9680 R 99.7 14.9   5:44.96 lighttpd -n 8 -f /etc/mtcp/config/m-lighttpd.conf                
