Wednesday, December 9, 2015

10 Million Concurrent Connections and Performance Test


Develop an open-source packet generator for BIGIP load/performance evaluation and troubleshooting using the technologies below:

DPDK - user-space drivers and libraries for fast packet processing; it can generate 10 Mpps, 10 Mcps.

mTCP - A Highly Scalable User-level TCP Stack for Multicore Systems.

MoonGen - generates raw packets for SYN/RST/ACK/UDP/ICMP flooding.


Background:


The Secret to 10 Million Concurrent Connections -The Kernel is the Problem, Not the Solution



Problem:


Simplified packet processing in Linux:

NIC RX/TX queues <--------->Ring buffers<---------->Driver<-------->Socket<--------->App

Real packet processing in Linux: [figure: Linux network data flow]

  System calls
  Context switching on blocking I/O
  Data Copying from kernel to user space
  Interrupt handling in kernel

Expense of sendto:

  sendto -  system call:  96ns
  sosend_dgram - lock sock_buff, alloc mbuf, copy in: 137ns
  udp_output - UDP header setup: 57ns
  ip_output - route lookup, ip header setup: 198ns
  ether_output - MAC lookup, MAC header setup: 162ns
  ixgbe_xmit - device programming: 220ns
Total: 950ns
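
(As a sanity check, the six itemized steps sum to roughly 870 ns; the balance of the quoted 950 ns total is per-packet overhead not itemized above.)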

Solution:


Packet processing with DPDK:

NIC RX/TX queues <------->Ring buffers<---------->DPDK<------------->App

  Processor affinity (separate cores)
  Huge pages (no swap, TLB)
  UIO (no copying from kernel)
  Polling (no interrupt overhead)
  Lockless synchronization (avoid waiting)
  Batched packet handling
  SSE, NUMA awareness

UIO for example:

Kernel space (UIO framework) <------>/dev/uioX<------>userspace epoll/mmap<-------->App 
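
To make this concrete, here is a minimal sketch of the DPDK receive path in C: a busy-polling loop that pulls packet bursts straight from the NIC with no system call, no interrupt, and no kernel-to-user copy. It assumes port 0 and RX queue 0 were already configured and started elsewhere (rte_eal_init(), rte_eth_dev_configure(), rte_eth_rx_queue_setup(), rte_eth_dev_start()); error handling is omitted.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Busy-poll RX loop (sketch): burst-reads packets directly from the NIC
 * ring in user space -- no interrupt, no syscall, no copy into kernel
 * socket buffers. */
static void rx_loop(void)
{
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t i, nb_rx;

        for (;;) {
                nb_rx = rte_eth_rx_burst(0 /* port */, 0 /* queue */,
                                         bufs, BURST_SIZE);
                for (i = 0; i < nb_rx; i++) {
                        /* packet data: rte_pktmbuf_mtod(bufs[i], void *) */
                        rte_pktmbuf_free(bufs[i]);
                }
        }
}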

Problem:

http://www.ndsl.kaist.edu/~kyoungsoo/papers/mtcp.pdf
Limitations of the kernel's TCP stack:
  Lack of connection locality
  Shared file descriptor space
  Inefficient per-packet processing
  System call overhead

Solution:

  Batching in packet I/O, TCP processing, and user applications (reduces system call overhead)
  Connection locality on multicore systems - handle the same connection on the same core to avoid cache pollution (solves connection locality)
  No descriptor sharing between mTCP threads
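
Below is a rough per-core sketch of what this buys the application: a familiar socket/epoll-style loop in which every call stays in user space on the thread's own core. API names follow mtcp/include/mtcp_api.h and mtcp_epoll.h in the repo; this is a sketch, not the epwget implementation, and connection setup and error handling are omitted.

#include <sys/socket.h>
#include <mtcp_api.h>
#include <mtcp_epoll.h>

/* Per-core client skeleton (sketch) -- assumes mtcp_init("epwget.conf")
 * already ran in main(). Each thread owns its flows end to end. */
static void client_loop(int core)
{
        struct mtcp_epoll_event ev, events[1024];
        mctx_t mctx;
        int ep, sock, i, n;

        mtcp_core_affinitize(core);        /* pin thread to its core */
        mctx = mtcp_create_context(core);  /* per-core mTCP context  */
        ep = mtcp_epoll_create(mctx, 1024);

        sock = mtcp_socket(mctx, AF_INET, SOCK_STREAM, 0);
        mtcp_setsock_nonblock(mctx, sock);
        /* ... mtcp_connect(mctx, sock, ...) to the target here ... */

        ev.events = MTCP_EPOLLIN;
        ev.data.sockid = sock;
        mtcp_epoll_ctl(mctx, ep, MTCP_EPOLL_CTL_ADD, sock, &ev);

        while ((n = mtcp_epoll_wait(mctx, ep, events, 1024, -1)) >= 0) {
                for (i = 0; i < n; i++) {
                        char buf[2048];
                        if (events[i].events & MTCP_EPOLLIN)
                                mtcp_read(mctx, events[i].data.sockid,
                                          buf, sizeof(buf));
                }
        }
        mtcp_destroy_context(mctx);
}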


mTCP: A Highly Scalable User-level TCP Stack for Multicore Systems  https://github.com/eunyoung14/mtcp



clone of mTCP in ES codeshare http://git.es.f5net.com/index.cgi/codeshare/tree/vli/mtcp
clone addition:
  Changed apachebench configure script to compile with DPDK support
  Ported SSL BIO onto mTCP to enable apachebench to perform SSL tests
  Added an SSL ClientHello stress test based on epwget and ssl-dos (see the minimal ClientHello sketch below)
  Added a command line option in epwget and apachebench to enable a source address pool to congest servers
  Increased the mTCP SYN backlog to raise concurrent connections
  Changed the DPDK .config to compile DPDK as a combined shared library
  Tuned send/receive buffer sizes in epwget.conf to achieve ~7 million concurrent connections on a Dell PowerEdge R710 II, 72G MEM, 16 cores, Intel NIC 82599ES
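
To illustrate the ClientHello stress idea from the list above: the client only has to send the first, tiny handshake record, while the server must allocate handshake state and do the expensive crypto work. A minimal TLS 1.0 ClientHello, hand-packed per RFC 2246 for illustration (this is not the codeshare implementation):

/* Minimal TLS 1.0 ClientHello (sketch). A flooder can write this ~50-byte
 * record after the TCP handshake instead of an HTTP request. */
static const unsigned char client_hello[] = {
        0x16, 0x03, 0x01,        /* record type: handshake, version TLS 1.0 */
        0x00, 0x2d,              /* record length: 45 bytes                 */
        0x01,                    /* handshake type: ClientHello             */
        0x00, 0x00, 0x29,        /* handshake body length: 41 bytes         */
        0x03, 0x01,              /* client_version: TLS 1.0                 */
        /* 32-byte client random (zeros for illustration) */
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0x00,                    /* session_id length: 0                    */
        0x00, 0x02,              /* cipher_suites length: 2 bytes           */
        0x00, 0x2f,              /* TLS_RSA_WITH_AES_128_CBC_SHA            */
        0x01, 0x00               /* 1 compression method: null              */
};

Each connection costs the client only these ~50 bytes, while the server commits per-handshake state and computation, which is what makes the flood effective.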

mTCP installation

https://github.com/eunyoung14/mtcp has detailed installation instructions.
- DPDK VERSION -
----------------
1. Set up Intel's DPDK driver. Please use our version of DPDK.
   We have only changed the lib/igb_uio/ submodule. The best
   method to compile the DPDK package is to use DPDK's tools/setup.sh
   script. Please compile your package based on your own hardware
   configuration. We tested the mTCP stack on an Intel Xeon E5-2690
   (x86_64) machine with Intel 82599 Ethernet adapters (10G). We
   used the following steps in the setup.sh script for our setup:
 
     - Press [10] to compile the package
     - Press [13] to install the driver
     - Press [17] to setup 1024 2MB hugepages
     - Press [19] to register the Ethernet ports
     - Press [31] to quit the tool

  - check that DPDK package creates a new directory of compiled
  libraries. For x86_64 machines, the new subdirectory should be
  *dpdk-2.1.0/x86_64-native-linuxapp-gcc*

  - only those devices will work with DPDK drivers that are listed
  on this page: http://dpdk.org/doc/nics. Please make sure that your
  NIC is compatible before moving on to the next step.

2. Next bring the dpdk-registered interfaces up. Please use the
   setup_iface_single_process.sh script file present in dpdk-2.1.0/tools/
   directory for this purpose. Please change lines 49-51 to change the IP
   address. Under default settings, run the script as:
        # ./setup_iface_single_process.sh 3

   This sets the IP address of your interfaces as 10.0.x.3.

3. Create soft links for include/ and lib/ directories inside
   empty dpdk/ directory:
        # cd dpdk/
     # ln -s <path_to_dpdk-2.1.0>/x86_64-native-linuxapp-gcc/lib lib
     # ln -s <path_to_dpdk-2.1.0>/x86_64-native-linuxapp-gcc/include include

4. Setup mtcp library:
     # ./configure --with-dpdk-lib=$PWD/dpdk
       ## And not dpdk-2.1.0!
       ## e.g. ./configure --with-dpdk-lib=`echo $PWD`/dpdk
     # cd mtcp/src
        # make
  - check libmtcp.a in mtcp/lib
  - check header files in mtcp/include

5. make in util/:
        # make

6. make in apps/example:
        # make
  - check example binary files

7. Check the configurations
  - epserver.conf for server-side configuration
  - epwget.conf for client-side configuration
  - you may write your own configuration file for your application
  - please see README.config for more details
    -- for the latest version, dynamic ARP learning is *DISABLED*

8. Run the applications!

mTCP App configuration

############### mtcp configuration file ###############

# The underlying I/O module you want to use. Please
# enable only one out of the two.
io = dpdk
num_cores = 8
num_ip = 64
# Number of memory channels per processor socket (dpdk-only)
num_mem_ch = 4
#------ DPDK ports -------#
#port = dpdk0 dpdk1
port = dpdk0
#port = dpdk0:0
#port = dpdk0:1

# Enable multi-process support (under development)
#multiprocess = 0 master
#multiprocess = 1

# Receive buffer size of sockets
rcvbuf = 512
# Send buffer size of sockets
sndbuf = 512
# Maximum concurrency per core
max_concurrency = 1000000
# Maximum number of socket buffers per core
# Set this to small value if there are many idle connections
max_num_buffers = 1000000
# TCP timeout seconds
# (tcp_timeout = -1 can disable the timeout check)
tcp_timeout = 30
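
A note on the numbers above: with num_ip = 64 against a single destination IP:port, the generator has roughly 64 x ~64K ephemeral ports ≈ 4M usable source tuples, so pushing toward 10M concurrent connections needs a larger source address pool (the command line option added to epwget/apachebench above). Worst case socket buffer memory is about num_cores x max_num_buffers x (rcvbuf + sndbuf) = 8 x 1,000,000 x 1 KB ≈ 8 GB with these settings.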

mTCP App call path (epwget):


epwget.c
main()
773         ret = mtcp_init("/etc/mtcp/config/epwget.conf");  //initialize mTCP and i/o modules

799                 if (pthread_create(&app_thread[i],
800                                         NULL, RunWgetMain, (void *)&cores[i])) { //app main thread

}
523 void *
524 RunWgetMain(void *arg)
525 {
526         thread_context_t ctx;
  540         mtcp_core_affinitize(core);
541
542         ctx = CreateContext(core); //create app context with mTCP
675 }

mtcp/src/core.c

1099 mctx_t
1100 mtcp_create_context(int cpu)
1101 {
1141         /* Wake up mTCP threads (wake up I/O threads) */
1142         if (current_iomodule_func == &dpdk_module_func) {
1143 #ifndef DISABLE_DPDK
1144                 int master;
1145                 master = rte_get_master_lcore();
1146                 if (master == cpu) {
1147                         lcore_config[master].ret = 0;
1148                         lcore_config[master].state = FINISHED;
1149                         if (pthread_create(&g_thread[cpu],
1150                                            NULL, MTCPRunThread, (void *)mctx) != 0) {
1151                                 TRACE_ERROR("pthread_create of mtcp thread failed!\n");
1152                                 return NULL;
1153                         }
1154                 } else
1155                         rte_eal_remote_launch(MTCPDPDKRunThread, mctx, cpu);
1156 #endif /* !DISABLE_DPDK */

1179         return mctx;
1180 }

1363 int
1364 mtcp_init(char *config_file)
1365 {
1391         ret = LoadConfiguration(config_file); //parse config and initialize DPDK environment
1399         ap = CreateAddressPool(CONFIG.eths[0].ip_addr, 1);
1428         current_iomodule_func->load_module(); //load dpdk i/o module
1429
1430         return 0;
1431 }

 mtcp/src/config.c
555 int
556 LoadConfiguration(char *fname)
557 {
604                 if (ParseConfiguration(p) < 0)
611 }

453 static int
454 ParseConfiguration(char *line)
455 {
534         } else if (strcmp(p, "port") == 0) {
535                 if(strncmp(q, ALL_STRING, sizeof(ALL_STRING)) == 0) {
536                         SetInterfaceInfo(q); //DPDK rte_eal_init
537                 } else {
538                         SetInterfaceInfo(line + strlen(p) + 1);
539                 }
540         } else if (strcmp(p, "io") == 0) {
541                 AssignIOModule(q); //Assign IO modules like psio/dpdk/netmap

553 }

mtcp/src/io_module.c
 66 int
 67 SetInterfaceInfo(char* dev_name_list)
 68 {

151         } else if (current_iomodule_func == &dpdk_module_func) {
152 #ifndef DISABLE_DPDK

173                 sprintf(external_driver, "%s", "/usr/src/mtcp/dpdk/lib/librte_pmd_e1000.so"); //load the specific NIC PMD driver if DPDK is compiled as a combined shared lib
174
175                 /* initialize the rte env first, what a waste of implementation effort!  */
176                 char *argv[] = {"",
177                                 "-c",
178                                 cpumaskbuf,
179                                 "-n",
180                                 mem_channels,
181                                 "-d",
182                                 external_driver,
183                                 "--proc-type=auto",
184                                 ""
185                 };
186                 const int argc = 8;

188                 /*
189                  * re-set getopt extern variable optind.
190                  * this issue was a bitch to debug
191                  * rte_eal_init() internally uses getopt() syscall
192                  * mtcp applications that also use an `external' getopt
193                  * will cause a violent crash if optind is not reset to zero
194                  * prior to calling the func below...
195                  * see man getopt(3) for more details
196                  */
197                 optind = 0;
198
199                 /* initialize the dpdk eal env */
200                 ret = rte_eal_init(argc, argv); //initialize DPDK environment

286 #endif /* !DISABLE_DPDK */
287         }
288         return 0;
289 }

mtcp/src/core.c
1012 static void *
1013 MTCPRunThread(void *arg)
1014 {
1022         mtcp_core_affinitize(cpu);
1034         mtcp = ctx->mtcp_manager = InitializeMTCPManager(ctx);

1040         /* assign mtcp context's underlying I/O module */
1041         mtcp->iom = current_iomodule_func;

1043         /* I/O initializing */
1044         mtcp->iom->init_handle(ctx);

1085         /* start the main loop */
1086         RunMainLoop(ctx); //main packet receiving/sending loop

1090         return 0;
1091 }

 720 static void
 721 RunMainLoop(struct mtcp_thread_context *ctx)
 722 {

735         while ((!ctx->done || mtcp->flow_cnt) && !ctx->exit) {

 744                 for (rx_inf = 0; rx_inf < CONFIG.eths_num; rx_inf++) {
 745
 746                         recv_cnt = mtcp->iom->recv_pkts(ctx, rx_inf); //receive packets
 747                         STAT_COUNT(mtcp->runstat.rounds_rx_try);
 748
 749                         for (i = 0; i < recv_cnt; i++) {
 750                                 uint16_t len;
 751                                 uint8_t *pktbuf;
 752                                 pktbuf = mtcp->iom->get_rptr(mtcp->ctx, rx_inf, i, &len); //get packet buffer read pointer
 753                                 ProcessPacket(mtcp, rx_inf, ts, pktbuf, len); //process the received packet
 754                         }
 755                 }

 791                 if (mtcp->flow_cnt > 0) {
 792                         /* handle stream queues  */
 793                         HandleApplicationCalls(mtcp, ts);
 794                 }
 795
 796                 WritePacketsToChunks(mtcp, ts);
 797
 798                 /* send packets from write buffer */
 799                 /* send until tx is available */
 800                 for (tx_inf = 0; tx_inf < CONFIG.eths_num; tx_inf++) {
 801                         mtcp->iom->send_pkts(ctx, tx_inf); //send packets
 802                 }

830 }

MoonGen: fully scriptable high-speed packet generator built on DPDK and LuaJIT.

https://github.com/emmericp/MoonGen

Design

[Figure: MoonGen design - a userscript master spawns userscript slave threads]

clone of MoonGen in ES codeshare http://git.es.f5net.com/index.cgi/codeshare/tree/vli/MoonGen

  improved TCP SYN flooding with random src IP and src port
  added a DNS flooding script to test Victoria2 DNS DDoS hardware protection
  added ICMP echo flooding

MoonGen Installation:

1.  Install the dependencies (see below)
2.  git submodule update --init
3.  ./build.sh
4.  ./setup-hugetlbfs.sh
5.  Run MoonGen from the build directory
How to Run MoonGen script:
command syntax: build/MoonGen examples/<script>.lua <tx port> <base src ip> <# of src ip> <rate>
#build/MoonGen examples/dns-flood-victoria.lua 0 10.0.0.1 16000000 10000

Hardware SPEC:   

Dell Poweredge R710  72G MEM, 16 core, Intel NIC 82599                                  
DUT: Victoria B2250 Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz 20 cores 64G MEM

Dell PowerEdge R210 II (used $300) 8 core, 32G MEM Intel 1G NIC I350            
DUT: BIGIP KVM VE CPU: QEMU Virtual CPU version (cpu64-rhel6) 4 cores 16G MEM

Load Test Example


1, DNS flooding without HW acceleration

#build/MoonGen examples/dns-flood-victoria.lua 0 10.0.0.1 16000000 10000
[Device: id=0] Sent 13710082176 packets, current rate 4.51 Mpps, 3246.01 MBit/s, 3967.35 MBit/s
wire rate.
[Device: id=0] Sent 13714591360 packets, current rate 4.51 Mpps, 3246.53 MBit/s, 3967.98 MBit/s
wire rate.
[Device: id=0] Sent 13719099520 packets, current rate 4.51 Mpps, 3245.79 MBit/s, 3967.08 MBit/s
wire rate.
 
top - 12:07:02 up 1 day, 20:38,  1 user,  load average: 5.22, 7.46, 9.27
 Tasks: 777 total,  19 running, 758 sleeping,   0 stopped,   0 zombie
 Cpu(s): 50.6%us, 40.2%sy,  0.0%ni,  9.2%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
 Mem:  66080376k total, 65722732k used,   357644k free,   108700k buffers
 Swap:  5242872k total,        0k used,  5242872k free,  4048612k cached
   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 17859 root       1 -19 57.9g 145m 123m R 100.8  0.2  56:44.33 tmm.0 -T 10 --tmid
 17741 root       1 -19 57.9g 145m 123m R 100.5  0.2  58:08.51 tmm.0 -T 10 --tmid
 17853 root       1 -19 57.9g 145m 123m R 100.5  0.2  56:46.73 tmm.0 -T 10 --tmid
 17854 root       1 -19 57.9g 145m 123m R 100.5  0.2  56:46.97 tmm.0 -T 10 --tmid
 17855 root       1 -19 57.9g 145m 123m R 100.5  0.2  56:46.06 tmm.0 -T 10 --tmid
 17856 root       1 -19 57.9g 145m 123m R 100.5  0.2  56:37.67 tmm.0 -T 10 --tmid
 17857 root       1 -19 57.9g 145m 123m R 100.5  0.2  56:45.54 tmm.0 -T 10 --tmid
 17858 root       1 -19 57.9g 145m 123m R 100.5  0.2  56:45.70 tmm.0 -T 10 --tmid
 17860 root       1 -19 57.9g 145m 123m R 100.5  0.2  56:45.65 tmm.0 -T 10 --tmid
 17852 root       1 -19 57.9g 145m 123m R 100.2  0.2  56:50.91 tmm.0 -T 10 --tmid
 20110 root      RT   0     0    0    0 S 80.6  0.0   0:27.55 [enforcer/11]
 20111 root      RT   0     0    0    0 R 80.6  0.0   0:27.56 [enforcer/15]
 20116 root      RT   0     0    0    0 R 80.6  0.0   0:27.50 [enforcer/13]
 20108 root      RT   0     0    0    0 R 80.2  0.0   0:27.55 [enforcer/19]
 20109 root      RT   0     0    0    0 R 80.2  0.0   0:27.57 [enforcer/17]
 20112 root      RT   0     0    0    0 S 80.2  0.0   0:27.55 [enforcer/5]
 20113 root      RT   0     0    0    0 R 80.2  0.0   0:27.52 [enforcer/1]

------------------------------------------------------------------
Ltm::Virtual Server: vs_dns_10g
------------------------------------------------------------------
Status
  Availability     : unknown
  State            : enabled
  Reason           : The children pool member(s) either don't have service checking enabled, or service check results are not available yet
  CMP              : enabled
  CMP Mode         : all-cpus
  Destination      : 10.3.3.249:53
  PVA Acceleration : none
Traffic                             ClientSide  Ephemeral  General
  Bits In                                11.5G          0        -
  Bits Out                               16.7G          0        -
  Packets In                             20.0M          0        -
  Packets Out                            20.0M          0        -
  Current Connections                    27.1M          0        -
  Maximum Connections                    27.1M          0        -
  Total Connections                      28.8M          0        -

2, DNS flooding with HW acceleration

#build/MoonGen examples/dns-flood-victoria.lua 0 10.0.0.1 16000000 10000
 [Device: id=0] Sent 13710082176 packets, current rate 4.51 Mpps, 3246.01 MBit/s, 3967.35 MBit/s
wire rate.
 [Device: id=0] Sent 13714591360 packets, current rate 4.51 Mpps, 3246.53 MBit/s, 3967.98 MBit/s
wire rate.
 [Device: id=0] Sent 13719099520 packets, current rate 4.51 Mpps, 3245.79 MBit/s, 3967.08 MBit/s
wire rate.
https://docs.f5net.com/display/PDDESIGN/DNS+DDoS+HW+Acceleration+-+Validation
sys fpga firmware-config {
    type l7-intelligent-fpga
}
ltm profile dns /Common/dns_fpga {
    app-service none
    enable-hardware-query-validation yes
    enable-hardware-response-cache yes
}
ltm virtual /Common/vs_dns_10g {
    destination /Common/10.3.3.249:53
    ip-protocol udp
    mask 255.255.255.255
    profiles {
        /Common/dns_fpga { }
        /Common/udp_immediate { }
    }
    rules {
        /Common/dns_responder
    }
    source 0.0.0.0/0
    translate-address enabled
    translate-port enabled
}

top - 14:51:05 up  3:30,  1 user,  load average: 0.12, 0.05, 0.01
Tasks: 771 total,   1 running, 770 sleeping,   0 stopped,   0 zombie
Cpu(s):  4.2%us,  0.5%sy,  0.0%ni, 95.2%id,  0.0%wa,  0.1%hi,  0.0%si,  0.0%st
Mem:  66080272k total, 63094488k used,  2985784k free,    61152k buffers
Swap:  5242876k total,        0k used,  5242876k free,  1352852k cached
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 6428 root       1 -19 58.4g 151m 122m S 12.6  0.2   3:19.62 tmm.0
 6435 root       1 -19 58.4g 151m 122m S 11.3  0.2   2:42.67 tmm.4
 6432 root       1 -19 58.4g 151m 122m S 10.9  0.2   2:44.57 tmm.1
 6433 root       1 -19 58.4g 151m 122m S 10.9  0.2   2:42.78 tmm.2
 6434 root       1 -19 58.4g 151m 122m S 10.9  0.2   2:40.69 tmm.3
 6436 root       1 -19 58.4g 151m 122m S 10.9  0.2   2:41.53 tmm.5
 6437 root       1 -19 58.4g 151m 122m S 10.9  0.2   2:42.68 tmm.6
 6438 root       1 -19 58.4g 151m 122m S 10.9  0.2   2:40.92 tmm.7
 6439 root       1 -19 58.4g 151m 122m S 10.9  0.2   2:41.87 tmm.8
 6440 root       1 -19 58.4g 151m 122m S 10.6  0.2   2:41.49 tmm.9
28351 root     -91   0 97592  81m  31m S  2.0  0.1   7:00.29 bcmINTR
28589 root      20   0 97592  81m  31m S  2.0  0.1   5:43.36 bcmCNTR.0

profile_dns_stat
name              vs_name             queries  drops  hw_malformed  hw_inspected  hw_cache_lookups  hw_cache_responses
/Common/dns_fpga  /Common/vs_dns_10g  6981     0      0             29727032      297               26590  29720435

3, SYN flooding without hardware acceleration


#build/MoonGen examples/l3-tcp-syn-flood.lua 0 10.0.0.1 16000000 10000

 [Device: id=0] Sent 7061632 packets, current rate 7.06 Mpps, 3615.47 MBit/s, 4745.31 MBit/s wire
rate.
 ltm profile fastl4 /Common/fl4_fpga {
     app-service none
     defaults-from /Common/fastL4
     hardware-syn-cookie disabled
     pva-offload-dynamic disabled
     software-syn-cookie enabled
 }
top - 10:53:51 up 42 min,  1 user,  load average: 0.24, 0.23, 0.65
Tasks: 769 total,  10 running, 759 sleeping,   0 stopped,   0 zombie
Cpu(s): 35.4%us,  1.7%sy,  0.1%ni, 62.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  66080376k total, 62740700k used,  3339676k free,    45784k buffers
Swap:  5242872k total,        0k used,  5242872k free,  1199508k cached
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
19290 root       1 -19 57.9g 145m 123m R 71.5  0.2   0:14.38 tmm.0 -T 10 --tmid
 19291 root       1 -19 57.9g 145m 123m R 70.2  0.2   0:14.53 tmm.0 -T 10 --tmid
 19293 root       1 -19 57.9g 145m 123m S 70.2  0.2   0:14.37 tmm.0 -T 10 --tmid
 19267 root       1 -19 57.9g 145m 123m R 69.8  0.2   0:30.32 tmm.0 -T 10 --tmid
 19292 root       1 -19 57.9g 145m 123m R 69.8  0.2   0:14.38 tmm.0 -T 10 --tmid
 19298 root       1 -19 57.9g 145m 123m R 69.8  0.2   0:14.72 tmm.0 -T 10 --tmid
 19295 root       1 -19 57.9g 145m 123m R 69.5  0.2   0:14.73 tmm.0 -T 10 --tmid
 19296 root       1 -19 57.9g 145m 123m R 69.5  0.2   0:14.03 tmm.0 -T 10 --tmid
 19297 root       1 -19 57.9g 145m 123m R 69.2  0.2   0:14.14 tmm.0 -T 10 --tmid
 19294 root       1 -19 57.9g 145m 123m R 65.2  0.2   0:13.31 tmm.0 -T 10 --tmid

4, SYN flooding with HW acceleration

 #build/MoonGen examples/l3-tcp-syn-flood.lua 0 10.0.0.1 16000000 10000

  [Device: id=0] Sent 7061632 packets, current rate 7.06 Mpps, 3615.47 MBit/s, 4745.31 MBit/s wire rate.

ltm profile fastl4 /Common/fl4_fpga {
    app-service none
    defaults-from /Common/fastL4
    hardware-syn-cookie enabled
    pva-offload-dynamic enabled
    software-syn-cookie enabled
}

 top - 10:50:08 up 38 min,  1 user,  load average: 0.06, 0.36, 0.81
 Tasks: 769 total,   1 running, 768 sleeping,   0 stopped,   0 zombie
 Cpu(s):  0.8%us,  0.2%sy,  0.0%ni, 98.5%id,  0.5%wa,  0.0%hi,  0.0%si,  0.0%st
 Mem:  66080376k total, 62740552k used,  3339824k free,    45324k buffers
 Swap:  5242872k total,        0k used,  5242872k free,  1199492k cached
   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 19267 root       1 -19 57.9g 145m 123m S  3.6  0.2   0:11.87 tmm
 19293 root       1 -19 57.9g 145m 123m S  1.3  0.2   0:01.72 tmm
 19296 root       1 -19 57.9g 145m 123m S  1.3  0.2   0:01.36 tmm
 19297 root       1 -19 57.9g 145m 123m S  1.3  0.2   0:01.37 tmm
 19290 root       1 -19 57.9g 145m 123m S  1.0  0.2   0:01.38 tmm
 19292 root       1 -19 57.9g 145m 123m S  1.0  0.2   0:01.73 tmm
 19294 root       1 -19 57.9g 145m 123m S  1.0  0.2   0:01.35 tmm
 19295 root       1 -19 57.9g 145m 123m S  1.0  0.2   0:02.12 tmm
 19298 root       1 -19 57.9g 145m 123m S  1.0  0.2   0:02.11 tmm
 19291 root       1 -19 57.9g 145m 123m S  0.7  0.2   0:01.73 tmm


5, 10M concurrent HTTP connections


#epwget 10.3.3.249/ 160000000 -N 16 -c 10000000

 [CPU 0] dpdk0 flows: 625000, RX:   96382(pps) (err:     0),  0.10(Gbps), TX:  413888(pps),  0.64(Gbps)
 [CPU 1] dpdk0 flows: 625000, RX:  101025(pps) (err:     0),  0.10(Gbps), TX:  398592(pps),  0.61(Gbps)
 [CPU 2] dpdk0 flows: 625000, RX:  106882(pps) (err:     0),  0.11(Gbps), TX:  418432(pps),  0.64(Gbps)
 [CPU 3] dpdk0 flows: 625000, RX:  101497(pps) (err:     0),  0.10(Gbps), TX:  405952(pps),  0.62(Gbps)
 [CPU 4] dpdk0 flows: 625000, RX:  107375(pps) (err:     0),  0.11(Gbps), TX:  427008(pps),  0.66(Gbps)
 [CPU 5] dpdk0 flows: 625000, RX:   96012(pps) (err:     0),  0.10(Gbps), TX:  404352(pps),  0.62(Gbps)
 [CPU 6] dpdk0 flows: 625000, RX:  100834(pps) (err:     0),  0.10(Gbps), TX:  405504(pps),  0.62(Gbps)
 [CPU 7] dpdk0 flows: 625000, RX:  102572(pps) (err:     0),  0.11(Gbps), TX:  401024(pps),  0.62(Gbps)
 [CPU 8] dpdk0 flows: 635366, RX:  111319(pps) (err:     0),  0.12(Gbps), TX:  410880(pps),  0.63(Gbps)
 [CPU 9] dpdk0 flows: 625000, RX:  102179(pps) (err:     0),  0.11(Gbps), TX:  391104(pps),  0.60(Gbps)
 [CPU10] dpdk0 flows: 625000, RX:   98014(pps) (err:     0),  0.10(Gbps), TX:  408320(pps),  0.63(Gbps)
 [CPU11] dpdk0 flows: 625000, RX:  102712(pps) (err:     0),  0.11(Gbps), TX:  398976(pps),  0.61(Gbps)
 [CPU12] dpdk0 flows: 625000, RX:  105891(pps) (err:     0),  0.11(Gbps), TX:  415616(pps),  0.64(Gbps)
 [CPU13] dpdk0 flows: 625000, RX:   97728(pps) (err:     0),  0.10(Gbps), TX:  390592(pps),  0.60(Gbps)
 [CPU14] dpdk0 flows: 625001, RX:  100570(pps) (err:     0),  0.10(Gbps), TX:  407872(pps),  0.63(Gbps)
 [CPU15] dpdk0 flows: 625000, RX:  103412(pps) (err:     0),  0.11(Gbps), TX:  391296(pps),  0.60(Gbps)
 [ ALL ] dpdk0 flows: 10010366, RX: 1634404(pps) (err:     0),  1.69(Gbps), TX: 6489408(pps),  9.96(Gbps)

top - 15:25:26 up 23:57,  1 user,  load average: 0.16, 0.33, 0.43
 Tasks: 778 total,  17 running, 761 sleeping,   0 stopped,   0 zombie
 Cpu(s): 45.1%us, 30.6%sy,  0.0%ni, 24.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
 Mem:  66080376k total, 62855960k used,  3224416k free,   136316k buffers
 Swap:  5242872k total,        0k used,  5242872k free,  1182216k cached
   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 17283 root       1 -19 57.9g 145m 123m R 94.1  0.2   1322:36 tmm.0 -T 10 --tmid
 17286 root       1 -19 57.9g 145m 123m R 94.1  0.2   1322:37 tmm.0 -T 10 --tmid
 17281 root       1 -19 57.9g 145m 123m R 93.8  0.2   1322:39 tmm.0 -T 10 --tmid
 17284 root       1 -19 57.9g 145m 123m R 93.8  0.2   1322:37 tmm.0 -T 10 --tmid
 17282 root       1 -19 57.9g 145m 123m R 93.4  0.2   1322:37 tmm.0 -T 10 --tmid
 17287 root       1 -19 57.9g 145m 123m R 93.4  0.2   1322:37 tmm.0 -T 10 --tmid
 17289 root       1 -19 57.9g 145m 123m R 93.4  0.2   1322:36 tmm.0 -T 10 --tmid
 17043 root       1 -19 57.9g 145m 123m R 92.8  0.2   1325:48 tmm.0 -T 10 --tmid
 17285 root       1 -19 57.9g 145m 123m R 92.1  0.2   1322:37 tmm.0 -T 10 --tmid
 31507 root      RT   0     0    0    0 R 32.0  0.0   0:00.97 [enforcer/19]
 31508 root      RT   0     0    0    0 R 32.0  0.0   0:00.97 [enforcer/13]
 31509 root      RT   0     0    0    0 R 32.0  0.0   0:00.97 [enforcer/15]
 31510 root      RT   0     0    0    0 S 31.7  0.0   0:00.96 [enforcer/9]
 31511 root      RT   0     0    0    0 S 31.7  0.0   0:00.96 [enforcer/7]
 31512 root      RT   0     0    0    0 R 31.4  0.0   0:00.95 [enforcer/3]
 31515 root      RT   0     0    0    0 S 16.8  0.0   0:00.51 [enforcer/1]

[root@localhost:/S1-green-P:Active:Standalone] config # tail -f /var/log/ltm
 Nov  4 15:25:29 slot1/bigip1 warning tmm7[17043]: 011e0003:4: Aggressive mode sweeper:
/Common/default-eviction-policy (70000000002d6) (global memory) 9864 Connections killed
 Nov  4 15:25:29 slot1/bigip1 warning tmm7[17043]: 011e0002:4:
sweeper_policy_bind_deactivation_update: Aggressive mode /Common/default-eviction-policy
deactivated (70000000002d6) (global memory). (12793204/15051776 pages)
 Nov  4 15:25:29 slot1/bigip1 warning tmm6[17043]: 011e0003:4: Aggressive mode sweeper:
/Common/default-eviction-policy (60000000002d2) (global memory) 10122 Connections killed
 Nov  4 15:25:29 slot1/bigip1 warning tmm6[17043]: 011e0002:4:
sweeper_policy_bind_deactivation_update: Aggressive mode /Common/default-eviction-policy
deactivated (60000000002d2) (global memory). (12792703/15051776 pages)
 Nov  4 15:25:29 slot1/bigip1 warning tmm3[17043]: 011e0003:4: Aggressive mode sweeper:
/Common/default-eviction-policy (30000000002de) (global memory) 10877 Connections killed
 Nov  4 15:25:29 slot1/bigip1 warning tmm3[17043]: 011e0002:4:
sweeper_policy_bind_deactivation_update: Aggressive mode /Common/default-eviction-policy
deactivated (30000000002de) (global memory). (12787088/15051776 pages)
 Nov  4 15:25:29 slot1/bigip1 warning tmm4[17043]: 011e0003:4: Aggressive mode sweeper:
 /Common/default-eviction-policy (40000000002c2) (global memory) 10306 Connections killed
 Nov  4 15:25:29 slot1/bigip1 warning tmm4[17043]: 011e0002:4:
sweeper_policy_bind_deactivation_update: Aggressive mode /Common/default-eviction-policy
deactivated (40000000002c2) (global memory). (12787088/15051776 pages)


Every 1.0s: tmsh show ltm virtual vs_http_10g           Wed Nov  4 15:27:15 2015

Status
  Availability     : unknown
  State            : enabled
  Reason           : The children pool member(s) either don't have service checking enabled, or service check results are not available yet
  CMP              : enabled
  CMP Mode         : all-cpus
  Destination      : 10.3.3.249:80
  PVA Acceleration : none
Traffic                             ClientSide  Ephemeral  General
  Bits In                               329.8G          0        -
  Bits Out                               90.4G          0        -
  Packets In                            287.6M          0        -
  Packets Out                           150.2M          0        -
  Current Connections                     6.1M          0        -
  Maximum Connections                     6.7M          0        -
  Total Connections                      39.8M          0        -

mTCP perf top output: ~70% of cycles in user space

 Samples: 1M of event 'cycles', Event count (approx.): 441906428558
   8.25%  epwget              [.] SendTCPPacket
   7.93%  [kernel]            [k] _raw_spin_lock
   7.16%  epwget              [.] GetRSSCPUCore
   7.15%  epwget              [.] IPOutput
   4.26%  libc-2.19.so        [.] memset
   4.10%  epwget              [.] ixgbe_xmit_pkts
   3.62%  [kernel]            [k] clear_page_c
   3.26%  epwget              [.] WriteTCPControlList
   3.24%  [vdso]              [.] 0x0000000000000cf9
   2.95%  epwget              [.] AddtoControlList
   2.70%  epwget              [.] MTCPRunThread
   2.66%  epwget              [.] HandleRTO
   2.51%  epwget              [.] CheckRtmTimeout
   2.10%  libpthread-2.19.so  [.] pthread_mutex_unlock
   1.83%  epwget              [.] dpdk_send_pkts
   1.68%  epwget              [.] HTInsert
   1.65%  epwget              [.] CreateTCPStream
   1.42%  epwget              [.] MPAllocateChunk
   1.29%  epwget              [.] TCPCalcChecksum
   1.24%  epwget              [.] dpdk_recv_pkts
   1.20%  epwget              [.] mtcp_getsockopt
   1.12%  epwget              [.] rx_recv_pkts

6, SSL DDOS test using mTCP



 top - 09:10:21 up 22:58,  1 user,  load average: 10.45, 4.43, 1.67
 Tasks: 782 total,  19 running, 763 sleeping,   0 stopped,   0 zombie
 Cpu(s): 50.6%us, 40.1%sy,  0.1%ni,  9.1%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
 Mem:  66080376k total, 62923192k used,  3157184k free,   138624k buffers
 Swap:  5242872k total,        0k used,  5242872k free,  1259132k cached
   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 21480 root       1 -19 57.9g 145m 123m R 100.0  0.2  81:24.41 tmm
 21503 root       1 -19 57.9g 145m 123m R 100.0  0.2  48:05.30 tmm
 21504 root       1 -19 57.9g 145m 123m R 100.0  0.2  47:23.12 tmm
 21505 root       1 -19 57.9g 145m 123m R 100.0  0.2  47:06.70 tmm
 21506 root       1 -19 57.9g 145m 123m R 100.0  0.2  46:55.21 tmm
 21507 root       1 -19 57.9g 145m 123m R 100.0  0.2  46:12.27 tmm
 21508 root       1 -19 57.9g 145m 123m R 100.0  0.2  46:56.27 tmm
 21509 root       1 -19 57.9g 145m 123m R 100.0  0.2  47:01.32 tmm
 21510 root       1 -19 57.9g 145m 123m R 100.0  0.2  46:48.54 tmm
 21511 root       1 -19 57.9g 145m 123m R 100.0  0.2  47:06.64 tmm
   1670 root      RT   0     0    0    0 R 80.2  0.0   2:07.03 enforcer/15
  1673 root      RT   0     0    0    0 R 80.2  0.0   2:07.03 enforcer/3
  1677 root      RT   0     0    0    0 R 80.2  0.0   2:07.02 enforcer/13
  1671 root      RT   0     0    0    0 S 79.9  0.0   2:07.04 enforcer/19
  1672 root      RT   0     0    0    0 R 79.9  0.0   2:07.02 enforcer/5


7, ApacheBench (ab) ported to mTCP, HTTPS test to Victoria blade
 #ab -n 16000 -N 16 -c 8000  -L 64 https://10.3.3.249/

 ---------------------------------------------------------------------------------
 Loading mtcp configuration from : /etc/mtcp/config/mtcp.conf
 Loading interface setting
 EAL: Detected lcore 0 as core 0 on socket 0
 Checking link status.................................................done
 Port 0 Link Up - speed 10000 Mbps - full-duplex
 Benchmarking 10.3.3.249 (be patient)
 CPU6 connecting to port 443
 CPU7 connecting to port 443
 CPU8 connecting to port 443
 CPU9 connecting to port 443
 CPU10 connecting to port 443
 CPU5 connecting to port 443
 CPU11 connecting to port 443
 CPU12 connecting to port 443
 CPU13 connecting to port 443
 CPU14 connecting to port 443
 CPU15 connecting to port 443
 CPU4 connecting to port 443
 CPU2 connecting to port 443
 CPU3 connecting to port 443
 CPU1 connecting to port 443
 CPU0 connecting to port 443
 .......................................
 [ ALL ] dpdk0 flows:   5016, RX:    9651(pps) (err:     0),  0.04(Gbps), TX:   14784(pps),  0.02(Gbps)
 ------------------------------------------------------------------
 Ltm::Virtual Server: vs_https
 ------------------------------------------------------------------
   CMP              : enabled
   CMP Mode         : all-cpus
   Destination      : 10.3.3.249:443
   PVA Acceleration : none
 Traffic                             ClientSide  Ephemeral  General
   Bits In                                49.2G          0        -
   Bits Out                               71.0G          0        -
   Packets In                             47.1M          0        -
   Packets Out                            30.4M          0        -
   Current Connections                     6.3K          0        -
   Maximum Connections                   146.0K          0        -
   Total Connections                       4.3M          0        -
   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 12864 root       1 -19 57.9g 145m 123m S  5.0  0.2  53:09.44 tmm
 13087 root       1 -19 57.9g 145m 123m S  3.0  0.2  14:01.00 tmm
 13088 root       1 -19 57.9g 145m 123m S  3.0  0.2  13:32.00 tmm
 13091 root       1 -19 57.9g 145m 123m S  3.0  0.2  13:25.59 tmm
 13093 root       1 -19 57.9g 145m 123m S  3.0  0.2  13:34.57 tmm
 13094 root       1 -19 57.9g 145m 123m S  3.0  0.2  13:46.66 tmm
 13086 root       1 -19 57.9g 145m 123m S  2.6  0.2  14:09.38 tmm
 13089 root       1 -19 57.9g 145m 123m S  2.6  0.2  13:42.05 tmm
 13090 root       1 -19 57.9g 145m 123m S  2.6  0.2  13:47.88 tmm
 13092 root       1 -19 57.9g 145m 123m S  2.3  0.2  13:40.11 tmm

8, ApacheBench (ab) ported to mTCP, HTTPS test to BIGIP VE (KVM)
 #ab -n 1000000 -c 8000 -N 8 -L 64  https://10.9.3.6/

Checking link status......................................done
Port 0 Link Up - speed 1000 Mbps - full-duplex
Benchmarking 10.9.3.6 (be patient)
CPU6 connecting to port 443
CPU7 connecting to port 443
CPU5 connecting to port 443
CPU4 connecting to port 443
CPU3 connecting to port 443
CPU2 connecting to port 443
CPU1 connecting to port 443
CPU0 connecting to port 443
[ ALL ] dpdk0 flows:   8000, RX:   13443(pps) (err:     0),  0.01(Gbps), TX:   13953(pps),  0.01(Gbps)

top - 13:12:22 up 4 min,  1 user,  load average: 3.34, 2.01, 0.82
Tasks: 395 total,   4 running, 391 sleeping,   0 stopped,   0 zombie
Cpu(s): 13.2%us,  6.5%sy,  0.0%ni, 64.5%id, 15.6%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:  14403128k total, 14060912k used,   342216k free,    22400k buffers
Swap:  1048568k total,        0k used,  1048568k free,   863780k cached
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  P COMMAND
13954 root      RT   0 12.0g 124m 104m R 92.4  0.9   0:27.17 0 tmm.0 -T 4 --tmid 0 --npus 4 --platform
Z100 -m -s 12088
 14125 root      RT   0 12.0g 124m 104m R 92.0  0.9   0:13.28 1 tmm.0 -T 4 --tmid 0 --npus 4 --platform
Z100 -m -s 12088
 14126 root      RT   0 12.0g 124m 104m S 92.0  0.9   0:12.36 2 tmm.0 -T 4 --tmid 0 --npus 4 --platform
Z100 -m -s 12088
 14127 root      RT   0 12.0g 124m 104m S 92.0  0.9   0:13.15 3 tmm.0 -T 4 --tmid 0 --npus 4 --platform
Z100 -m -s 12088
 ------------------------------------------------------------------
 Ltm::Virtual Server: vs_https
 ------------------------------------------------------------------
 Status
 Traffic                             ClientSide  Ephemeral  General
   Bits In                               428.8M          0        -
   Bits Out                              786.2M          0        -
   Packets In                            505.9K          0        -
   Packets Out                           423.1K          0        -
   Current Connections                     9.4K          0        -
   Maximum Connections                    12.9K          0        -


9, Generate TCP/HTTP connections with random src MAC

http://sourceforge.net/p/curl-loader/mailman/message/33614941/


10, ICMP ping flooding to BIGIP VE
build/MoonGen examples/icmp-flood.lua 0 10.0.0.1 16000000 10000
 top - 12:10:54 up  1:55,  1 user,  load average: 0.24, 0.06, 0.02
 Tasks: 381 total,   2 running, 379 sleeping,   0 stopped,   0 zombie
 Cpu0  : 16.2%us, 11.3%sy,  0.0%ni, 13.1%id,  0.0%wa,  0.3%hi, 59.1%si,  0.0%st
 Cpu1  :  2.1%us,  2.4%sy,  0.0%ni, 94.6%id,  0.0%wa,  0.0%hi,  0.3%si,  0.6%st
 Cpu2  :  3.5%us,  3.2%sy,  0.0%ni, 90.6%id,  0.0%wa,  0.0%hi,  1.8%si,  0.9%st
 Cpu3  :  1.2%us,  1.8%sy,  0.3%ni, 96.0%id,  0.0%wa,  0.0%hi,  0.3%si,  0.3%st
 Mem:  14403128k total, 14267112k used,   136016k free,    22252k buffers
 Swap:  1048568k total,     1224k used,  1047344k free,   571908k cached
   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  P COMMAND
  2889 root      RT   0 12.0g 124m 104m S 154.7  0.9   3:58.44 0 tmm.0 -T 4 --tmid 0 --npus 4 --platform
Z100 -m -s 12088
  3054 root      RT   0 12.0g 124m 104m S  8.8  0.9   2:48.53 3 tmm.0 -T 4 --tmid 0 --npus 4 --platform
Z100 -m -s 12088
  3053 root      RT   0 12.0g 124m 104m S  8.5  0.9   2:12.29 2 tmm.0 -T 4 --tmid 0 --npus 4 --platform
Z100 -m -s 12088
  3052 root      RT   0 12.0g 124m 104m R  7.6  0.9   2:06.63 1 tmm.0 -T 4 --tmid 0 --npus 4 --platform
Z100 -m -s 12088

Technical tips for load generation

1. mTCP threads pre-allocate memory pools for TCP send and receive buffers based on the configured maximum number of buffers. When the load generator has limited memory, it is recommended to reduce the TCP send/receive buffer sizes and the number of buffers in the application configuration file.
2. Also configure the BIGIP DUT to respond with small packets (< 64 bytes), because a large response payload would trigger an mTCP payload merge length error.

For example, setting the TCP receive and send buffers in epwget.conf:
# Receive buffer size of sockets
rcvbuf = 1024
# Send buffer size of sockets
sndbuf = 1024
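
As a rough upper bound, socket buffer memory is num_cores x max_num_buffers x (rcvbuf + sndbuf); with 16 cores, 1,000,000 buffers per core, and the 1 KB + 1 KB settings above, that approaches ~32 GB before per-flow metadata is counted, which is why small buffers matter on a memory-constrained generator.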

mTCP pre-allocates memory pools at startup (mtcp/src/core.c):
static mtcp_manager_t
InitializeMTCPManager(struct mtcp_thread_context* ctx)
{

        mtcp->flow_pool = MPCreate(sizeof(tcp_stream),
                        sizeof(tcp_stream) * CONFIG.max_concurrency, IS_HUGEPAGE);
        if (!mtcp->flow_pool) {
                CTRACE_ERROR("Failed to allocate tcp flow pool.\n");
                return NULL;
        }
        mtcp->rv_pool = MPCreate(sizeof(struct tcp_recv_vars),
                        sizeof(struct tcp_recv_vars) * CONFIG.max_concurrency, IS_HUGEPAGE);
        if (!mtcp->rv_pool) {
                CTRACE_ERROR("Failed to allocate tcp recv variable pool.\n");
                return NULL;
        }
        mtcp->sv_pool = MPCreate(sizeof(struct tcp_send_vars),
                        sizeof(struct tcp_send_vars) * CONFIG.max_concurrency, IS_HUGEPAGE);
        if (!mtcp->sv_pool) {
                CTRACE_ERROR("Failed to allocate tcp send variable pool.\n");
                return NULL;
        }

        mtcp->rbm_snd = SBManagerCreate(CONFIG.sndbuf_size, CONFIG.max_num_buffers);
        if (!mtcp->rbm_snd) {
                CTRACE_ERROR("Failed to create send ring buffer.\n");
                return NULL;
        }
        mtcp->rbm_rcv = RBManagerCreate(CONFIG.rcvbuf_size, CONFIG.max_num_buffers);
        if (!mtcp->rbm_rcv) {
                CTRACE_ERROR("Failed to create recv ring buffer.\n");
                return NULL;
        }


References
http://highscalability.com/blog/2013/5/13/the-secret-to-10-million-concurrent-connections-the-kernel-i.html
http://www.dpdk.org/
http://www.ndsl.kaist.edu/~kyoungsoo/papers/mtcp.pdf
https://github.com/eunyoung14/mtcp
https://github.com/emmericp/MoonGen
http://git.es.f5net.com/index.cgi/codeshare/tree/vli/mtcp  (git log --author="Vincent Li")
http://git.es.f5net.com/index.cgi/codeshare/tree/vli/MoonGen

Thursday, October 8, 2015

2.5 million TCP/HTTP connections with mTCP and DPDK on a Dell PowerEdge R210, 8 cores, 32G RAM, and Intel NIC I350

Background:

The Secret to 10 Million Concurrent Connections -The Kernel is the Problem, Not the Solution


 Problem:

http://www.slideshare.net/garyachy/dpdk-44585840?qid=254b419f-1d44-44f1-99c4-87f13b7d5fe4&v=default&b=&from_search=8

Packet processing in Linux:  NIC RX/TX queues <--------->Ring buffers<---------->Driver<-------->Socket<--------->App


  • System calls
  • Context switching on blocking I/O
  • Data Copying from kernel to user space
  • Interrupt handling in kernel

Expense of sendto:

  • sendto -  system call:  96ns
  • sosend_dgram - lock sock_buff, alloc mbuf, copy in: 137ns
  • udp_output - UDP header setup: 57ns
  • ip_output - route lookup, ip header setup: 198ns
  • ether_output - MAC lookup, MAC header setup: 162ns
  • ixgbe_xmit - device programming: 220ns
Total: 950ns

Solution:
Packet processing with DPDK
NIC RX/TX queues <------->Ring buffers<---------->DPDK<------------->App

  • Processor affinity (separate cores)
  • Huge pages( no swap, TLB)
  • UIO (no copying from kernel)
  • Polling (no interrupts overhead)
  • Lockless synchronization(avoid waiting)
  • Batch packets handling
  • SSE, NUMA awareness
UIO for example:
Kernel space (UIO framework) <------>/dev/uioX<------>userspace epoll/mmap<-------->App


Problem:
Limitations of the Kernel's TCP stack
  • Lack of connection locality
  • Shared file descriptor space
  • Inefficient per-packet processing
  • System call overhead
Solution:
  • Batching in packet I/O, TCP processing, user applications ( reduce system call overhead)
  • Connection locality on multicore systems - handling same connection on same core, avoid cache pollution (solve connection locality)
  • No descriptor sharing between mTCP threads

clone of mTCP 
clone addition:
  • Change apachebench configure script to compile with dpdk support
  • Ported SSL BIO onto mTCP to enable apachebench to perform SSL test
  • Add SSL clienthello stress test based on epwget and ssl-dos
  • Add command line option in epwget and apachebench to enable source address pool to congest servers
  • Increase mTCP SYN BACKLOG to increase concurrent connection
  • Changed DPDK .config to compile DPDK as combined shared library
  • Tuned send/receive buffer size in epwget.conf to achieve 2.5 million concurrent connection on Dell Poweredge R210 II 32G MEM, 8 core, Intel NIC I350

HTTP Load testing example:
root@pktgen:/usr/src/mtcp# LD_LIBRARY_PATH=.:/usr/src/mtcp/dpdk/lib LD_PRELOAD=$* ./apps/example/epwget 10.9.3.6/ 16000000 -N 8 -c 2500000
Application configuration:
URL: /
# of total_flows: 16000000
# of cores: 8
Concurrency: 2500000
---------------------------------------------------------------------------------
Loading mtcp configuration from : epwget.conf
Loading interface setting
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Detected lcore 2 as core 1 on socket 0
EAL: Detected lcore 3 as core 1 on socket 0
EAL: Detected lcore 4 as core 2 on socket 0
EAL: Detected lcore 5 as core 2 on socket 0
EAL: Detected lcore 6 as core 3 on socket 0
EAL: Detected lcore 7 as core 3 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 8 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Setting up memory...
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fea80600000 (size = 0x200000)
EAL: Ask a virtual area of 0x7000000 bytes
EAL: Virtual area found at 0x7fea79400000 (size = 0x7000000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fea79000000 (size = 0x200000)
EAL: Ask a virtual area of 0x38c00000 bytes
EAL: Virtual area found at 0x7fea40200000 (size = 0x38c00000)
EAL: Requesting 512 pages of size 2MB from socket 0
EAL: TSC frequency is ~3199969 KHz
EAL: open shared lib /usr/src/mtcp/dpdk/lib/librte_pmd_e1000.so
EAL: Master lcore 0 is ready (tid=825c7900;cpuset=[0])
EAL: lcore 4 is ready (tid=3dff5700;cpuset=[4])
EAL: lcore 5 is ready (tid=3d7f4700;cpuset=[5])
EAL: lcore 6 is ready (tid=3cff3700;cpuset=[6])
EAL: lcore 1 is ready (tid=3f7f8700;cpuset=[1])
EAL: lcore 2 is ready (tid=3eff7700;cpuset=[2])
EAL: lcore 3 is ready (tid=3e7f6700;cpuset=[3])
EAL: lcore 7 is ready (tid=3c7f2700;cpuset=[7])
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI memory mapped at 0x7fea8248b000
EAL: PCI memory mapped at 0x7fea82487000
PMD: eth_igb_dev_init(): port_id 0 vendorID=0x8086 deviceID=0x1521
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI memory mapped at 0x7fea80500000
EAL: PCI memory mapped at 0x7fea82483000
PMD: eth_igb_dev_init(): port_id 1 vendorID=0x8086 deviceID=0x1521
EAL: PCI device 0000:01:00.2 on NUMA socket -1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI memory mapped at 0x7fea80400000
EAL: PCI memory mapped at 0x7fea8247f000
PMD: eth_igb_dev_init(): port_id 2 vendorID=0x8086 deviceID=0x1521
EAL: PCI device 0000:01:00.3 on NUMA socket -1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI memory mapped at 0x7fea79300000
EAL: PCI memory mapped at 0x7fea8247b000
PMD: eth_igb_dev_init(): port_id 3 vendorID=0x8086 deviceID=0x1521
Total number of attached devices: 1
Interface name: dpdk0
Configurations:
Number of CPU cores available: 8
Number of CPU cores to use: 8
Maximum number of concurrency per core: 1000000
Number of source ip to use: 64
Maximum number of preallocated buffers per core: 1000000
Receive buffer size: 1024
Send buffer size: 1024
TCP timeout seconds: 30
TCP timewait seconds: 0
NICs to print statistics: dpdk0
---------------------------------------------------------------------------------
Interfaces:
name: dpdk0, ifindex: 0, hwaddr: A0:36:9F:A1:4D:6C, ipaddr: 10.9.3.9, netmask: 255.255.255.0
Number of NIC queues: 8
---------------------------------------------------------------------------------
Loading routing configurations from : config/route.conf
Routes:
Destination: 10.9.3.0/24, Mask: 255.255.255.0, Masked: 10.9.3.0, Route: ifdx-0
Destination: 10.9.3.0/24, Mask: 255.255.255.0, Masked: 10.9.3.0, Route: ifdx-0
Destination: 10.9.1.0/24, Mask: 255.255.255.0, Masked: 10.9.1.0, Route: ifdx-0
---------------------------------------------------------------------------------
Loading ARP table from : config/arp.conf
ARP Table:
IP addr: 10.9.3.1, dst_hwaddr: 52:54:00:2E:62:A2
IP addr: 10.9.3.6, dst_hwaddr: 52:54:00:2E:62:A2
IP addr: 10.9.1.2, dst_hwaddr: 00:0A:F7:7C:57:E1
---------------------------------------------------------------------------------
Initializing port 0... PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fea79ee7300 hw_ring=0x7fea80721880 dma_addr=0x35b21880
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fea79ee6e00 hw_ring=0x7fea80731880 dma_addr=0x35b31880
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fea79ee6900 hw_ring=0x7fea80741880 dma_addr=0x35b41880
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fea79ee6400 hw_ring=0x7fea80751880 dma_addr=0x35b51880
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fea79ee5f00 hw_ring=0x7fea80761880 dma_addr=0x35b61880
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fea79ee5a00 hw_ring=0x7fea80771880 dma_addr=0x35b71880
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fea79ee5500 hw_ring=0x7fea80781880 dma_addr=0x35b81880
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fea79ee5000 hw_ring=0x7fea80791880 dma_addr=0x35b91880
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fea79ee4700 hw_ring=0x7fea807a1880 dma_addr=0x35ba1880
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fea79ee3e00 hw_ring=0x7fea807b1880 dma_addr=0x35bb1880
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fea79ee3500 hw_ring=0x7fea807c1880 dma_addr=0x35bc1880
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fea79ee2c00 hw_ring=0x7fea807d1880 dma_addr=0x35bd1880
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fea79ee2300 hw_ring=0x7fea807e1880 dma_addr=0x35be1880
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fea79ee1a00 hw_ring=0x7fea79000000 dma_addr=0x7ce400000
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fea79ee1100 hw_ring=0x7fea79010000 dma_addr=0x7ce410000
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fea79ee0800 hw_ring=0x7fea79020000 dma_addr=0x7ce420000
PMD: eth_igb_start(): <<
PMD: rte_eth_dev_config_restore: port 0: MAC address array not supported
done:
Checking link status.....................................done
Port 0 Link Up - speed 1000 Mbps - full-duplex
Configuration updated by mtcp_setconf().
Configurations:
Number of CPU cores available: 8
Number of CPU cores to use: 8
Maximum number of concurrency per core: 937500
Number of source ip to use: 64
Maximum number of preallocated buffers per core: 937500
Receive buffer size: 1024
Send buffer size: 1024
TCP timeout seconds: 30
TCP timewait seconds: 0
NICs to print statistics: dpdk0
---------------------------------------------------------------------------------
CPU 3: initialization finished.
[mtcp_create_context:1174] CPU 3 is in charge of printing stats.
[CPU 3] dpdk0 flows: 0, RX: 64(pps) (err: 0), 0.00(Gbps), TX: 64(pps), 0.00(Gbps)
[ ALL ] dpdk0 flows: 0, RX: 64(pps) (err: 0), 0.00(Gbps), TX: 64(pps), 0.00(Gbps)
CPU 5: initialization finished.
CPU 2: initialization finished.
CPU 4: initialization finished.
CPU 6: initialization finished.
CPU 7: initialization finished.
CPU 0: initialization finished.
CPU 1: initialization finished.
[WARINING] Available # addresses (516088) is smaller than the max concurrency (937500).
Thread 3 handles 2000000 flows. connecting to 10.9.3.6:80
[WARINING] Available # addresses (516088) is smaller than the max concurrency (937500).
[WARINING] Available # addresses (516088) is smaller than the max concurrency (937500).
Thread 4 handles 2000000 flows. connecting to 10.9.3.6:80
Thread 5 handles 2000000 flows. connecting to 10.9.3.6:80
[CPU 0] dpdk0 flows: 0, RX: 5152(pps) (err: 0), 0.01(Gbps), TX: 5094(pps), 0.00(Gbps)
[CPU 1] dpdk0 flows: 0, RX: 4563(pps) (err: 0), 0.01(Gbps), TX: 4563(pps), 0.00(Gbps)
[CPU 2] dpdk0 flows: 0, RX: 4855(pps) (err: 0), 0.01(Gbps), TX: 4855(pps), 0.00(Gbps)
[CPU 3] dpdk0 flows: 28000, RX: 4716(pps) (err: 0), 0.01(Gbps), TX: 9975(pps), 0.01(Gbps)
[CPU 4] dpdk0 flows: 13579, RX: 5357(pps) (err: 0), 0.01(Gbps), TX: 5347(pps), 0.00(Gbps)
[CPU 5] dpdk0 flows: 12891, RX: 4985(pps) (err: 0), 0.01(Gbps), TX: 4946(pps), 0.00(Gbps)
[CPU 6] dpdk0 flows: 0, RX: 4612(pps) (err: 0), 0.01(Gbps), TX: 4612(pps), 0.00(Gbps)
[CPU 7] dpdk0 flows: 0, RX: 5060(pps) (err: 0), 0.01(Gbps), TX: 5060(pps), 0.00(Gbps)
[ ALL ] dpdk0 flows: 54470, RX: 39300(pps) (err: 0), 0.06(Gbps), TX: 44452(pps), 0.03(Gbps)
[WARINING] Available # addresses (516087) is smaller than the max concurrency (937500).
[WARINING] Available # addresses (516088) is smaller than the max concurrency (937500).
[WARINING] Available # addresses (516088) is smaller than the max concurrency (937500).
[WARINING] Available # addresses (516088) is smaller than the max concurrency (937500).
[WARINING] Available # addresses (516088) is smaller than the max concurrency (937500).
Thread 0 handles 2000000 flows. connecting to 10.9.3.6:80
Thread 1 handles 2000000 flows. connecting to 10.9.3.6:80
Thread 6 handles 2000000 flows. connecting to 10.9.3.6:80
Thread 2 handles 2000000 flows. connecting to 10.9.3.6:80
Thread 7 handles 2000000 flows. connecting to 10.9.3.6:80
[CPU 0] dpdk0 flows: 312500, RX: 9719(pps) (err: 0), 0.01(Gbps), TX: 117560(pps), 0.09(Gbps)
[CPU 1] dpdk0 flows: 303921, RX: 8151(pps) (err: 0), 0.01(Gbps), TX: 110036(pps), 0.09(Gbps)
[CPU 2] dpdk0 flows: 312500, RX: 9740(pps) (err: 0), 0.01(Gbps), TX: 120092(pps), 0.09(Gbps)
[CPU 3] dpdk0 flows: 312500, RX: 12824(pps) (err: 0), 0.01(Gbps), TX: 146624(pps), 0.11(Gbps)
[CPU 4] dpdk0 flows: 312500, RX: 10267(pps) (err: 0), 0.01(Gbps), TX: 133376(pps), 0.10(Gbps)
[CPU 5] dpdk0 flows: 312500, RX: 10808(pps) (err: 0), 0.01(Gbps), TX: 128448(pps), 0.10(Gbps)
[CPU 6] dpdk0 flows: 307636, RX: 8108(pps) (err: 0), 0.01(Gbps), TX: 113976(pps), 0.09(Gbps)
[CPU 7] dpdk0 flows: 312500, RX: 9692(pps) (err: 0), 0.01(Gbps), TX: 111934(pps), 0.09(Gbps)
[ ALL ] dpdk0 flows: 2486557, RX: 79309(pps) (err: 0), 0.08(Gbps), TX: 982046(pps), 0.77(Gbps)
[ ALL ] connect: 2497968, read: 0 MB, write: 1 MB, completes: 0 (resp_time avg: 0, max: 0 us)
[ ALL ] connect: 2032, read: 0 MB, write: 0 MB, completes: 0 (resp_time avg: 0, max: 0 us)
[CPU 0] dpdk0 flows: 312500, RX: 8800(pps) (err: 0), 0.01(Gbps), TX: 149696(pps), 0.12(Gbps)
[CPU 1] dpdk0 flows: 312500, RX: 7700(pps) (err: 0), 0.01(Gbps), TX: 148736(pps), 0.12(Gbps)
[CPU 2] dpdk0 flows: 312500, RX: 7681(pps) (err: 0), 0.01(Gbps), TX: 150016(pps), 0.12(Gbps)
[CPU 3] dpdk0 flows: 312500, RX: 7654(pps) (err: 0), 0.01(Gbps), TX: 150016(pps), 0.12(Gbps)
[CPU 4] dpdk0 flows: 312500, RX: 7709(pps) (err: 0), 0.01(Gbps), TX: 150016(pps), 0.12(Gbps)
[CPU 5] dpdk0 flows: 312500, RX: 7595(pps) (err: 0), 0.01(Gbps), TX: 150016(pps), 0.12(Gbps)
[CPU 6] dpdk0 flows: 312500, RX: 7548(pps) (err: 0), 0.01(Gbps), TX: 149248(pps), 0.12(Gbps)
[CPU 7] dpdk0 flows: 312500, RX: 8007(pps) (err: 0), 0.01(Gbps), TX: 150016(pps), 0.12(Gbps)
[ ALL ] dpdk0 flows: 2500000, RX: 62694(pps) (err: 0), 0.07(Gbps), TX: 1197760(pps), 0.94(Gbps)
[ ALL ] connect: 0, read: 0 MB, write: 1 MB, completes: 0 (resp_time avg: 0, max: 0 us)
[CPU 0] dpdk0 flows: 312500, RX: 8640(pps) (err: 0), 0.01(Gbps), TX: 149440(pps), 0.12(Gbps)
[CPU 1] dpdk0 flows: 312500, RX: 6472(pps) (err: 0), 0.01(Gbps), TX: 149440(pps), 0.12(Gbps)
[CPU 2] dpdk0 flows: 312500, RX: 6130(pps) (err: 0), 0.01(Gbps), TX: 149440(pps), 0.12(Gbps)
[CPU 3] dpdk0 flows: 312500, RX: 6974(pps) (err: 0), 0.01(Gbps), TX: 149440(pps), 0.12(Gbps)
[CPU 4] dpdk0 flows: 312500, RX: 7821(pps) (err: 0), 0.01(Gbps), TX: 149440(pps), 0.12(Gbps)
[CPU 5] dpdk0 flows: 312500, RX: 6535(pps) (err: 0), 0.01(Gbps), TX: 149440(pps), 0.12(Gbps)
[CPU 6] dpdk0 flows: 312500, RX: 7593(pps) (err: 0), 0.01(Gbps), TX: 149440(pps), 0.12(Gbps)
[CPU 7] dpdk0 flows: 312500, RX: 7416(pps) (err: 0), 0.01(Gbps), TX: 149440(pps), 0.12(Gbps)
[ ALL ] dpdk0 flows: 2500000, RX: 57581(pps) (err: 0), 0.06(Gbps), TX: 1195520(pps), 0.93(Gbps) <===============2.5 million concurrent connection
[ ALL ] connect: 0, read: 0 MB, write: 1 MB, completes: 0 (resp_time avg: 0, max: 0 us)

------------------------------------------------------------------
Ltm::Virtual Server: vs_http
------------------------------------------------------------------
Status
Availability : unknown
State : enabled
Reason : The children pool member(s) either don't have service checking enabled, or service check results are not available yet
CMP : enabled
CMP Mode : all-cpus
Destination : 10.9.3.6:80
Traffic ClientSide Ephemeral General
Bits In 56.4G 0 -
Bits Out 95.0G 0 -
Packets In 92.7M 0 -
Packets Out 111.5M 0 -
Current Connections 1.0M 0 - <================= 1 million concurrent connections; this could be higher, but the VE cannot keep up with this concurrency rate.
Maximum Connections 1.2M 0 -
Total Connections 8.2M 0 -
Evicted Connections 0 0 -
Slow Connections Killed 0 0 -
Min Conn Duration/msec - - 156
Max Conn Duration/msec - - 2.4M
Mean Conn Duration/msec - - 609.8K
Total Requests - - 0

Example SSL ClientHello load test:

root@pktgen:/usr/src/mtcp# LD_LIBRARY_PATH=.:/usr/src/mtcp/dpdk/lib LD_PRELOAD=$* ./apps/ssl-dos/brute-shake 10.9.3.6 1600000 -N 8 -c 250000
Application configuration:
Host: 10.9.3.6
# of total_flows: 1600000
# of cores: 8
Concurrency: 250000
---------------------------------------------------------------------------------
Loading mtcp configuration from : mtcp-brute-shake.conf
Loading interface setting

[CPU 0] dpdk0 flows: 31250, RX: 11485(pps) (err: 0), 0.01(Gbps), TX: 71028(pps), 0.09(Gbps)
[CPU 1] dpdk0 flows: 31250, RX: 10897(pps) (err: 0), 0.01(Gbps), TX: 72320(pps), 0.10(Gbps)
[CPU 2] dpdk0 flows: 31250, RX: 3960(pps) (err: 0), 0.00(Gbps), TX: 54759(pps), 0.06(Gbps)
[CPU 3] dpdk0 flows: 31250, RX: 3413(pps) (err: 0), 0.00(Gbps), TX: 54034(pps), 0.06(Gbps)
[CPU 4] dpdk0 flows: 31250, RX: 9698(pps) (err: 0), 0.01(Gbps), TX: 67136(pps), 0.08(Gbps)
[CPU 5] dpdk0 flows: 31253, RX: 16285(pps) (err: 0), 0.01(Gbps), TX: 79912(pps), 0.15(Gbps)
[CPU 6] dpdk0 flows: 31250, RX: 11328(pps) (err: 0), 0.01(Gbps), TX: 70734(pps), 0.09(Gbps)
[CPU 7] dpdk0 flows: 31250, RX: 11252(pps) (err: 0), 0.01(Gbps), TX: 73762(pps), 0.11(Gbps)
[ ALL ] dpdk0 flows: 250003, RX: 78318(pps) (err: 0), 0.06(Gbps), TX: 543685(pps), 0.74(Gbps)
[ ALL ] connect: 71, read: 0 MB, write: 67 MB, completes: 71 (resp_time avg: 107258, max: 1102163 us)
[CPU 0] dpdk0 flows: 31250, RX: 9615(pps) (err: 0), 0.01(Gbps), TX: 54656(pps), 0.14(Gbps)
[CPU 1] dpdk0 flows: 31250, RX: 10655(pps) (err: 0), 0.01(Gbps), TX: 54656(pps), 0.15(Gbps)
[CPU 2] dpdk0 flows: 31250, RX: 8994(pps) (err: 0), 0.01(Gbps), TX: 54656(pps), 0.06(Gbps)
[CPU 3] dpdk0 flows: 31250, RX: 8951(pps) (err: 0), 0.01(Gbps), TX: 54656(pps), 0.05(Gbps)
[CPU 4] dpdk0 flows: 31250, RX: 9472(pps) (err: 0), 0.01(Gbps), TX: 54656(pps), 0.12(Gbps)
[CPU 5] dpdk0 flows: 31421, RX: 8307(pps) (err: 0), 0.01(Gbps), TX: 54656(pps), 0.18(Gbps)
[CPU 6] dpdk0 flows: 31250, RX: 9430(pps) (err: 0), 0.01(Gbps), TX: 54656(pps), 0.14(Gbps)
[CPU 7] dpdk0 flows: 31250, RX: 10946(pps) (err: 0), 0.01(Gbps), TX: 54656(pps), 0.16(Gbps)
[ ALL ] dpdk0 flows: 250171, RX: 76370(pps) (err: 0), 0.06(Gbps), TX: 437248(pps), 1.00(Gbps)
[CPU 0] dpdk0 flows: 31250, RX: 3341(pps) (err: 0), 0.00(Gbps), TX: 42816(pps), 0.16(Gbps)
[CPU 1] dpdk0 flows: 31250, RX: 3989(pps) (err: 0), 0.00(Gbps), TX: 42816(pps), 0.16(Gbps)
[CPU 2] dpdk0 flows: 31250, RX: 9972(pps) (err: 0), 0.01(Gbps), TX: 42816(pps), 0.06(Gbps)
[CPU 3] dpdk0 flows: 31250, RX: 10004(pps) (err: 0), 0.01(Gbps), TX: 42816(pps), 0.06(Gbps)
[CPU 4] dpdk0 flows: 31250, RX: 4438(pps) (err: 0), 0.00(Gbps), TX: 42816(pps), 0.13(Gbps)
[CPU 5] dpdk0 flows: 31572, RX: 5062(pps) (err: 0), 0.01(Gbps), TX: 42816(pps), 0.13(Gbps)
[CPU 6] dpdk0 flows: 31250, RX: 3305(pps) (err: 0), 0.00(Gbps), TX: 42816(pps), 0.15(Gbps)
[CPU 7] dpdk0 flows: 31250, RX: 4161(pps) (err: 0), 0.00(Gbps), TX: 42816(pps), 0.15(Gbps)
[ ALL ] dpdk0 flows: 250322, RX: 44272(pps) (err: 0), 0.04(Gbps), TX: 342528(pps), 1.00(Gbps)
[ ALL ] connect: 259, read: 0 MB, write: 40 MB, completes: 259 (resp_time avg: 230897, max: 2726227 us)
[ ALL ] connect: 61, read: 0 MB, write: 8 MB, completes: 61 (resp_time avg: 364367, max: 3109163 us)

VE CPU usage is high during the SSL stress test (top output from the VE):
top - 11:50:44 up 4 days, 20:52, 2 users, load average: 0.26, 0.18, 0.06
Tasks: 373 total, 6 running, 367 sleeping, 0 stopped, 0 zombie
Cpu0 : 55.1%us, 1.2%sy, 0.0%ni, 1.4%id, 0.0%wa, 0.3%hi, 41.7%si, 0.3%st
Cpu1 : 91.1%us, 4.0%sy, 0.0%ni, 4.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.3%st
Cpu2 : 89.0%us, 5.5%sy, 0.0%ni, 5.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.3%st
Cpu3 : 90.7%us, 4.7%sy, 0.0%ni, 4.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.3%st
Mem: 14403124k total, 13995684k used, 407440k free, 112888k buffers
Swap: 1048568k total, 441580k used, 606988k free, 358956k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND
10025 root RT 0 11.9g 130m 104m R 96.6 0.9 113:16.67 0 tmm.0 -T 4 --tmid 0 --npus 4 --platform Z100 -m -s 12088
10034 root RT 0 11.9g 130m 104m R 93.5 0.9 90:49.78 3 tmm.0 -T 4 --tmid 0 --npus 4 --platform Z100 -m -s 12088
10032 root RT 0 11.9g 130m 104m R 93.2 0.9 112:50.29 1 tmm.0 -T 4 --tmid 0 --npus 4 --platform Z100 -m -s 12088
10033 root RT 0 11.9g 130m 104m R 93.2 0.9 90:21.94 2 tmm.0 -T 4 --tmid 0 --npus 4 --platform Z100 -m -s 12088
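
For reference, the payload behind a ClientHello flood is tiny, while the server-side cost of parsing it and answering with its own handshake messages is large; that asymmetry is why the four tmm threads above sit around 90% CPU. The sketch below is a hypothetical construction of a minimal TLS ClientHello record (one cipher suite, no extensions, offsets per RFC 5246); it is illustrative only, not the actual brute-shake code.

/* Hypothetical sketch: build a minimal TLS ClientHello record of the
 * kind a ClientHello stress test sends on each fresh connection.
 * One cipher suite, no extensions; 50 bytes total. */
#include <stdint.h>
#include <stdlib.h>

static size_t build_client_hello(uint8_t *buf)
{
    size_t off = 0;

    /* TLS record header: handshake(22), record version TLS1.0, length 45 */
    buf[off++] = 0x16; buf[off++] = 0x03; buf[off++] = 0x01;
    buf[off++] = 0x00; buf[off++] = 0x2d;

    /* Handshake header: client_hello(1), 3-byte body length 41 */
    buf[off++] = 0x01;
    buf[off++] = 0x00; buf[off++] = 0x00; buf[off++] = 0x29;

    /* client_version TLS1.2 */
    buf[off++] = 0x03; buf[off++] = 0x03;

    /* 32-byte client random; quality is irrelevant for a flood */
    for (int i = 0; i < 32; i++)
        buf[off++] = (uint8_t)rand();

    buf[off++] = 0x00;                    /* session_id length: 0 */
    buf[off++] = 0x00; buf[off++] = 0x02; /* cipher_suites: 2 bytes follow */
    buf[off++] = 0x00; buf[off++] = 0x2f; /* TLS_RSA_WITH_AES_128_CBC_SHA */
    buf[off++] = 0x01; buf[off++] = 0x00; /* compression: null only */

    return off;                           /* 50 bytes */
}

On an mTCP connection the tool would write this buffer (e.g. with mtcp_write) as soon as the connect completes, then move on to the next connection.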

Example of SYN flooding at line rate:
root@pktgen:/usr/src/MoonGen# build/MoonGen examples/l3-tcp-syn-flood.lua 0 10.0.0.1 16000000 10000

PMD: To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
Port 0 (A0:36:9F:A1:4D:6C) is up: full-duplex 1000 MBit/s
INFO: Detected an IPv4 address.
18:56:23.699493 ETH a0:36:9f:a1:4d:6d > 52:54:00:2e:62:a2 type 0x0800 (IP4)
IP4 10.0.0.1 > 10.9.3.6 ver 4 ihl 5 tos 0 len 46 id 0 flags 0 frag 0 ttl 64 proto 0x06 (TCP) cksum 0x0000
TCP 1025 > 443 seq# 1 ack# 0 offset 0x5 reserved 0x00 flags 0x02 [X|X|X|X|SYN|X] win 10 cksum 0x0000 urg 0
0x0000: 5254 002e 62a2 a036 9fa1 4d6d 0800 4500
0x0010: 002e 0000 0000 4006 0000 0a00 0001 0a09
0x0020: 0306 0401 01bb 0000 0001 0000 0000 5002
0x0030: 000a 0000 0000 0000 0000 0000
18:56:23.699676 ETH a0:36:9f:a1:4d:6d > 52:54:00:2e:62:a2 type 0x0800 (IP4)
IP4 10.0.0.2 > 10.9.3.6 ver 4 ihl 5 tos 0 len 46 id 0 flags 0 frag 0 ttl 64 proto 0x06 (TCP) cksum 0x0000
TCP 1025 > 443 seq# 1 ack# 0 offset 0x5 reserved 0x00 flags 0x02 [X|X|X|X|SYN|X] win 10 cksum 0x0000 urg 0
0x0000: 5254 002e 62a2 a036 9fa1 4d6d 0800 4500
0x0010: 002e 0000 0000 4006 0000 0a00 0002 0a09
0x0020: 0306 0401 01bb 0000 0001 0000 0000 5002
0x0030: 000a 0000 0000 0000 0000 0000
18:56:23.699797 ETH a0:36:9f:a1:4d:6d > 52:54:00:2e:62:a2 type 0x0800 (IP4)
IP4 10.0.0.3 > 10.9.3.6 ver 4 ihl 5 tos 0 len 46 id 0 flags 0 frag 0 ttl 64 proto 0x06 (TCP) cksum 0x0000
TCP 1025 > 443 seq# 1 ack# 0 offset 0x5 reserved 0x00 flags 0x02 [X|X|X|X|SYN|X] win 10 cksum 0x0000 urg 0
0x0000: 5254 002e 62a2 a036 9fa1 4d6d 0800 4500
0x0010: 002e 0000 0000 4006 0000 0a00 0003 0a09
0x0020: 0306 0401 01bb 0000 0001 0000 0000 5002
0x0030: 000a 0000 0000 0000 0000 0000
[Device: id=0] Sent 1481340 packets, current rate 1.48 Mpps, 758.39 MBit/s, 995.39 MBit/s wire rate. <====================== ~1.48 Mpps at ~996 MBit/s wire rate: 1GbE line rate for minimum-size frames
[Device: id=0] Sent 2963453 packets, current rate 1.48 Mpps, 758.81 MBit/s, 995.94 MBit/s wire rate.
[Device: id=0] Sent 4446076 packets, current rate 1.48 Mpps, 759.09 MBit/s, 996.31 MBit/s wire rate.
[Device: id=0] Sent 5928571 packets, current rate 1.48 Mpps, 759.02 MBit/s, 996.21 MBit/s wire rate.
[Device: id=0] Sent 7411068 packets, current rate 1.48 Mpps, 759.01 MBit/s, 996.20 MBit/s wire rate.
[Device: id=0] Sent 8893308 packets, current rate 1.48 Mpps, 758.90 MBit/s, 996.06 MBit/s wire rate.
[Device: id=0] Sent 10375803 packets, current rate 1.48 Mpps, 758.98 MBit/s, 996.17 MBit/s wire rate.
[Device: id=0] Sent 11857787 packets, current rate 1.48 Mpps, 758.73 MBit/s, 995.83 MBit/s wire rate.
[Device: id=0] Sent 13340283 packets, current rate 1.48 Mpps, 759.00 MBit/s, 996.19 MBit/s wire rate.
[Device: id=0] Sent 14822549 packets, current rate 1.48 Mpps, 758.86 MBit/s, 996.01 MBit/s wire rate.
[Device: id=0] Sent 16304759 packets, current rate 1.48 Mpps, 758.83 MBit/s, 995.97 MBit/s wire rate.
[Device: id=0] Sent 17787384 packets, current rate 1.48 Mpps, 759.09 MBit/s, 996.31 MBit/s wire rate.
[Device: id=0] Sent 19270007 packets, current rate 1.48 Mpps, 759.09 MBit/s, 996.31 MBit/s wire rate.
[Device: id=0] Sent 20751607 packets, current rate 1.48 Mpps, 758.52 MBit/s, 995.56 MBit/s wire rate.
[Device: id=0] Sent 22233975 packets, current rate 1.48 Mpps, 758.97 MBit/s, 996.15 MBit/s wire rate.
[Device: id=0] Sent 23716600 packets, current rate 1.48 Mpps, 759.09 MBit/s, 996.31 MBit/s wire rate.
[Device: id=0] Sent 25199223 packets, current rate 1.48 Mpps, 759.09 MBit/s, 996.31 MBit/s wire rate.
[Device: id=0] Sent 26681591 packets, current rate 1.48 Mpps, 758.93 MBit/s, 996.09 MBit/s wire rate.
[Device: id=0] Sent 28164087 packets, current rate 1.48 Mpps, 759.02 MBit/s, 996.22 MBit/s wire rate.
[Device: id=0] Sent 29646200 packets, current rate 1.48 Mpps, 758.82 MBit/s, 995.95 MBit/s wire rate.
[Device: id=0] Sent 31128696 packets, current rate 1.48 Mpps, 759.03 MBit/s, 996.22 MBit/s wire rate.
[Device: id=0] Sent 32611319 packets, current rate 1.48 Mpps, 759.09 MBit/s, 996.31 MBit/s wire rate.
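
The l3-tcp-syn-flood.lua script itself is MoonGen Lua, but the per-packet work is plain from the dumps above: a fixed 60-byte Ethernet/IPv4/TCP SYN template whose source address rotates per packet (10.0.0.1, 10.0.0.2, 10.0.0.3, ...) and whose IP/TCP checksums are left at zero for NIC offload, which is why the dumped checksums read 0x0000. Below is a hedged C sketch of the equivalent header fill; a hypothetical helper, not the MoonGen code, with MAC and IP values copied from the dump.

/* Hypothetical per-packet fill for a SYN flood template. The caller
 * supplies a zeroed 60-byte frame buffer; checksums stay zero on the
 * assumption the NIC computes them (hardware offload). */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include <net/ethernet.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>

static void fill_syn(uint8_t *frame, uint32_t src_ip_host)
{
    struct ether_header *eth = (struct ether_header *)frame;
    struct iphdr  *ip  = (struct iphdr *)(frame + sizeof(*eth));
    struct tcphdr *tcp = (struct tcphdr *)((uint8_t *)ip + sizeof(*ip));

    memcpy(eth->ether_dhost, "\x52\x54\x00\x2e\x62\xa2", 6); /* BIGIP side */
    memcpy(eth->ether_shost, "\xa0\x36\x9f\xa1\x4d\x6d", 6); /* generator NIC */
    eth->ether_type = htons(ETHERTYPE_IP);

    ip->version  = 4;  ip->ihl = 5;
    ip->tot_len  = htons(46);          /* 20 IP + 20 TCP + 6 pad, as dumped */
    ip->ttl      = 64;
    ip->protocol = IPPROTO_TCP;
    ip->saddr    = htonl(src_ip_host); /* rotated: 10.0.0.1, 10.0.0.2, ... */
    ip->daddr    = htonl(0x0a090306);  /* 10.9.3.6 */

    tcp->source = htons(1025);
    tcp->dest   = htons(443);
    tcp->seq    = htonl(1);
    tcp->doff   = 5;
    tcp->syn    = 1;                   /* flags 0x02: SYN */
    tcp->window = htons(10);
}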

Example using the mTCP-ported multithreaded apachebench for HTTP/HTTPS load testing
(apachebench is slightly more complex than epwget and has a bigger memory footprint, so it is not ideal for million-connection concurrency when the given hardware has limited memory, but it has more features...)
root@pktgen:/usr/src/mtcp/apps/apache_benchmark_deprecated/support# LD_LIBRARY_PATH=.:/usr/src/mtcp/dpdk/lib:/usr/local/lib LD_PRELOAD=$* .libs/ab -n 1000000 -c 80000 -N 8 -L 64 10.9.3.6/

This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
---------------------------------------------------------------------------------
Loading mtcp configuration from : /etc/mtcp/config/mtcp.conf
Loading interface setting
..............CUT........

[CPU 0] dpdk0 flows: 10000, RX: 18370(pps) (err: 0), 0.02(Gbps), TX: 37765(pps), 0.03(Gbps)
[CPU 1] dpdk0 flows: 10012, RX: 32126(pps) (err: 0), 0.03(Gbps), TX: 51169(pps), 0.04(Gbps)
[CPU 2] dpdk0 flows: 10004, RX: 26160(pps) (err: 0), 0.02(Gbps), TX: 45429(pps), 0.04(Gbps)
[CPU 3] dpdk0 flows: 10000, RX: 20821(pps) (err: 0), 0.02(Gbps), TX: 40208(pps), 0.03(Gbps)
[CPU 4] dpdk0 flows: 10033, RX: 22857(pps) (err: 0), 0.02(Gbps), TX: 41761(pps), 0.04(Gbps)
[CPU 5] dpdk0 flows: 10014, RX: 66358(pps) (err: 0), 0.06(Gbps), TX: 86359(pps), 0.07(Gbps)
[CPU 6] dpdk0 flows: 10000, RX: 24649(pps) (err: 0), 0.02(Gbps), TX: 44590(pps), 0.04(Gbps)
[CPU 7] dpdk0 flows: 10004, RX: 23713(pps) (err: 0), 0.02(Gbps), TX: 45680(pps), 0.04(Gbps)
[ ALL ] dpdk0 flows: 80067, RX: 235054(pps) (err: 0), 0.22(Gbps), TX: 392961(pps), 0.34(Gbps)
[CPU 0] dpdk0 flows: 10000, RX: 31844(pps) (err: 0), 0.03(Gbps), TX: 36374(pps), 0.03(Gbps)
[CPU 1] dpdk0 flows: 10054, RX: 36440(pps) (err: 0), 0.03(Gbps), TX: 44083(pps), 0.04(Gbps)
[CPU 2] dpdk0 flows: 10003, RX: 40730(pps) (err: 0), 0.04(Gbps), TX: 49276(pps), 0.04(Gbps)
[CPU 3] dpdk0 flows: 10000, RX: 28863(pps) (err: 0), 0.03(Gbps), TX: 35395(pps), 0.03(Gbps)
[CPU 4] dpdk0 flows: 10001, RX: 31824(pps) (err: 0), 0.03(Gbps), TX: 39832(pps), 0.04(Gbps)
[CPU 5] dpdk0 flows: 10048, RX: 61479(pps) (err: 0), 0.05(Gbps), TX: 64870(pps), 0.05(Gbps)
[CPU 6] dpdk0 flows: 10007, RX: 38790(pps) (err: 0), 0.04(Gbps), TX: 41937(pps), 0.03(Gbps)
[CPU 7] dpdk0 flows: 10006, RX: 39920(pps) (err: 0), 0.04(Gbps), TX: 41469(pps), 0.03(Gbps)
[ ALL ] dpdk0 flows: 80118, RX: 309890(pps) (err: 0), 0.29(Gbps), TX: 353236(pps), 0.30(Gbps)

root@pktgen:/usr/src/mtcp/apps/apache_benchmark_deprecated/support# LD_LIBRARY_PATH=.:/usr/src/mtcp/dpdk/lib:/usr/local/lib LD_PRELOAD=$* .libs/ab -n 1000000 -c 10000 -N 8 -L 64 https://10.9.3.6/


This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
---------------------------------------------------------------------------------
Loading mtcp configuration from : /etc/mtcp/config/mtcp.conf
Loading interface setting
....................CUT...............
[CPU 0] dpdk0 flows: 1000, RX: 2457(pps) (err: 0), 0.00(Gbps), TX: 3372(pps), 0.00(Gbps)
[CPU 1] dpdk0 flows: 1000, RX: 3159(pps) (err: 0), 0.01(Gbps), TX: 3759(pps), 0.01(Gbps)
[CPU 2] dpdk0 flows: 1000, RX: 2000(pps) (err: 0), 0.00(Gbps), TX: 3000(pps), 0.00(Gbps)
[CPU 3] dpdk0 flows: 1000, RX: 2000(pps) (err: 0), 0.00(Gbps), TX: 3000(pps), 0.00(Gbps)
[CPU 4] dpdk0 flows: 1000, RX: 2294(pps) (err: 0), 0.00(Gbps), TX: 3257(pps), 0.00(Gbps)
[CPU 5] dpdk0 flows: 1000, RX: 2264(pps) (err: 0), 0.00(Gbps), TX: 3229(pps), 0.00(Gbps)
[CPU 6] dpdk0 flows: 1000, RX: 2909(pps) (err: 0), 0.01(Gbps), TX: 2579(pps), 0.00(Gbps)
[CPU 7] dpdk0 flows: 1000, RX: 2286(pps) (err: 0), 0.00(Gbps), TX: 3234(pps), 0.00(Gbps)
[ ALL ] dpdk0 flows: 8000, RX: 19369(pps) (err: 0), 0.04(Gbps), TX: 25430(pps), 0.04(Gbps)
[CPU 0] dpdk0 flows: 1000, RX: 871(pps) (err: 0), 0.01(Gbps), TX: 712(pps), 0.00(Gbps)
[CPU 1] dpdk0 flows: 1000, RX: 895(pps) (err: 0), 0.01(Gbps), TX: 713(pps), 0.00(Gbps)
[CPU 2] dpdk0 flows: 1000, RX: 606(pps) (err: 0), 0.00(Gbps), TX: 512(pps), 0.00(Gbps)
[CPU 3] dpdk0 flows: 1000, RX: 662(pps) (err: 0), 0.00(Gbps), TX: 560(pps), 0.00(Gbps)
[CPU 4] dpdk0 flows: 1000, RX: 891(pps) (err: 0), 0.01(Gbps), TX: 736(pps), 0.00(Gbps)
[CPU 5] dpdk0 flows: 1000, RX: 881(pps) (err: 0), 0.01(Gbps), TX: 713(pps), 0.00(Gbps)
[CPU 6] dpdk0 flows: 1000, RX: 153(pps) (err: 0), 0.00(Gbps), TX: 151(pps), 0.00(Gbps)
[CPU 7] dpdk0 flows: 1000, RX: 872(pps) (err: 0), 0.01(Gbps), TX: 714(pps), 0.00(Gbps)
[ ALL ] dpdk0 flows: 8000, RX: 5831(pps) (err: 0), 0.03(Gbps), TX: 4811(pps), 0.01(Gbps)
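
All of the mTCP-based generators above (epwget, brute-shake, the apachebench port) drive their load from the same per-core pattern: one mTCP context pinned to each core, non-blocking mtcp_ sockets, and an mTCP epoll loop, so no descriptors or locks are shared between cores. The sketch below is a simplified reconstruction of that pattern against the mtcp_* API in mtcp_api.h/mtcp_epoll.h; error handling, timers, and the reconnect logic that keeps concurrency constant are omitted, and GET_REQUEST/MAX_EVENTS are illustrative constants. See apps/example/epwget.c in the mTCP tree for the real loop.

/* Sketch of one mTCP worker core: open this core's share of the target
 * concurrency, then service all connections from a single epoll loop. */
#include <string.h>
#include <netinet/in.h>
#include <mtcp_api.h>
#include <mtcp_epoll.h>

#define MAX_EVENTS 1024
static const char GET_REQUEST[] = "GET / HTTP/1.1\r\nHost: 10.9.3.6\r\n\r\n";

static void run_core(int core, struct sockaddr_in *daddr, int per_core_conns)
{
    mctx_t mctx = mtcp_create_context(core);  /* mTCP thread for this core */
    int ep = mtcp_epoll_create(mctx, MAX_EVENTS);
    struct mtcp_epoll_event ev, events[MAX_EVENTS];
    char buf[8192];

    for (int i = 0; i < per_core_conns; i++) {
        int s = mtcp_socket(mctx, AF_INET, SOCK_STREAM, 0);
        mtcp_setsock_nonblock(mctx, s);
        mtcp_connect(mctx, s, (struct sockaddr *)daddr, sizeof(*daddr));
        ev.events = MTCP_EPOLLOUT | MTCP_EPOLLIN; /* writable == connected */
        ev.data.sockid = s;
        mtcp_epoll_ctl(mctx, ep, MTCP_EPOLL_CTL_ADD, s, &ev);
    }

    for (;;) {
        int n = mtcp_epoll_wait(mctx, ep, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int s = events[i].data.sockid;
            if (events[i].events & MTCP_EPOLLOUT)
                /* connect finished: send the request (a real tool re-arms
                 * the event set so the request is only sent once) */
                mtcp_write(mctx, s, (char *)GET_REQUEST,
                           sizeof(GET_REQUEST) - 1);
            if (events[i].events & MTCP_EPOLLIN)   /* drain the response */
                if (mtcp_read(mctx, s, buf, sizeof(buf)) <= 0)
                    mtcp_close(mctx, s);  /* real tools reconnect here */
        }
    }
}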
