
Network Load Testing of Two Interconnected Virtual networks

Network Load Testing

Table of Contents

1.0 Introduction

2.0 Logical Network Design

3.0 SNMP Network Load Testing

3.1 Network Traffic Initiation

3.2 CPU Load Monitoring

3.3 Memory Usage Monitoring

3.4 Disk I/O Load Monitoring

4.0 SNMP Network Monitoring

4.1 SNMP IP Network Monitoring

4.2 SNMP Network Monitoring

5.0 Measurements for Quality of Service (QoS)

5.1 Ping ICMP Jitter Calculations

5.2 HTTP Streaming Calculations

5.2.1 HTTP Streaming Jitter Calculations

5.2.2 HTTP Streaming Round Trip Time (RTT) and Throughput Calculations

5.3 UDP Streaming Jitter Calculations

5.4 HTTP Siege

5.5 IPERF Network Measurements

5.5.1 TCP Measurements

5.5.2 UDP Measurements

6.0 Conclusion

1.0    Introduction

This report outlines the implementation, operation and demonstration of two interconnected virtual networks. A series of testing methodologies was designed and implemented to measure simple Quality of Service (QoS) metrics: average round trip time (RTT), throughput and the distribution of packet arrival times. These QoS metrics were then measured under a range of network load conditions, in particular by controlling the source of the network load traffic.

2.0    Logical Network Design

Image 1 illustrates the applied network architecture employed to conduct the outlined investigations regarding simple QoS metrics.

Image 1: Implemented Network Infrastructure

The defined network interfaces operate in host-only mode. The FTP and web servers are confined to the CentOS-Server. The following rules were applied to each network interface to support communication between all network devices. It should be noted that both the CentOS-Client and CentOS-Server require only one connected network interface, Eth0. The following network interface settings were configured on the virtual machines:

CentOS-Client (Eth0)

IP Address – 192.168.66.104

Subnet Mask – 255.255.255.0

Gateway – 192.168.66.101

CentOS-Gateway1 (Eth0)

IP Address – 10.0.0.1

Subnet Mask – 255.255.255.252

Gateway – 10.0.0.2

CentOS-Gateway1 (Eth1)

IP Address – 192.168.66.101

Subnet Mask – 255.255.255.0

CentOS-Gateway2 (Eth0)

IP Address – 10.0.0.2

Subnet Mask – 255.255.255.252

Gateway – 10.0.0.1

CentOS-Gateway2 (Eth1)

IP Address – 192.168.56.101

Subnet Mask – 255.255.255.0

CentOS-Server (Eth0)

IP Address – 192.168.56.104

Subnet Mask – 255.255.255.0

Gateway – 192.168.56.101

The following global settings were applied on both CentOS-Gateway1 and CentOS-Gateway2:

IP Forwarding

The /etc/sysctl.conf file on both CentOS-Gateway1 and CentOS-Gateway2 was modified to enable IP forwarding, by setting net.ipv4.ip_forward=1.

 

Network Routing

The following routing rules were applied to the network, permitting packets to be routed between the two subnets.

Gateway 1 – route add -net 192.168.56.0/24 gw 10.0.0.2 dev eth0

Gateway 2 – route add -net 192.168.66.0/24 gw 10.0.0.1 dev eth0
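Taken together, the per-gateway configuration can be sketched as follows. This is a sketch only: it assumes the interface naming in the addressing table above, and uses sysctl -w to apply the forwarding change at runtime in addition to the /etc/sysctl.conf edit.

```shell
# CentOS-Gateway1: enable forwarding, route the far subnet via the link peer.
sysctl -w net.ipv4.ip_forward=1
route add -net 192.168.56.0/24 gw 10.0.0.2 dev eth0

# CentOS-Gateway2: mirror configuration for the opposite direction.
sysctl -w net.ipv4.ip_forward=1
route add -net 192.168.66.0/24 gw 10.0.0.1 dev eth0
```

Both commands must be run as root on the respective gateway.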

 

3.0    SNMP Network Load Testing

Simple Network Management Protocol (SNMP) is a widely implemented protocol used to manage devices residing on a network. SNMP agents were installed and configured on all devices in the network; however, the installation of the iReasoning MIB Browser was isolated to the CentOS-Client machine.

3.1      Network Traffic Initiation

ICMP Traffic

ICMP traffic was generated from the CentOS-Gateway1 to the CentOS-Server through the initiation of the following ping command:

ping 192.168.56.104

Monitoring of ICMP traffic occurred on the CentOS-Client utilising MIB Browser. Specifically, the icmpInMsgs and icmpOutMsgs SNMP OIDs were monitored. The icmpInMsgs OID accounts for the total number of ICMP packets received by the specified host, while icmpOutMsgs records the total number of ICMP packets that the specified host attempted to send. It should be noted that both OIDs count all ICMP packets, inclusive of ICMP error packets.
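MIB Browser reads these counters over SNMP; the same objects can also be polled from the command line with the Net-SNMP tools. A sketch, assuming the agents are configured for SNMPv2c with the community string public:

```shell
# Poll the ICMP counters on the CentOS-Server via SNMPv2c.
snmpget -v 2c -c public 192.168.56.104 ICMP-MIB::icmpInMsgs.0
snmpget -v 2c -c public 192.168.56.104 ICMP-MIB::icmpOutMsgs.0
```

Repeated polls at a fixed interval reproduce the time series that MIB Browser graphs.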

Chart 1: icmpInMsgs Monitoring with MIB Browser.

Chart 2: icmpOutMsgs Monitoring with MIB Browser.

MIB Browser and the applied OIDs were proficient in capturing all transmitted ICMP echo requests, and their corresponding echo reply packets, between CentOS-Gateway1 and the CentOS-Server.

ICMP Network Gateway Flood

An ICMP network gateway flood was instigated using three network devices, the CentOS-Server, CentOS-Client and CentOS-Gateway2. The CentOS-Server was used to generate the network flood on the CentOS-Gateway2 using the following command:

GW=$(route -n | grep UG | awk '{print $2}'); ping -fq $GW >/dev/null 2>&1 &

The result of the command was monitored on the CentOS-Client utilising MIB Browser, with the icmpInMsgs OID used to display the generated activity on the CentOS-Gateway2. Chart 3, depicted below, indicates an increase in CentOS-Gateway2 ICMP activity following the execution of this command on the CentOS-Server.

Chart 3: icmpInMsgs Gateway Flood Monitoring with MIB Browser.

MIB Browser, specifically the icmpInMsgs OID, was capable of capturing and graphically representing the network gateway flood, portraying a significant increase in ICMP traffic.

 

 

TCP and UDP Traffic

TCP and UDP connections were established between the CentOS-Client and the CentOS-Server, and were monitored on the CentOS-Client machine using MIB Browser. Both connection types were established using Ncat on the CentOS-Client machine.

TCP Connections – ncat 192.168.56.104 443

UDP Connections – ncat -u 192.168.56.104 443
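For these connections to establish, a corresponding listener is assumed on the CentOS-Server side (not shown in the report); with Ncat this could be:

```shell
# Listeners on the CentOS-Server (ports below 1024 require root).
ncat -l 443        # TCP listener on port 443
ncat -l -u 443     # UDP listener on port 443
```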

With regard to TCP connections, MIB Browser was configured on the CentOS-Client machine to monitor the CentOS-Server using the tcpInSegs OID. The tcpInSegs OID counts the total number of segments received on a TCP connection, inclusive of those received in error. Chart 4 depicts the establishment, termination and re-establishment of TCP connections between the CentOS-Client and the CentOS-Server; peaks in the graph represent a newly established TCP connection.

Chart 4: tcpInSegs Monitoring with MIB Browser.

Similarly, MIB Browser was configured on the CentOS-Client machine to monitor UDP connections using the udpInDatagrams OID. The udpInDatagrams OID counts the total number of UDP datagrams delivered to UDP users on the specified host. Chart 5 depicts the opening, closing and reopening of UDP connections between the CentOS-Client and the CentOS-Server machines; each peak represents the initiation of a new UDP connection.

Chart 5: udpInDatagrams Monitoring with MIB Browser.

3.2      CPU Load Monitoring

Test 1

CPU load monitoring tests were conducted on the CentOS-Server, with the test outcome monitored using MIB Browser on the CentOS-Client machine. CPU load was generated by running the following command on the CentOS-Server:

cat /dev/zero > /dev/null

Monitoring on the CentOS-Client machine was conducted using the MIB Browser OID hrProcessorLoad. The hrProcessorLoad OID reports the average percentage of time, over the preceding minute, that the specified processor was active.

Chart 6: CPU Load Monitoring with MIB Browser.

Test 2

A secondary test was conducted to generate and monitor CPU load, created by forcing the maximisation of one CPU for 60 seconds on the CentOS-Server machine. This test was conducted using the following command:

stress --cpu 1 --timeout 60

MIB Browser on the CentOS-Client was used to monitor CPU load. Chart 7 outlines the implementation. The peaks in the chart represent the re-initiation of CPU load whilst the troughs indicate the termination of CPU load on the CentOS-Server.

Chart 7: CPU Stress Test Load Monitoring with MIB Browser.

3.3      Memory Usage Monitoring

Memory monitoring was conducted using two differing methodologies: a Bash shell script and MIB Browser. The initial load of the CentOS-Server machine was identified using the uptime command, producing the following output:

Image 2: Initial Resource Recordings of CentOS-Server Machine.

The command output shows limited activity on the CentOS-Server machine prior to memory load testing. In both instances, memory load was initiated with the following command:

stress --vm 1 --vm-bytes 256m --timeout 60

In the case of MIB Browser monitoring, this command was activated and deactivated continuously, producing results that depict the spikes in memory load as the command was executed. Once again, this activity was monitored on the CentOS-Client machine using the hrProcessorLoad OID.

Chart 8: Memory Stress Test Load Monitoring with MIB Browser.

Memory load monitoring was also conducted locally on the CentOS-Server machine with the use of a Bash Shell script depicted below.

Image 3: Bash Shell Script for Local Monitoring.
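The script in Image 3 is not reproduced here; as an illustrative stand-in under stated assumptions, the memory-usage portion of such local monitoring can be sketched as a function that derives the percentage of physical memory in use from /proc/meminfo:

```shell
# Hypothetical stand-in for part of the local monitoring script: prints the
# percentage of physical memory currently in use, computed from /proc/meminfo
# (requires the MemAvailable field, present on Linux kernels >= 3.14).
mem_used_pct() {
    awk '/^MemTotal:/ { total = $2 }
         /^MemAvailable:/ { avail = $2 }
         END { printf "%.1f\n", (total - avail) / total * 100 }' /proc/meminfo
}
```

Run in a loop (e.g. `while true; do mem_used_pct; sleep 1; done`), this yields the continually updated statistics described below.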

Script execution enabled local monitoring of memory load on the CentOS-Server machine, outputting continually updated statistics on the percentage of overall resources being consumed. It is evident in Image 4 that the execution of a memory stress test on the CentOS-Server generated significant consumption of both memory and disk resources.

Image 4: Local Load Monitoring of CentOS-Server.

3.4      Disk I/O Load Monitoring

Disk I/O load was produced on the CentOS-Server machine with the following command:

while true; do ls -alZR / >/dev/null 2>&1; done

The disk I/O load was monitored on the CentOS-Client machine using MIB Browser, specifically the hrProcessorLoad OID. The disk I/O load was instigated and terminated over a period of time; its effect is registered in Chart 9.

Chart 9: Disk I/O Load Monitoring with MIB Browser.

4.0     SNMP Network Monitoring

Monitoring of network devices was performed with MIB Browser on the CentOS-Client; the CentOS-Server and the CentOS-Gateway2 were the monitored devices. The CentOS-Gateway1 was not monitored, as its configuration and architecture mirror those of the CentOS-Gateway2; its capabilities are therefore presumed to be the same.

4.1      SNMP IP Network Monitoring

Test 1

SNMP IP observations were taken on the CentOS-Gateway2, which was monitored for two forms of IP transmission. First, IP transmission requests were monitored with the MIB Browser OID ipOutRequests. The ipOutRequests OID records the total number of IPv4 packets, including ICMP packets, supplied by local protocols to IP for further transmission along the network. Chart 10 portrays transmitted ICMP packets across the IPv4 configured network.

Chart 10: CentOS-Gateway2 with MIB Browser using ipOutRequests.

 

Test 2

MIB Browser was subsequently employed for the second monitoring test on the CentOS-Gateway2, which was monitored for both functionality and the ability to forward packets to their designated addresses within the network. This test was conducted using the ipForwDatagrams OID in MIB Browser. The ipForwDatagrams OID accounts for the total number of input datagrams that traverse the CentOS-Gateway2 and for which an attempt was made to forward them to their final network destination.

Chart 11: CentOS-Gateway2 with MIB Browser using ipForwDatagrams.

Chart 11 indicates the forwarded ICMP packets, traversing the CentOS-Gateway2 prior to reaching their defined destination, the CentOS-Server. ICMP packets were generated on the CentOS-Gateway1 utilising the command:

ping 192.168.56.104

It is evident from Chart 11 that the CentOS-Gateway2 is functional in its operation as both a network gateway and a packet forwarder. The steady increase of data in Chart 11, represents the traversal of ICMP packets to their defined destination.

 

Test 3

Unlike the previously executed SNMP IP network monitoring tests, this examination was conducted against the CentOS-Server rather than the CentOS-Gateway2. Monitoring was conducted on the CentOS-Client using MIB Browser with the ipInReceives OID. The ipInReceives OID represents the total number of input datagrams received from all CentOS-Server interfaces, inclusive of those received in error. ICMP network traffic was generated from the CentOS-Gateway1 device with the following command:

ping 192.168.56.104

Chart 12: CentOS-Server with MIB Browser using ipInReceives.

Chart 12 represents the receipt of ICMP packets by the CentOS-Server from all network interfaces. Whilst ICMP packets were used in this testing environment, any form of network traffic could be applied in this test. Conducting ipInReceives monitoring thus permits examination of network devices, ensuring the receipt of all specified packets.

4.2      SNMP Network Monitoring

SNMP operation was monitored with the assistance of the CentOS-Server. Examination of the implemented tests was conducted on the CentOS-Client utilising MIB Browser.

 

Test 1

The first SNMP monitoring test conducted on the CentOS-Server was executed using the snmpInPkts OID within MIB Browser. This OID accounts for the total number of SNMP messages delivered to the monitored SNMP entity, here the agent on the CentOS-Server, from the transport service.

Chart 13: CentOS-Server with MIB Browser using snmpInPkts.

The above graph represents the total number of SNMP messages delivered to the CentOS-Server SNMP entity, in this instance the polling requests originating from the CentOS-Client. This required no generation of traffic across the network: all communication between the CentOS-Server and the CentOS-Client occurred as a result of their corresponding SNMP configurations. There is a gradual increase in the number of SNMP message transmissions, evidencing a continuous, open communication channel between the CentOS-Server and the CentOS-Client SNMP entity.

Test 2

Test two regarding SNMP monitoring of the network was again conducted upon the CentOS-Server, with MIB Browser, specifically the snmpOutPkts OID, enlisted on the CentOS-Client. The snmpOutPkts OID accumulates the total number of SNMP messages transmitted by the CentOS-Server SNMP entity, chiefly the responses returned to the CentOS-Client. Tests 1 and 2 were conducted simultaneously to examine the nature of SNMP communication in both directions.

Chart 14: CentOS-Server with MIB Browser using snmpOutPkts.

The conducted test, similar to Test 1, shows the steady transmission of SNMP messages between the CentOS-Client entity and the CentOS-Server. Once again, this test required no generation of network traffic, relying instead on the configured SNMP agents on the network devices to communicate and report the operation of SNMP within the network.

5.0     Measurements for Quality of Service (QoS)

5.1      Ping ICMP Jitter Calculations

ICMP jitter calculations were carried out across the network under two contrasting conditions. Both measurements were taken on the CentOS-Client machine. The following command was executed to generate, capture and write sufficient ICMP data to a text file:

ping -n -c 1200 -i 0.2 -s 72 192.168.56.104 > ping.txt

The output of the above command was written into a new text file.

Image 5: Output of Command Execution for ICMP packets.

Following this, the output text file was processed by the Bash shell script below, to extract and determine the frequency of ICMP packet round trip times from the CentOS-Client to the CentOS-Server.

Image 6: Frequency Distribution Bash Shell Script for ICMP Packet Round Trip Times.

The Fdist Bash shell script constructed an fdist file containing the frequency of all ICMP packet round trip times, in ascending order. The fdist file was then used to graph the extracted data: it was executed against a further file containing gnuplot commands, plotting the frequency distribution of ICMP packet round trip times between the CentOS-Client and the CentOS-Server.
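The Fdist script itself appears only as Image 6; as an illustrative stand-in, the frequency-distribution step can be sketched as follows, assuming the standard ping output format with a "time=<rtt> ms" field on each reply line:

```shell
# Stand-in sketch for the Fdist step: extract each RTT from ping output and
# print "rtt count" pairs in ascending RTT order, matching the fdist format.
fdist_from_ping() {
    grep -o 'time=[0-9.]*' "$1" \
        | cut -d= -f2 \
        | sort -n \
        | uniq -c \
        | awk '{ print $2, $1 }'
}
```

Running `fdist_from_ping ping.txt > fdist` would then produce a file of the kind described above.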

Image 7: Gnuplot Command File
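The gnuplot command file is shown only as Image 7; a hypothetical equivalent, assuming a two-column "rtt frequency" fdist format, might read:

```gnuplot
# Hypothetical gnuplot command file; column layout and labels are assumptions.
set terminal png
set output 'fdist.png'
set xlabel 'Round trip time (ms)'
set ylabel 'Frequency'
plot 'fdist' using 1:2 with impulses title 'ICMP RTT frequency'
```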

The initial ICMP jitter calculation was conducted in an isolated environment, whereby no other network traffic was occurring. Utilising the presented methodology, the below chart represents the frequency distribution of ICMP packets with a single source of traffic between the CentOS-Client and the CentOS-Server.

Chart 15: Frequency Distribution of Ping Times with No Background Traffic.

The second calculation was conducted simultaneously with a secondary source of ICMP traffic being transmitted to the CentOS-Gateway1 machine. Again, the presented methodology was proficient in representing the frequency distribution of ICMP packets with a second source of ICMP background traffic occurring.

Chart 16: Frequency Distribution of Ping Times with Background ICMP Traffic.

The frequency distributions presented above for single-source and dual-source traffic show some significant differences. The dispersion of captured ICMP packet times for dual-source traffic is greater than in the single-source examination. The requirement for the network to carry multiple sources of traffic to the CentOS-Server increases the processing delay of the measured ICMP packets in the second frequency distribution measurement.

5.2      HTTP Streaming Calculations

5.2.1        HTTP Streaming Jitter Calculations

HTTP traffic was generated across the network utilising VLC’s HTTP video streaming functionality. The CentOS-Server was configured to act as a streaming server transmitting the HTTP stream across the two network gateways before the CentOS-Client connected to the network stream. Jitter calculations were obtained using Wireshark on the CentOS-Client, measuring the interarrival time of packets for HTTP streaming with and without background traffic.

Image 8: Wireshark Capture HTTP Packets No Background Traffic.

Image 9: Wireshark Capture HTTP Packets with ICMP Background Traffic.

Image 10: Wireshark Capture HTTP Packets with HTTP Siege Background Traffic.

Interarrival times of packets were obtained in Wireshark. The captured packets were filtered to display only HTTP packets, which were then exported from Wireshark into a plain text file. The text file was processed by the following code to isolate the interarrival times of all HTTP packets.

Image 11: Python Code to Extract Interarrival Times for HTTP Streaming Packets
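The Python code appears only as Image 11; as an illustrative stand-in, the same step can be sketched in shell/awk, assuming the export has one packet per line with the capture timestamp in the second column:

```shell
# Stand-in sketch for the extraction step: print the time difference between
# each packet and its predecessor (the interarrival time), one per line.
interarrival_times() {
    awk 'NR > 1 { printf "%.6f\n", $2 - prev } { prev = $2 }' "$1"
}
```

The column index is an assumption about the Wireshark export layout and would need adjusting to match the actual file.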

The extracted HTTP packet interarrival times were written to a second plain text file. The frequency of occurrence of each interarrival time was then computed and sorted into ascending order using the following Bash shell script:

Image 12: Bash Shell Script for HTTP Interarrival Time Packet Frequency.

The output of the above Bash Shell Script produced an fdist file with the frequency calculation of all HTTP interarrival times in ascending order. The fdist file was executed against a further file containing gnuplot commands, calling on gnuplot to graph the outputted frequency distributions of HTTP packets.

Image 13: Gnuplot Script for HTTP Interarrival Time Packet Frequency.

HTTP Streaming No Background Traffic

The initial jitter calculation was conducted using a single source of generated HTTP traffic, with data collected over a two-minute period and extracted and graphed in accordance with the presented methodology. Chart 17 represents the entire data collection for HTTP streaming over the two-minute period. Chart 18 represents the same data collection but includes only the first 1000 interarrival times captured, providing a clearer representation of the interarrival times of the HTTP stream from the CentOS-Server to the CentOS-Client.

Chart 17: Frequency Distribution of HTTP Streaming Interarrival Times.

Chart 18: Frequency Distribution of First 1000 HTTP Packets Interarrival Times.

HTTP Streaming with ICMP Background Traffic

The second jitter calculation was conducted using two sources of generated network traffic: HTTP streaming and ICMP traffic from the CentOS-Server to the CentOS-Client. Similar to the first measurement, traffic was captured over a two-minute period, with extraction and graphing of data again following the presented methodology. Chart 19 depicts the interarrival times of all HTTP packets captured during the HTTP video stream with background ICMP traffic between the CentOS-Server and the CentOS-Client.

Chart 19: Frequency Distribution of HTTP Streaming with Background Traffic Interarrival Times.

Chart 20 represents the identical captured data as in Chart 19. However, the graph represents the first 1000 packets captured during the dual generation of network traffic.

Chart 20: Frequency Distribution of First 1000 HTTP Packets Interarrival Times with Background ICMP Traffic.

The frequency distribution of packet interarrival times for single-source HTTP streaming traffic shows a narrower spread than that of HTTP streaming with background traffic. Moreover, single-source HTTP streaming has a greater frequency of packets arriving between 0 and 0.05 microseconds. Factoring in both the narrower distribution and the higher frequency of interarrival times closer to zero, it can be concluded that HTTP streaming with no background traffic provides a greater overall Quality of Service than HTTP streaming with background ICMP traffic.

HTTP Streaming with HTTP Siege Background Traffic

The final jitter calculation was conducted again with two sources of traffic: HTTP streaming traffic from the CentOS-Server to the CentOS-Client and HTTP Siege traffic from the CentOS-Client to the CentOS-Server. In this test case, HTTP streaming traffic was collected in Wireshark for the entire duration of the HTTP Siege test, 374.43 seconds. The extraction and graphical representation of captured data occurred in the same manner as above. Chart 21 depicts all captured HTTP packets of the HTTP streaming with background HTTP Siege test; Charts 22 and 23 depict smaller samples of the captured packets, the first 1000 and 100 packets respectively. The Siege test was conducted using 50 concurrent users, each with a delay of 10 seconds, repeated three times. Each concurrent user issued a Siege GET request to obtain a 10MB zip file from the HTTPD server residing on the CentOS-Server.

Image 14: Initiation of HTTP Siege Test with HTTP Streaming.

Image 15: Output of HTTP Siege Test.

The following charts graphically represent the captured packets from the HTTP streaming with HTTP Siege background traffic.

Chart 21: Frequency Distribution of HTTP Packets Interarrival Times with Background HTTP Siege Traffic.

Chart 22: Frequency Distribution of First 1000 Packets of HTTP Packets Interarrival Times with Background HTTP Siege Traffic.

 

Chart 23: Frequency Distribution of First 100 Packets of HTTP Packets Interarrival Times with Background HTTP Siege Traffic.

HTTP streaming with HTTP Siege background traffic was identified to have a significantly larger measurement of jitter than both HTTP streaming with no background traffic and HTTP streaming with ICMP background traffic. The HTTP Siege test was conducted over a longer time frame, which must be accounted for; even so, the frequencies of interarrival times for HTTP streaming packets under HTTP Siege were significantly larger, and the interarrival times were distributed over a significantly larger time frame than in the other HTTP streaming tests. Thus, it is determined that HTTP streaming without background traffic, compared with HTTP streaming with either ICMP or HTTP Siege background traffic, provides the greatest overall Quality of Service for HTTP streaming.

5.2.2        HTTP Streaming Round Trip Time (RTT) and Throughput Calculations

Wireshark was used in conjunction with its TCP stream graph display capabilities to graphically represent statistics for active TCP streams. When using TCP stream graphs, a single packet must be selected for analysis. To maintain consistency, HTTP streaming with and without background traffic was analysed based on the final transmitted HTTP packet.

Round Trip Time (RTT) Calculations

When examining the RTT for HTTP streaming with no background traffic, the active TCP stream was for the most part capable of maintaining an RTT of less than 0.05 seconds, as shown in the following graph.

Chart 24: Wireshark RTT Calculation for HTTP Streaming without Background Traffic.

RTT for HTTP streaming with ICMP background traffic was also capable of maintaining a consistent RTT of less than 0.05 seconds, as depicted in the following graph.

Chart 25: Wireshark RTT Calculation for HTTP Streaming with ICMP Background Traffic.

The active TCP streams for HTTP streaming both with and without background ICMP traffic maintained a consistent RTT of less than 0.05 seconds. However, closer examination indicates that HTTP streaming without background traffic had significantly more RTTs close to zero than HTTP streaming with ICMP background traffic. It is concluded, therefore, that HTTP streaming without background traffic has an overall lower RTT when transmitting HTTP streaming packets between the CentOS-Server and the CentOS-Client.

RTT for HTTP streaming with HTTP Siege background traffic had a wide dispersion of average RTTs. Whilst the majority of captured data maintained an RTT of less than five seconds, there were outliers, with some captured data reaching an RTT of close to 15 seconds.

Chart 26: Wireshark RTT Calculation for HTTP Streaming with HTTP Siege Background Traffic.

The active TCP stream for HTTP streaming with HTTP Siege background traffic was largely capable of maintaining an RTT of less than five seconds. However, when compared with HTTP streaming without background traffic and HTTP streaming with ICMP background traffic, it is evident that HTTP Siege background traffic produces an overall greater RTT. The requirement for the network to support two heavy HTTP loads meant that overall HTTP streaming quality was reduced. Thus, it is concluded that acceptable HTTP streaming service quality is not feasible alongside heavy HTTP network load such as HTTP Siege.

Throughput Calculations

The average throughput for HTTP streaming with no background traffic was consistently maintained below 500000 Bits/sec. Whilst some greater throughput rates are present, the overall average throughput remained consistent. The findings are outlined in the graph below.

Chart 27: Wireshark Average Throughput Calculation for HTTP Streaming without Background Traffic.

The average throughput for HTTP streaming with background ICMP traffic was similar to that of HTTP streaming with no background traffic, in that a consistent average of less than 500000 Bits/sec was maintained. However, significantly more outliers are present, with throughput reaching above 15000000 Bits/sec in some instances.

Chart 28: Wireshark Average Throughput Calculation for HTTP Streaming with Background ICMP Traffic.

Given that the throughputs for HTTP streaming both with and without ICMP background traffic consistently maintained averages under 500000 Bits/sec, determining which provides the better overall Quality of Service is challenging. Judgement in this instance is therefore made on the quantity and dispersion of average throughput outliers. Given that HTTP streaming with ICMP background traffic holds a greater number of average throughput outliers, collectively occurring at higher throughput rates, it is deemed that HTTP streaming without background traffic provides the greater overall Quality of Service.

The average throughput for HTTP streaming with background HTTP Siege traffic was dissimilar to both HTTP streaming with no background traffic and HTTP streaming with ICMP background traffic. Whilst HTTP streaming with HTTP Siege background traffic was capable of maintaining an average of 500000 Bits/sec, the dispersion of throughput values was greater.

Chart 29: Wireshark Average Throughput Calculation for HTTP Streaming with Background HTTP Siege Traffic.

HTTP streaming with HTTP Siege background traffic was not comparable to HTTP streaming with no background traffic or with ICMP background traffic. It maintained an average of less than 500000 Bits/sec, but the spread of times over which this throughput was achieved was significant; moreover, the majority of average throughput measurements occurred at greater than 350 seconds, a drastic difference from its counterparts. Thus, HTTP Siege-level background load should be avoided where HTTP streaming is required, as overall QoS is not achieved.

5.3      UDP Streaming Jitter Calculations

UDP streaming in VLC could not be conducted. The reinstallation and reconfiguration of VLC was attempted numerous times. Following a consultation with Dr. Abdul Malik Khan, I was advised that this was a software error and could not be remedied. The error log produced by VLC on the CentOS-Server is shown below.

Image 16: VLC UDP Streaming Error Log.

5.4      HTTP Siege

Web-based load testing was executed against the CentOS-Server. Siege, in conjunction with MIB Browser, was executed on the CentOS-Client, targeting the CentOS-Server. Two tests were conducted to determine the feasibility of Siege as a load testing mechanism for the implemented HTTPD server.

HTTP Siege with 2MB File

In this instance a HTTPD web server was constructed on the CentOS-Server and loaded with a large JPEG image for testing: a 5400×2700, 2MB JPEG. HTTP Siege was then executed to retrieve the large image file from the CentOS-Server. The command presented below performs a Siege HTTP GET request against the CentOS-Server.

Image 17: Command for HTTP Siege Execution.

The siege command was executed with the following operational parameters:

-d indicates the delay in seconds for each simulated concurrent Siege user.

-c specifies the number of concurrent users implemented in the Siege test.

-r specifies the number of repetitions of the Siege test instance.
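The actual invocation is shown only as Image 17; based on the 25 concurrent users reported below, it plausibly resembled the following, where the delay and repetition values and the image path are assumptions:

```shell
# Hypothetical reconstruction of the Siege invocation; large.jpg is an
# assumed filename for the 2MB test image, -d 10 and -r 3 assumed values.
siege -d 10 -c 25 -r 3 http://192.168.56.104/large.jpg
```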

Image 18: HTTP Siege in Occurrence

Image 19: Result of HTTP Siege Command Execution.

The Siege of the implemented HTTPD server on the CentOS-Server was executed using the HTTP GET directive, with 25 concurrent users instigated to place the server under Siege. For the 5.59-second period during which the server was under Siege, 36 transactions were conducted, all of which were successful. Data transferred from the HTTPD CentOS-Server to the CentOS-Client amounted to 79.72MB, at a transaction rate of 6.44 transactions per second and a throughput of 14.48 MB/sec. Times were documented for all transactions, with further reporting of both the longest and shortest transaction times: the shortest transaction took 0.28 seconds and the longest 4.32 seconds. The continuation of concurrent transactions from the HTTPD server to the CentOS-Client shows a continually increasing transaction time as subsequent user transactions are initiated.

The HTTP server siege was monitored using the statistics reported by Siege itself, in conjunction with MIB Browser. Two OIDs were monitored concurrently during the execution of the HTTPD server siege. tcpActiveOpens.0 accounts for the total number of instances in which a TCP connection transitions from the CLOSED state to the SYN-SENT state.

Chart 30: HTTP Siege Monitoring MIB Browser tcpActiveOpens

The second OID monitored using MIB Browser was tcpPassiveOpens.0, which documents the total number of instances in which a TCP connection transitions from the LISTEN state to the SYN-RCVD state.

Chart 31: HTTP Siege Monitoring MIB Browser tcpPassiveOpens.

Both the tcpActiveOpens and tcpPassiveOpens OIDs proved proficient in identifying the conducted HTTPD server siege. This is evident in the captured HTTP Siege TCP traffic originating from the CentOS-Client and directed towards the CentOS-Server. Siege was applied as a testing mechanism to determine the load the server is capable of withstanding. As the execution period of the HTTPD siege stress test was restricted, the HTTPD server was capable of withstanding the applied load.
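Because tcpActiveOpens and tcpPassiveOpens are monotonically increasing SNMP counters, the siege activity appears as the difference between successive polls rather than in the raw counter values. A minimal sketch of converting polled samples into per-interval connection-open rates, using made-up sample values rather than the actual captured data:

```python
# Hypothetical successive polls of tcpActiveOpens.0, taken every 5 seconds.
samples = [120, 121, 146, 171, 172]   # raw counter values (illustrative only)
interval_s = 5

# Per-interval rate: delta between successive counter samples / poll interval.
rates = [(b - a) / interval_s for a, b in zip(samples, samples[1:])]
print(rates)   # the rate spikes while the siege is running
```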

HTTP Siege with 10MB File

Similarly, web-based load testing was implemented to target the CentOS-Server machine from the CentOS-Client. This test was applied by loading the CentOS-Server HTTPD server with a 10 MB .zip file. The HTTP Siege test was then executed to retrieve the 10 MB .zip file from the defined server.

Image 20: HTTP Siege Execution Command and Siege in Occurrence

Image 21: Results of Executed Siege Command.

The execution of Siege against the HTTPD server made use of the HTTP GET directive, with the same parameters as the previous test regarding delay, concurrent users, and repetitions. The HTTP Siege test ran for a period of 70.81 seconds, during which 75 transactions were conducted, all of which were successful. The data transmitted amounted to 750.00 MB, at a throughput of 10.59 MB/sec and a transaction rate of 1.06 transactions per second. The longest recorded transaction took 28.53 seconds and the shortest 2.78 seconds. The concurrent simulation of transactions from the CentOS-Server to the CentOS-Client produced a continual increase in individual transaction times. As in the previous Siege HTTP load test, both the tcpActiveOpens and tcpPassiveOpens OIDs were used to monitor the HTTP Siege network activity.
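The reported rates for the 10 MB test are internally consistent, as a brief cross-check of Siege's summary arithmetic shows:

```python
# Cross-check of the Siege summary for the 10 MB file test (values from Image 21).
elapsed_s = 70.81       # seconds the test ran
transactions = 75       # successful transactions
data_mb = 750.00        # MB transferred in total

rate = transactions / elapsed_s     # transactions per second
throughput = data_mb / elapsed_s    # MB per second

print(f"{rate:.2f} trans/sec, {throughput:.2f} MB/sec")
# -> 1.06 trans/sec, 10.59 MB/sec, matching the Siege report
```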

Chart 32: HTTP Siege Monitoring MIB Browser tcpActiveOpens

Chart 33: HTTP Siege Monitoring MIB Browser tcpPassiveOpens

From the presented charts, it is evident that the OIDs monitored from MIB Browser operating on the CentOS-Client machine were capable of identifying the executed HTTP Siege. Comparing the two Siege tests, the results show that the HTTP Siege against the 2 MB JPEG file produced minimal load disruption for the CentOS-Server, whereas the second test, utilising the 10 MB file, significantly impacted the HTTPD server's ability to perform at optimum rates. The marked changes to both the throughput and transaction rates when retrieving the 10 MB file indicate, as expected, that a greater file size places a greater load upon the implemented HTTPD server.

5.5      IPERF Network Measurements

All Iperf network measurements were conducted between the CentOS-Client and the CentOS-Server machines.

5.5.1        TCP Measurements

Single Direction TCP Measurements

TCP maximum throughput measurements were taken between the CentOS-Client and the CentOS-Server machine. The TCP transmission was established on the CentOS-Server machine and connected to by the CentOS-Client machine.

Image 22: Iperf TCP Connection Establishment CentOS-Server

Image 23: Iperf TCP Connection CentOS-Client

The TCP connection between the CentOS-Client and the CentOS-Server machines achieved a transfer of 107 MBytes at a bandwidth of 89.1 Mbits/sec, as reported identically on both the CentOS-Server and the CentOS-Client machines.
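The transfer and bandwidth figures are roughly consistent with Iperf's default 10 second measurement window, as a brief calculation shows (the test duration is an assumption, as it is not visible in the images, and the 1024 versus 1000 unit distinction Iperf makes between MBytes and Mbits is ignored here):

```python
transfer_mbytes = 107      # MBytes reported by Iperf
bandwidth_mbits = 89.1     # Mbits/sec reported by Iperf

# Implied duration: total bits transferred divided by the measured bit rate.
duration_s = transfer_mbytes * 8 / bandwidth_mbits
print(f"implied duration ~ {duration_s:.1f} s")   # close to Iperf's 10 s default
```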

Bidirectional TCP Measurements

TCP maximum bidirectional throughput measurements were taken between the CentOS-Client and the CentOS-Server machines. The TCP listener was established on the CentOS-Server machine and connected to by the CentOS-Client machine.

Image 24: Iperf TCP Bidirectional Connection Establishment CentOS-Server

Image 25: Iperf TCP Bidirectional Connection CentOS-Client

The bidirectional TCP throughput measurements between the CentOS-Client and the CentOS-Server showed that one connection transferred 67.4 MBytes at a bandwidth of 56.3 Mbits/sec, whilst the other connection achieved an increased transfer of 80.1 MBytes at 67.0 Mbits/sec. It is evident that a bidirectional TCP connection exhibits an overall reduction in both transfer size and bandwidth when compared with the single-direction TCP transmission. This is due to the division of network resources required to service both simultaneous streams.
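The reduction is clearer when the two simultaneous streams are compared against the single-direction baseline; a brief calculation using the reported figures:

```python
# Bidirectional per-stream bandwidths reported by Iperf (Mbits/sec).
stream_a, stream_b = 56.3, 67.0
single_direction = 89.1   # the earlier one-way baseline

aggregate = stream_a + stream_b
print(f"aggregate: {aggregate:.1f} Mbits/sec")
print(f"per-stream shortfall vs baseline: {single_direction - stream_a:.1f} "
      f"and {single_direction - stream_b:.1f} Mbits/sec")
# Each stream runs well below the one-way figure, though the combined
# throughput exceeds it, reflecting the shared link being divided.
```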

Maximum Transmission Unit

The Maximum Transmission Unit (MTU) is the largest packet that can be transmitted across a specified connection. In this instance the MTU was identified for a single direction TCP connection between the CentOS-Client and the CentOS-Server machines.

Image 26: Iperf TCP MTU CentOS-Server

Image 27: Iperf TCP MTU CentOS-Client

Iperf identified that the MTU for TCP packets across the specified network is 1500 bytes when utilising an Ethernet connection.
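With a 1500 byte Ethernet MTU, the usable TCP payload per segment (the MSS) is what remains after the IP and TCP headers; a short calculation, assuming the minimum 20 byte headers with no options:

```python
MTU = 1500        # Ethernet MTU reported by Iperf, in bytes
IP_HEADER = 20    # IPv4 header without options
TCP_HEADER = 20   # TCP header without options

mss = MTU - IP_HEADER - TCP_HEADER
print(f"TCP MSS = {mss} bytes")   # 1460 bytes of payload per segment
```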

5.5.2        UDP Measurements

Single Direction UDP Measurements

Single direction maximum throughput UDP measurements were conducted between the CentOS-Client and the CentOS-Server machines. UDP transmission was initiated by the CentOS-Server and subsequently connected to by the CentOS-Client machine.

Image 28:  Iperf UDP Connection Initiation CentOS-Server

Image 29:  Iperf UDP Connection CentOS-Client

The UDP connection established between the CentOS-Client and the CentOS-Server transferred 1.25 MBytes at a bandwidth of 1.05 Mbits/sec, as reported on the CentOS-Server. The transmission of UDP datagrams occurred with a jitter – the variation in packet interarrival times – of 0.288 ms. Moreover, the CentOS-Client reported receiving the UDP datagram stream at the same rates reported by the CentOS-Server.
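Iperf computes UDP jitter using the smoothed interarrival estimator from RFC 3550: for each packet it takes the difference between the transit times of consecutive packets and folds it into a running average with a gain of 1/16. A minimal sketch of that estimator, using made-up transit times rather than the captured data:

```python
# Smoothed interarrival jitter as in RFC 3550 (the estimator Iperf uses).
# transit[i] = arrival_time - send_time for packet i (hypothetical values, ms).
transit = [10.0, 10.4, 9.9, 10.6, 10.1]

jitter = 0.0
for prev, cur in zip(transit, transit[1:]):
    d = abs(cur - prev)                 # transit-time difference between packets
    jitter += (d - jitter) / 16.0       # exponential smoothing, gain 1/16
print(f"jitter = {jitter:.3f} ms")
```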

When compared with the single-direction TCP measurements, a 1.05 Mbits/sec bandwidth is significantly less than the TCP bandwidth of 89.1 Mbits/sec. This is due to Iperf's default UDP target bandwidth of 1 Mbit/sec. To address this restriction, the proceeding test explicitly defines the maximum bandwidth rate to be used for UDP transmission.

Single Direction UDP Increased Bandwidth Measurements

To overcome Iperf's default UDP bandwidth restriction, this measurement test explicitly defines the maximum bandwidth rate for UDP transmissions between the CentOS-Client and the CentOS-Server. In this instance, the maximum bandwidth rate was set to 1000 Mbits/sec.

Image 30:  Iperf UDP Increased Bandwidth Connection CentOS-Server

Image 31:  Iperf UDP Increased Bandwidth Connection CentOS-Client

The increase of bandwidth for UDP connections exposed several concerns for the successful transmission of UDP datagrams between network devices. The measured bandwidth increased to 181 Mbits/sec, with a transfer of 67.3 MBytes, and the measured jitter was 0.245 ms. However, whilst these results are relevant to the QoS of the network, the most detrimental finding of the increased UDP transmission bandwidth was the loss of 79 percent of all transmitted datagrams. UDP, by nature, provides no guarantee of packet delivery, nor are lost packets retransmitted. Given that UDP applies no congestion control and continues transmitting at the requested rate regardless of network conditions, it is concluded that the significant packet loss is the result of network congestion.
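The reported loss figure is simply lost datagrams over datagrams sent; a one-line check using hypothetical counts chosen to match that ratio (the actual counts appear in Image 31):

```python
# Hypothetical datagram counts illustrating Iperf's loss percentage.
sent = 154000      # datagrams transmitted (illustrative, not the captured value)
lost = 121660      # datagrams never received (illustrative)

loss_pct = 100.0 * lost / sent
print(f"loss = {loss_pct:.0f}%")   # 79%
```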

Single Direction UDP Packet Loss Measurements

The measurement of packet loss within UDP transmission streams is essential in supporting the overall QoS of a network. A UDP datagram larger than the MTU is fragmented into multiple IP packets, and the loss of any single fragment results in the loss of the entire datagram on reassembly. To measure loss at the level of individual packets, the datagram size must therefore be limited so that each datagram fits within a single IP packet. The Iperf -l option, which sets the datagram length, permitted this testing.
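For a datagram to fit into one IP packet on Ethernet, its payload must not exceed the MTU minus the IP and UDP headers; a short calculation, assuming a 1500 byte MTU and minimal headers:

```python
MTU = 1500        # Ethernet MTU in bytes
IP_HEADER = 20    # IPv4 header without options
UDP_HEADER = 8    # fixed-size UDP header

max_payload = MTU - IP_HEADER - UDP_HEADER
print(f"largest unfragmented UDP payload = {max_payload} bytes")   # 1472
```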

Image 32:  Iperf UDP Datagram Compression Connection CentOS-Server.

Image 33:  Iperf UDP Datagram Compression Connection CentOS-Client.

The measurement of packet loss for UDP transmission along the presented network determined that limiting each UDP datagram to a single IP packet resulted in no packet loss. This further supports the previous conclusion that the significant datagram loss at high bandwidth was the result of network congestion. In this measurement, bandwidth was restricted to a maximum of 10 Mbits/sec, and jitter calculations were computed for each transmitted IP packet.

6.0    Conclusion

This report endeavoured to outline, conduct and measure network metrics with regard to both SNMP and Quality of Service. Measurements were drawn from the implemented network architecture, comprising two individual networks with communication capability. The obtained results and stated conclusions were drawn from the constructed network; thus, the findings of this examination may not be applicable for generalisation to real-world network settings.


