
Research Article
Designing of a Simulation Tool for the Performance Analysis of Hybrid Data Center Networks (DCNs)

Muhib Ahmad,1 Farrukh Zeeshan Khan,1 Zeshan Iqbal,1 Muneer Ahmad,2

Ihsan Ali,2 Sultan S. Alshamrani,3 Muhammad Talha,4 and Muhammad Ahsan Raza5

1 Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
2 Faculty of Computer Science & Information Technology, Universiti Malaya, Kuala Lumpur 50603, Malaysia
3 Department of Information Technology, College of Computer and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
4 Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia
5 Department of Information Technology, Bahauddin Zakariya University, Multan 60000, Pakistan

Correspondence should be addressed to Ihsan Ali; ihsanali.chd@siswa.um.edu.my

Received 17 February 2021; Revised 13 March 2021; Accepted 24 March 2021; Published 10 April 2021

Academic Editor: M. Irfan Uddin

Copyright © 2021 Muhib Ahmad et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Data center (DC) technology changes the mode of computing. Traditional DCs consist of a single layer and only have Ethernet connections among switches. Those old-fashioned DCs cannot fulfill the high resource demand compared with today's DCs. The architectural design of the DCs is getting substantial importance and acting as the backbone of the network because of its essential feature of supporting and maintaining the rapidly increasing Internet-based applications, which include search engines (e.g., Google and Yandex) and social networking applications (e.g., YouTube, Twitter, and Facebook). Every application has its parameters, like latency and blocking in the DC network. Every data center network (DCN) has its specialized architecture. It has a specific arrangement of layers and switches, which increase or decrease the DC network's efficiency. We develop a simulation tool that comprises two different DC architectures: basic tree architecture and c-Through architecture. Using this simulation, we analyze the traffic behavior and the performance of the simulated DCN. Our main purpose is to focus on mean waiting time, load, and blocking with respect to the traffic within the DCN.

1. Introduction

In this growing era of computing, thousands of applications are introduced; moreover, those applications have millions of users at a time. The people interacting with these applications belong to different domains. These millions of users interact with thousands of applications daily. In this changing era of technology, the idea called "cloud computing or mobile computing" arose. Cloud computing or mobile computing introduces a completely new computing technology mode [1]. This new era of technology greatly increases the number of applications, which results in an increased number of users. Now, millions of people connect, interact, and use them [2].

To assist the idea of this new era of technology, a mode of technology came, named the data center (DC). The cost of the

data center network is equal to the price of the old parallel computer network, but with a huge difference in bandwidth, latency, reliability, security, modularity, as well as power consumption. As the Internet world grows, Internet and cloud computing-based applications have appeared. This speedy evolution of the internetwork, cloud computing, and cloud computing-based applications demands maximum bandwidth, maximum throughput, and low latency [3]. Moreover, the interconnection also ensures the reliability of the network. The rapid emergence of Internet traffic, due to the number of new emerging web applications, has increased the demand for network bandwidth in the DCN. The traffic in the network employing a particular application increases exponentially day by day. DC architectural design is getting substantial importance and acting as the backbone of the network because of its essential feature of supporting

Hindawi Complexity, Volume 2021, Article ID 5520324, 13 pages. https://doi.org/10.1155/2021/5520324

and maintaining the rapidly increasing internetwork-based applications, which include large-scale computations (e.g., indexing, bioinformatics, and data mining), web hosting, video hosting with its distribution (e.g., YouTube and Netflix), as well as a huge number of social networking applications (e.g., Facebook and Twitter) [4]. The DCNs serve as a backbone in a new model of computing to meet the huge numbers of users' needs in a more efficient way.

Over the last decades, the highest increase in Internet traffic, predominantly driven by the emerging number of web applications like social networking plus cloud computing, created the requirement for more dynamic and energetic DCs. The servers will experience very low-latency communication if the DCs grow greater in size, containing thousands of servers per rack. This set a major challenge related to DC networking, creating the essential need to build a new, effective interconnection infrastructure with maximum bandwidth and minimum latency. As the size of the warehouse-scale (WS) data center increases, communication becomes more challenging in terms of inter- as well as intra-datacenter communication [5], power consumption, high bandwidth requirements, and small latency delay [3, 6].

2. DC Architecture

As technology grows, different DC architectures emerged to support computing demands. Currently, the DCN architectures are based on a standard layered approach, as shown in Figure 1, which has passed through several tests and has been improved over the past several years. The DC infrastructure's layered approach, sometimes also called the hierarchical internetworking model [2], includes the core layer, aggregation layer, and access layer.

The core layer is the uppermost layer in the design of the DCN. This core layer is responsible for the fast transmission of data in the whole network; that is why this core layer is recognized as the backbone of the DCN, because all other layers depend on it. This core layer is not responsible for routing traffic (packets) at the LAN. Moreover, the switches used in this layer do not deal with packet controlling. It has high-level duties in the network and takes most of the time in designing the DCN [7].

This layer's main purpose is to provide minimum latency time in the delivery of packets. It consists of high-speed cables, called fiber cables because of their high transmission rate and minimum reconfiguration time, and high-end switches. The key term that the core layer majorly focuses on is efficiency. The essential feature considered in designing the DC core layer switch is that the core layer's main and important factor is speed. The core layer provides a maximum data transfer rate through load sharing, because multiple connections are used in the network through which the traffic travels. Multiple connections are used in the network to maximize the network's fault tolerance rate. In addition, the second factor is that the core layer of the DCN deals with high-speed packet switching for all the ingress and

egress packets in the DCN. The core layer of the DCN also offers fabric for high-speed packet switching among multiple aggregation modules in the DCN. It promises a resilient Layer 3 routed fabric without any point of failure, as shown in Figure 1.

After the core layer in the DC infrastructure, the aggregation layer is designed. It is the middle layer and acts as a medium to connect the core and access layers. The aggregation layer consists of Layer 3 switches and LAN-based routers. This layer makes sure that the packets are accurately transmitted among subnets and VLANs or in the network. This layer of the DCN is also known as the workgroup layer. Due to the huge number of tasks it performs in the DCN, it is also called the distribution layer or services layer. It provides vital functions, for example, service module integration, minimum spanning tree execution, and default gateway redundancy [7]. Multi-tier traffic from server to server travels through the aggregation layer and utilizes multiple services like firewall, load balancing, and optimization applications [8, 9].

In Figure 1, the block switch icons in the aggregation layer show the integrated service modules. The main aspect of the aggregation layer is routing. This means that the aggregation layer is responsible for routing, regulates packet transmission based on information about a packet's origin and destination, and builds a network border of communication. The layer provides an opportunity to create different protocol gateways for ingress and egress across multiple network architectures. This layer helps and acts as the limit or edge for broadcast and multicast domains [10]. The Layer 3 switches or routers examine packets and then prioritize delivering the packets to the destination based on the policies and rules you set and applied.

The switches in the aggregation layer also have 10 GigE links with the switches used as ToR switches in the access layer. Moreover, the aggregation layer also has four pairs of 10 GigE links (4 × 10 = 40 GigE) with other aggregation modules when multiple aggregation modules are used in the data center network, as shown in Figure 1. The 40 GigE links that join the multiple aggregation modules give high bandwidth, avoid path blocking, give quick convergence, and avoid the issues of overwhelming and arbitrary broadcast.

In the DCN, the bottommost and last layer is the access layer. It is the edge of the DCN, where the switches called Top of Rack (ToR) switches are attached to the servers, as shown in Figure 1. The DC access layer offers physical connections to the servers and works in Layer 2 or Layer 3 modes. The primary oversubscription point comes in the access layer, because it aggregates the traffic of the servers onto the 1 or 10 GigE channel uplinks to the DC aggregation layer. In the access layer, servers are physically connected to the network [7].

The server modules involve 1 rack unit (RU) servers, blade servers attached with integral switches, and a number of blade and clustered servers attached by pass-through cabling. Moreover, the access layer architecture also includes configured 1RU or 2RU switches, modular switches, as well as integral blade server switches. Layer 2 and Layer 3-based architectures are implemented by the switches used in this layer by satisfying the server broadcast domains. The DC access


layer provides an opportunity to program a switch so that only particular systems on the network can access the related or joined LANs. The switch facilitates the nodes on the network by creating separate collision domains for every single connected node to enhance the performance of the network. By using load balancing, data can be moved from one network to another network.

In the access layer, ToR switches are usually used in every rack and have 1 GigE or 10 GigE slot connections to the network. These ToR switches connect to a number of servers that include the application servers, web servers, and data storage servers, as shown in Figure 1. This ToR architecture minimizes the cabling structure. These ToR switches are in greater numbers as compared with the switches used in the aggregation and the core layer. A single DC involves more than thousands of servers arranged in racks of 20–40 servers each [11, 12]. Servers within each rack are attached to the ToR switch, as shown in Figure 2, which are then further connected to the layers of clusters of

switches in the DC. The ToR switches connect to the aggregation layer switches through various high-speed connections of 10 GigE that are clustered to one another in a port channel. The packet travels through the core and aggregation layer. Based on the information in the packet, the ToR examines and analyses the packets and fetches the data from the required server. All the information is encapsulated in the packet [7, 13]. Generally, less than 25% of the average network peak load is held by DCNs, and a large number of links in the DCN stay idle for almost 70% of the time [14].

The important issue in DC traffic management is load balancing, intended to reduce packet delay in the internetwork and intranetwork; moreover, it is also able to redirect the flows by assessing energy-efficient targets. The primary factor of energy consumption in DCs is servers, and a huge number of running servers results in higher power consumption. The longer the traffic stays in the DCN, the more it affects not only the energy consumption but also the packet delay. Hence, the optimization of DCN server utilization results in

Figure 1: DCN layered architecture connected to different servers through ToR (campus core, DC core, DC aggregation, and DC access layers; Layer 2 access with clustering and NIC teaming; blade chassis with pass-through modules; mainframe with OSA; blade chassis with integrated switch; Layer 3 access with small broadcast domains and isolated servers; 10 Gb Ethernet, Gigabit Ethernet or EtherChannel, and backup links).

more efficient energy savings. Currently, a new hot research area in DCs is virtual machine (VM) placement. Through appropriately placing VMs with respect to the network and applications' resource requirements, the network manager is able to save the superfluous resources for other services and make the DCN much more energy-efficient [15]. In this paper, we are not considering the whole network and placing the VMs. We are just modeling the traffic within the DC network [8, 16]. VM placement and task distribution on different VMs are not considered, because we only consider the traffic within the DCN (inter- and intra-DC network traffic).

3. Optical and Hybrid Data Center Architecture

3.1. OSA. The first design that uses optical switching technology in the DCN is named OSA, as shown in Figure 3. The OSA DC architecture is a "pure" optical switching network, which means that this architecture left the idea of switches based on an electrical core and uses the idea of optical switches to build the switching cores [17].

The ToR switches remain electrical, so signals must be transformed between electrical-based and optical-based forms among the different servers and switching cores. In OSA architecture, the switching core has several connections to each ToR switch. The arrangement of connections is considerably flexible, depending on the traffic demand.

Meanwhile, the architecture remains unable to provide direct optical links between each rack pair according to the traffic demands; the design built by the controlling system creates a connected graph topology, and the ToR switches are liable to deliver the traffic among other ToR switches. Therefore, in OSA architecture, several optical links are provided to each rack [4].

3.2. c-Through. Wang et al. proposed the first hybrid electrical-optical network model for a data center network, named c-Through. It is a three-level multirooted hybrid electrical and optical network model of DCN, as shown in Figure 4, as an advancement over the current DCNs. c-Through is also known as HyPaC (hybrid packet and circuit) DCN. In c-Through architecture, ToR switches are linked to both networks, made from electrical packet-based networks (Ethernet) and optical circuit-based networks [18].

This shows that it has two major parts: the first is a tree-like electrical network that keeps the connectivity among every pair of ToR switches, and the second is a reconfigurable optical-circuit network that provides maximum bandwidth among various racks of servers. The optical switch is configured so that each pair of racks with a high bandwidth request is attached to this optical switch. Due to the high cost of optical networks, and given the promise of maximum bandwidth over the optical links, there is no need to set optical links to every single pair of racks, which would increase the cost of the network [19]. In the c-Through network, the optical part's configuration depends upon the traffic among the different racks of servers. c-Through architecture evaluates the rack-to-rack traffic demand by detecting the socket buffer's occupancy. Each rack has a single optical link, so the whole architecture is reconfigurable at any time.

A traffic monitoring system is required that is positioned in the hosts; thus, it calculates the bandwidth demand with the other hosts in the network. The optical configuration manager accumulates these measurements and derives the optical switch configuration that depends on the traffic requirements. The traffic requirements and the attached links are all expressed as a maximum weight perfect matching problem. That is why, in the c-Through network,

Figure 2: ToR switch connection with each server in the rack (aggregation switch, network rack, rack-mounted servers, and ToR switch).


the Edmonds algorithm is used as the key to the maximum weight perfect matching problem [4]. After the optical switches' configuration, the optical manager notifies the ToR switches to route the packets based on this information. Traffic in ToR switches is demultiplexed through virtual local area network (VLAN)-based routing [20].

Two different types of VLAN-based routing are used because of the electrical-optical network model: one for packet network-based routing, and the second for optical circuit (OC) network-based routing [18].
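The matching step above can be sketched in code. The paper states that c-Through solves a maximum weight perfect matching over rack-to-rack traffic demands with Edmonds' algorithm; the sketch below is a brute-force stand-in that is only feasible for a handful of racks, and the demand matrix is a hypothetical example, not data from the paper.

```python
from itertools import permutations

def best_optical_matching(demand):
    """Brute-force maximum-weight perfect matching over rack pairs.

    demand[i][j] is the measured traffic demand between rack i and
    rack j (symmetric). Returns the rack pairs to connect through the
    optical circuit switch and the total matched demand. c-Through
    uses Edmonds' algorithm for this; exhaustive search is shown only
    to make the objective concrete.
    """
    n = len(demand)
    racks = list(range(n))
    best_pairs, best_weight = None, -1.0
    # Fix rack 0 first in the ordering to reduce duplicate matchings.
    for perm in permutations(racks[1:]):
        order = [racks[0]] + list(perm)
        pairs = [(order[k], order[k + 1]) for k in range(0, n, 2)]
        weight = sum(demand[i][j] for i, j in pairs)
        if weight > best_weight:
            best_weight = weight
            best_pairs = sorted(tuple(sorted(p)) for p in pairs)
    return best_pairs, best_weight

# Hypothetical demand: racks (0,3) and (1,2) exchange the most traffic,
# so those pairs get the optical circuits.
demand = [
    [0, 1, 2, 9],
    [1, 0, 8, 2],
    [2, 8, 0, 1],
    [9, 2, 1, 0],
]
pairs, weight = best_optical_matching(demand)
print(pairs, weight)  # [(0, 3), (1, 2)] 17
```

A production implementation would use Edmonds' blossom algorithm (e.g., a general max-weight matching routine) instead of enumeration, since the number of perfect matchings grows factorially with the rack count.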

3.3. Helios. In 2010, Farrington et al. from UCSD proposed an electrical-optical switched network model for modular DCN, as mentioned in [4], which is known as Helios. Helios is another hybrid network based on the electrical-optical network model, just like the c-Through design architecture; however, the main difference is that this architecture is

based on wavelength division multiplexing (WDM) links [4], as shown in Figure 5.

It is a two-level multirooted hybrid electrical and optical network model of DCN. This hybrid architecture consists of core switches and ToR switches known as Pod switches. The Pod or ToR switches are electrical packet-based switches. In contrast, the core switches are either electrical packet-based switches or OC (optical circuit)-based switches [19, 21]. Thus, the electrical packet-based switches are involved in all ToR switch communication.

In contrast, OC-based switches are employed for varying large-bandwidth communication among ToR switches. Helios architecture uses electrical packet-based switching to carry the network's bursty type of traffic flow, while the OC-based switched network facilitates slow-changing traffic with high bandwidth. Each Pod or ToR switch consists of colorless optical transceivers and WDM-based optical transceivers. The uncolored optical transceivers link the Pod switches and the core electrical packet-based switches. Contrary to this, an optical multiplexer multiplexes the WDM optical transceivers to form the super links attached to the OC switches.

The main disadvantage of this suggested approach is that it consists of micro-electro-mechanical systems (MEMS) switches; therefore, the circuit switch needs considerable time (in milliseconds) for any reconfiguration (the reconfiguration time of the Glimmerglass switch was 25 ms). Hence, this approach is suitable for those applications where connections among nodes last more than a few seconds, to amortize the reconfiguration overhead [19]. Among all the uplinks, half are attached to electrical switches, and the remaining half are attached to the optical switches through the optical multiplexer. These super links are capable of transmitting w × 10 Gigabit per second (w = total number of wavelengths, from 1 to 32) [19].

Helios architecture's major advantage is that it is built on easily accessible optical transceivers and modules commonly used in optical telecommunication

Figure 4: c-Through architecture (core switch, aggregate switches, ToR switches, and optical circuit switch).

Figure 3: OSA architecture (optical switching matrix, WSS, couplers, MUX/DEMUX, and ToR connections).


networks [19]. The OC switch consists of Glimmerglass-based switches, and the ToR switches use WDM SFP+ optical transceivers. These WDM transceivers are either dense WDM modules, which have great cost and support greater ranges of wavelengths, or coarse WDM modules, which have less cost, support fewer wavelengths, and use a broad channel spacing (20 nm channels throughout 1270 nm to 1630 nm, C-band). A coarse WDM (CWDM) laser does not demand costly temperature stabilization. A Glimmerglass crossbar OC switch, easily available and supporting up to 64 ports, is used to measure performance. The authors in [22] introduced a new hybrid DCN model named TIO that rejects the previous optical/electrical switching scheme between racks and introduces a novel technique called visible light communication (VLC) between racks. It combines both the wireless-based VLC Jellyfish architecture and the wired-based EPS Fat Tree.

4. Traffic Modeling

Simulations are performed to determine the traffic behavior and pattern between aggregate switches and ToR switches and the network connectivity through links. After performing several simulations, we get the values (readings) of the packet arrival rate in aggregate switches and measure the ToR switches' performance on specific parameters, i.e., packet size, link capacity, and packet generation rate. The ingress packets are sent from the aggregate switch to the ToR switch. After fulfilling their request from the server at a specific time, they come back to the aggregate switch through the ToR switch. On behalf of the DC parameters, we analyze the traffic behavior on the aggregate switches and the ToR switches as follows: mean waiting time-up, mean waiting time-down, blocking-up, and blocking-down [20]. In the DCN, burstiness of traffic has two different aspects: either the traffic per flow could be extremely bursty, or the

flow arrival itself could be bursty too. The intensity of traffic burstiness may change depending on the point where the traffic is measured, that is, at end-point ports or at ToR switches. Because a variety of applications share the DC infrastructure, a combination of numerous flow types and sizes is produced. Flows may or may not have deadline limitations and may be short and latency-sensitive or large and throughput-oriented. As with some particular applications, flow sizes are unidentified. A brief summary of DC traffic control challenges is in [23, 24].
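To make the traffic model concrete, a minimal arrival generator can be sketched. The paper does not state which arrival process its simulator uses, so the Poisson process below (exponential interarrival times) is an illustrative assumption; real DC traffic, as noted above, is often burstier than Poisson, and the rate and duration values are hypothetical.

```python
import random

def generate_arrivals(rate_pps, duration_s, seed=42):
    """Generate packet arrival times from a Poisson process.

    rate_pps is the mean packet arrival rate (packets/second);
    interarrival times are exponential with mean 1/rate_pps.
    This is an assumed model, not the one specified by the paper.
    """
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_pps)
        if t > duration_s:
            break
        arrivals.append(t)
    return arrivals

# Roughly 1000 arrivals expected over a 1-second window at 1000 pps.
arrivals = generate_arrivals(rate_pps=1000, duration_s=1.0)
print(len(arrivals))
```

Burstier traffic could be produced by the same skeleton with a heavy-tailed (e.g., Pareto) interarrival distribution in place of the exponential.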

4.1. Load. The packets that arrive from the aggregate switch at the ToR switch come through a 10 GbE link. This arrival of packets between the switches through a link decides the load on the network and its performance. In our basic tree model, the alternative path between ToR and ToR is through the aggregate switch. On the other hand, the alternative path in c-Through is through the OCS manager over an optical link. So we calculate the load ρ by applying the formula

ρ = (E[Lp] / C) × (Na / ttr).    (1)

Here, E[Lp] is the average packet size, C is the link capacity, Na is the total number of packets arrived, and ttr is the total transmission time. Contrary to this, the load on the downward traffic is denoted as ρd and can be calculated as follows:

ρd = (E[Lpd] / C) × (Nad / ttr).    (2)

Here, E[Lpd] is the average packet size-down, Nad is the total number of downward packets arrived, and ttr is the total transmission time.

On the other hand, the packets that are considered as upward traffic are calculated as the load-up ρu:

ρu = (E[Lpu] / C) × (Nau / ttr).    (3)

Here, E[Lpu] is the upward average packet size, and Nau is the total number of upward packets arrived.

4.2. Mean Waiting Time. On behalf of the load and average packet size E[Lp], we calculate the mean waiting time of both types of ingress and egress traffic.
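The paper derives the mean waiting time from the load and E[Lp] but does not state the queueing model behind it. As one hedged illustration only: if each switch port were modeled as an M/M/1 queue, the mean queueing delay would follow from the load ρ and the packet service rate μ = C/E[Lp]. This closed form is an assumption for illustration, not the authors' method.

```python
def mm1_mean_wait(load, capacity_bps, mean_packet_bits):
    """Mean queueing delay for an M/M/1 queue: Wq = rho / (mu * (1 - rho)).

    mu = C / E[Lp] is the packet service rate in packets/second.
    Assumed model -- the paper does not specify M/M/1.
    """
    if load >= 1.0:
        raise ValueError("queue is unstable at load >= 1")
    mu = capacity_bps / mean_packet_bits  # packets per second
    return load / (mu * (1.0 - load))

# Hypothetical 10 GbE link, 12,000-bit packets, load 0.6:
w = mm1_mean_wait(0.6, 10e9, 12_000)
print(f"{w * 1e6:.1f} us")  # 1.8 us
```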

4.3. Blocking. Blocking is defined as the number of packets dropped, Nd, on a particular link during the transmission of packets. So we calculate the blocking probability Pb at a particular link by the following formula:

Pb = Nd / Na.    (4)

Here, Na is the total number of packets arrived, and Nd is the number of packets dropped on a particular link. The number of packets dropped on the upward traffic, Ndu, is used as follows:

Figure 5: Helios architecture (optical circuit switch, electrical packet switches, ToR (Pod) switches, and 10 Gbps WDM optical links).


Pbu = Ndu / Nau.    (5)

Here, Ndu is the total number of packets dropped on upward traffic, and Nau represents the total number of upward packets arrived. On the other hand, the number of packets dropped on the downward traffic, Ndd, is used as follows:

Pbd = Ndd / Nad.    (6)

Here, Ndd is the total number of packets dropped on downward traffic, and Nad represents the total number of downward packets arrived.
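Equations (4)-(6) are the same ratio applied to different traffic directions, as this small sketch shows; the counts are hypothetical.

```python
def blocking_probability(dropped, arrived):
    """Blocking per equations (4)-(6): Pb = Nd / Na.

    Feed the upward counts (Ndu, Nau) for Pbu, or the downward counts
    (Ndd, Nad) for Pbd. Returns 0.0 when no packets have arrived.
    """
    if arrived == 0:
        return 0.0
    return dropped / arrived

# Hypothetical: 250 drops out of 500,000 arrivals on a link.
print(blocking_probability(250, 500_000))  # 0.0005
```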

5. Simulation Structure

There are several open-source simulators to analyze the performance and characteristics of the DC, which also provide extra features for cloud computing and VM migration. Some open-source simulators that are freely downloadable are discussed here. Table 1 shows the comparison between open-source cloud simulators.

5.1. CloudSim. The idea of cloud computing introduces a completely new mode of computing technology, with promising assurance to provide sustainable, secure, fault-tolerant, and reliable services, which are introduced as software and infrastructure as a service (SaaS and IaaS) [25]. To ensure these factors, a Java-based simulation model called CloudSim was developed at the CLOUDS Laboratory, University of Melbourne, Australia. It is a well-known simulator for analyzing large-scale cloud DC parameters. Researchers use this simulation framework to analyze the cloud infrastructure and explore application services' performance in a controlled environment. It provides basic classes for describing users, VMs, computational resources, DCs, and many other user- and management-level controls of the infrastructure. The key drawback of the CloudSim simulator is the lack of a graphical user interface (GUI) [26].

5.2. GreenCloud. GreenCloud is another open-source simulator, an extension of the network simulator NS2, and it is based on CloudSim. This open-source simulator allows researchers and users to interact with, watch, and control the performance of DCs and clouds. The green cloud computing movement motivated this simulation tool, which gives an option to examine the energy efficiency of the cloud DCs. It also provides information about the DC equipment and its energy consumption [26]. This simulation framework distinguishes the energy consumption parts related to the infrastructure and computing energy of the DC [25]. The GreenCloud simulator is also used to build new workload scheduling scenarios, allocate resources, and optimize network infrastructure and communication protocols. It majorly focuses on energy consumption issues of the DCN. The major limitation of GreenCloud is that it works only with small DCs.

5.3. EMUSIM. This tool has an integrated architecture of emulation and simulation. This simulator framework is built on AEF (automated emulation framework) technology for emulation and uses the CloudSim architecture for simulation. It contains both emulation and simulation techniques in a single package. This tool is used to design an environment near real computing patterns and resources. The EMUSIM simulator does not need lower-level details like the location of a VM and the total VMs per host at a given time. The drawback of EMUSIM is that it is not scalable, due to many hardware restrictions and issues in reproducing real large networks.

5.4. iCanCloud. The iCanCloud simulator is mainly used to determine the relation between the cost and the throughput of a given set of applications on a specific hardware configuration. This simulator is built on the SIMCAN simulator [27], a simulation tool used to investigate high-performance I/O architectures. This simulator gives information about the specific parameters of the DCN [26].

5.5. IBKSim. To model the two DC architectures in this work, a simulator is needed that can analyze DCN parameters such as bandwidth, latency, and throughput. IBNSim is a network simulator developed in 2004 using an object-oriented design model. Over time, various new protocols were added to this simulator for advanced research in the networking field. IBKSim is the current edition of the simulator for research purposes and is detailed in [28]. It can simulate small queuing models and judge the performance of single and large network nodes. It is a flexible simulator, developed in C++ using an object-oriented design methodology.
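
The kind of single-node queuing model such a simulator evaluates can be illustrated with a minimal sketch. The Python code below is our illustration only, not IBKSim code (all names are ours): it simulates one FIFO queue with Poisson arrivals and exponential service and reports the mean time packets wait before service.

```python
import random

def mm1_mean_wait(arrival_rate, service_rate, n_packets=50_000, seed=1):
    """Simulate a single FIFO queue (M/M/1) and return the mean time
    packets spend waiting in the queue before service begins."""
    rng = random.Random(seed)
    t_arrival = 0.0          # arrival time of the current packet
    server_free_at = 0.0     # time the server finishes its current packet
    total_wait = 0.0
    for _ in range(n_packets):
        t_arrival += rng.expovariate(arrival_rate)
        start = max(t_arrival, server_free_at)   # wait if the server is busy
        total_wait += start - t_arrival
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_packets
```

For arrival rate 0.5 and service rate 1.0 the simulated mean wait is close to the analytic value of 1.0, and raising the arrival rate raises the wait sharply, which is the qualitative behavior analyzed in Section 7.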

6. Simulation Setup

After developing our simulator, two different layered DC architectures were built: the first is the Ethernet-based basic tree DC architecture, and the other is a hybrid architecture called c-Through. Both architectures have almost the same design: one core switch is connected to three aggregate switches through 10 GbE links. Each aggregate switch is then attached to two ToR switches with 10 GbE links, and each ToR is connected to exactly one rack. The architectures differ in two ways: in the basic tree architecture there is a 40 GbE link between the aggregate switches, which is absent in the c-Through architecture; and in the c-Through architecture the ToRs are connected to an optical circuit switch (OCS) with a 10 Gbps link, which makes it hybrid.
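
The simulated topologies can be captured as a small adjacency map. The sketch below is illustrative only: the node names are ours, and the pairing of the 40 GbE aggregate-to-aggregate links is an assumption, since the text does not specify which aggregates are joined.

```python
def build_topology(hybrid=False):
    """Build the simulated DCN as {node: {neighbor: link_gbps}}:
    one core, three aggregate switches, six ToRs, six racks."""
    links = {}
    def add(a, b, gbps):
        links.setdefault(a, {})[b] = gbps
        links.setdefault(b, {})[a] = gbps
    for i in range(1, 4):                 # core <-> aggregates, 10 GbE
        add("core", f"agg{i}", 10)
    for i in range(1, 4):                 # each aggregate feeds two ToRs
        for j in (2 * i - 1, 2 * i):
            add(f"agg{i}", f"tor{j}", 10)
            add(f"tor{j}", f"rack{j}", 10)   # one rack per ToR
    if hybrid:                            # c-Through: ToRs share an OCS, 10 Gbps
        for j in range(1, 7):
            add("ocs", f"tor{j}", 10)
    else:                                 # basic tree: 40 GbE aggregate links
        add("agg1", "agg2", 40)           # (pairing assumed for illustration)
        add("agg2", "agg3", 40)
    return links
```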

6.1. Simulation in Basic Tree. When the simulation starts, the source generates packets, and these packets are transmitted to the core switch. The shortest-path algorithm is used for routing in the core switch to minimize the network load. After receiving the packets from the source, the core node transmits the packets to the intended aggregate switch based on

Complexity 7

the routing table defined in the core switch. The flow chart in Figure 6 shows how traffic travels in the DC network simulator that we developed.

Packets are arranged in queues when they arrive at the aggregate switch. The aggregate switch takes the routing information from the core switch and then forwards the packets to the ToR switch according to the queue. The ToR switch checks the packet's internal information and forwards the packet to the rack if the server the packet is looking for is found there; if not, the packet travels back from the ToR to the aggregate switch, which sends it to another aggregate switch for the intended server, as shown in Figure 7. The maximum traffic in this type of DC architecture is between the ToRs and the aggregate switches, compared with the hybrid architecture. The simulation ends when either the input simulation time is completed or the specific event has occurred the requested number of times.
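
The shortest-path routing table used by the core switch can be sketched as a hop-count BFS over such an adjacency map. This is an illustrative reconstruction, not the simulator's actual code; the function and variable names are ours.

```python
from collections import deque

def routing_table(links, src):
    """Compute a next-hop table from `src` by breadth-first search
    (shortest path by hop count) over {node: {neighbor: capacity}}."""
    next_hop, seen, q = {}, {src}, deque([(src, None)])
    while q:
        node, first = q.popleft()
        for nbr in links.get(node, {}):
            if nbr not in seen:
                seen.add(nbr)
                hop = nbr if first is None else first  # first hop out of src
                next_hop[nbr] = hop
                q.append((nbr, hop))
    return next_hop
```

For example, with a core connected to two aggregates that each serve one ToR, the table maps every node behind `agg1` to the next hop `agg1`.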

6.2. Simulation in c-Through. When the simulation starts, the source generates packets, and these packets are transmitted to the core switch. The shortest-path algorithm is used for routing in the core switch to minimize the network load. The flow chart in Figure 8 shows how traffic travels in the DC network simulator that we developed.

After receiving the packets from the source, the core node transmits the packets to the intended aggregate switch based on the routing table defined in the core switch. Packets are arranged in queues when they arrive at the aggregate switch. The aggregate switch takes the routing information from the core switch and then forwards the packets to the ToR switch according to the queue. The ToR switch checks the packet's internal information and forwards the packet to the rack if the information the packet is looking for is found in the server. If not, an EOE (electrical-optical-electrical) conversion occurs: the electrical packet is converted into an optical one and forwarded to the optical circuit switch. The optical packet is then converted back from optical to electrical and transmitted to an alternative ToR, as shown in Figure 9. The simulation ends when either the input simulation time completes or the specific event occurs.
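
The ToR forwarding decision described above can be condensed into a small rule. The sketch below is our illustration (the function, its parameters, and the return values are assumptions, not the simulator's API): deliver locally on a hit; on a miss, cross the OCS in the hybrid design or climb back to the aggregate layer in the basic tree.

```python
def forward_from_tor(tor, packet_dst, rack_of, ocs_links, mode="hybrid"):
    """Decide the next hop for a packet arriving at a ToR switch.
    rack_of maps a destination server to the ToR of its rack;
    ocs_links is the set of ToR pairs currently joined by the OCS."""
    dst_tor = rack_of[packet_dst]
    if dst_tor == tor:
        return ("deliver", tor)        # server is in this ToR's own rack
    if mode == "hybrid" and (tor, dst_tor) in ocs_links:
        # E/O conversion, one OCS hop, O/E conversion at the far ToR
        return ("optical", dst_tor)
    return ("electrical", "aggregate")  # back up the electrical tree
```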

6.3. Simulation Parameters. At the start of the simulation, the source generates 2000 packets, increasing by a multiple of 2 per simulation run, and sends them to the core switch. The core switch distributes the packets uniformly and transmits them to the aggregate switches through 10 GbE links. Packets travel from the aggregate switches to the ToR switches through 10 GbE links, and the racks are connected to the ToRs through 10 GbE links. In the c-Through DC architecture, a 10 Gbps optical link is used between two ToRs through the OCS, while in the basic tree architecture the link between two aggregate switches is 40 GbE. The mean packet size in our simulation is 0.001953125, and at each switch or node the maximum packet limit of the queue is 1000.
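
These parameters can be gathered into a single configuration sketch. The dictionary below simply restates the values given in this subsection (the key names are ours), and the helper makes the doubling schedule of the packet count explicit; note that the stated mean packet size 0.001953125 equals 1/512.

```python
PARAMS = {
    "initial_packets": 2000,          # doubled on every simulation run
    "mean_packet_size": 0.001953125,  # = 1/512, as given in Section 6.3
    "queue_limit": 1000,              # max packets queued per switch/node
    "edge_link_gbps": 10,             # core-agg, agg-ToR, ToR-rack links
    "tree_agg_link_gbps": 40,         # aggregate-aggregate link (tree only)
    "ocs_link_gbps": 10,              # ToR-OCS optical link (c-Through only)
}

def packets_for_run(run_index):
    """Packet count for the n-th run: 2000, 4000, 8000, ..."""
    return PARAMS["initial_packets"] * 2 ** run_index
```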

7. Results and Discussion

In the following graphs, mean waiting time-down refers to traffic flowing downward, from the core to the aggregate layer and from the ToRs to the racks or the OCS, whereas load-down represents the load of downward traffic on a link. The graphs show the behavior of downward-moving traffic against the load.

The graph shown in Figure 10 represents the behavior of downward traffic at the aggregate layer. In the graph, the dotted lines of the tree-based architecture show that the mean waiting time-down and the load-down increase at the aggregate layer compared with the c-Through

Figure 6: Flow chart of basic tree DCN simulation.

Table 1: Comparison between open-source cloud simulators.

Parameters                       | CloudSim | GreenCloud      | EMUSIM          | iCanCloud        | IBKSim
GUI support                      | NA       | Limited support | Limited support | Full GUI support | Limited support
Language                         | Java     | C++, OTcl       | Java            | OMNeT++, MPL, C++| C++, XML
Platform                         | GridSim  | NS2             | AEF, CloudSim   | SIMCAN           | NA
Power efficiency                 | Limited  | Yes             | Yes             | Limited          | Limited
Simulation time                  | Seconds  | Tens of minutes | Tens of minutes | Tens of minutes  | Depends on network size
Multi-DC comparisons             | No       | No              | No              | No               | Available
Support for parallel experiments | No       | No              | No              | Available        | Available


hybrid architecture. This is because, in the tree architecture, packets arrive at the aggregate switch from both the core and the other aggregate switches. Packets are arranged in queues, and the more packets that arrive, the longer the earlier packets have to wait in the queue, which increases the mean waiting time of the downward traffic. Mean waiting time-down therefore increases more rapidly in the tree architecture than in the hybrid architecture, which grows more gradually and shows more consistent behavior.
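
This steep, nonlinear growth of waiting time with load matches basic queueing theory. As a hedged illustration (treating each link as an M/M/1 queue, which the paper does not state explicitly), the analytic mean queueing delay Wq = rho / (mu * (1 - rho)) grows sharply as the load approaches saturation:

```python
def mm1_queueing_delay(load, service_rate=1.0):
    """Analytic M/M/1 mean queueing delay Wq = rho / (mu * (1 - rho)),
    illustrating why mean waiting time climbs steeply with link load."""
    assert 0 <= load < 1, "the queue is only stable below full load"
    return load / (service_rate * (1.0 - load))

# Doubling the load more than doubles the wait:
# load 0.2 -> 0.25, load 0.4 -> ~0.67, load 0.8 -> 4.0
```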

On the other hand, the graph in Figure 11 shows the behavior of downward traffic at the ToR layer. The dotted lines of the tree architecture show that the mean waiting time-down and the load-down increase at a much higher rate at the ToR layer compared with the c-Through hybrid architecture, and behave inconsistently. This is because the transmission of packets from the aggregate switch, on top of the packets already in the ToR switch's queue, increases the load on the ToR switch. Moreover, in the tree architecture, packet communication between ToRs goes through the aggregate switches, which also increases the load on the ToR switches, whereas c-Through has the OCS as an alternative route for ToR-to-ToR communication. That is another factor in the inconsistent behavior of the tree architecture.

The graphs in Figures 12 and 13 show the behavior of upward traffic at the aggregate and ToR layers. In the tree architecture, the mean waiting time-up and the load-up in

Figure 7: Simulated basic tree DC architecture.

Figure 8: Flow chart of hybrid DCN simulation.

both the aggregate and ToR layers increase irregularly. At the start, they first remain constant at both switches and then gradually increase. This gradual increase

at the aggregate layer occurs because the traffic from the core to the aggregate layer is much less than from the aggregate layer to the ToRs, while in the hybrid case the packet-treatment rate is greater than in the tree architecture and the core-to-aggregate traffic is greater than in the tree architecture. In Figure 13, the graph increases in a discrepant manner and shows variable behavior. In the tree architecture, packet communication between ToRs goes through the aggregate switches, so ToR-to-ToR communication through the aggregate switches increases the mean waiting time-up and the load-up, whereas the c-Through architecture uses the OCS as an alternative route for ToR-to-ToR communication, so the hybrid's mean waiting time-up appears as a straight line on the graph. That is another factor in the inconsistent behavior of the tree architecture.

In Figures 14 and 15, the graphs show the blocking-down against the load-down. At the ToR layer, the hybrid architecture increases more consistently than the tree architecture, while at the aggregate layer both architectures show the same behavior and pattern.

Figure 9: Simulated hybrid DC architecture.

Figure 10: Mean waiting time-down of aggregate layer.

Figure 11: Mean waiting time-down of ToR layer.


Figure 12: Mean waiting time-up of aggregate layer.

Figure 13: Mean waiting time-up of ToR layer.

Figure 14: Blocking-down of aggregate layer.


8. Conclusion

For this comparison, we first developed a tool that specifically measures DC architecture parameters. We explored each switch component in the layered architectures and its characteristics. Furthermore, we analyzed the load, blocking, and mean waiting time behavior of the aggregate and ToR switches in both architectures through their traffic behavior. In the future, we can model other network parameters with this tool and compare the traffic and load parameters of the c-Through architecture with other fixed and flexible DCN topologies. The development of this research tool will enable further comparisons with other DC architectures.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors are grateful to the Deanship of Scientific Research, King Saud University, for funding this research work. The authors are also grateful to Taif University Researchers Supporting Project number (TURSP-2020/215), Taif University, Taif, Saudi Arabia. This work was also supported by the Faculty of Computer Science and Information Technology, University of Malaya, under Postgraduate Research Grant (PG035-2016A).

References

[1] S. Basu, C. McArdle, and L. P. Barry, "Scalable OCS-based intra/inter data center network with optical ToR switches," in Proceedings of the 2016 18th International Conference on Transparent Optical Networks (ICTON), pp. 1-4, Warsaw, Poland, July 2016.

[2] B. Wang, Z. Qi, R. Ma, H. Guan, and A. V. Vasilakos, "A survey on data center networking for cloud computing," Computer Networks, vol. 91, pp. 528-547, 2015.

[3] L. Paraschis, "Advancements in data-center networking, and the importance of optical interconnections," in Proceedings of the 39th European Conference and Exhibition on Optical Communication (ECOC 2013), pp. 1-3, London, UK, September 2013.

[4] N. Farrington, G. Porter, S. Radhakrishnan et al., "Helios: a hybrid electrical/optical switch architecture for modular data centers," ACM SIGCOMM Computer Communication Review, vol. 40, pp. 339-350, 2010.

[5] S. Kumar, G. Papen, K. Schmidtke, and C. Xie, "Intra-data center interconnects, networking, and architectures," in Optical Fiber Telecommunications VII, pp. 627-672, Elsevier, Amsterdam, Netherlands, 2020.

[6] N. Panahi, D. Careglio, and J. S. Pareta, "A hybrid packet/circuit optical transport architecture for DCN," in Proceedings of the 2019 21st International Conference on Transparent Optical Networks (ICTON), pp. 1-4, Angers, France, July 2019.

[7] A. Headquarters, "Cisco data center infrastructure 2.5 design guide," Cisco Validated Design, vol. I, 2007.

[8] J. Zhang, F. R. Yu, S. Wang, T. Huang, Z. Liu, and Y. Liu, "Load balancing in data center networks: a survey," IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 2324-2352, 2018.

[9] M. Shafiee and J. Ghaderi, "A simple congestion-aware algorithm for load balancing in datacenter networks," IEEE/ACM Transactions on Networking, vol. 25, no. 6, pp. 3670-3682, 2017.

[10] D. Li, Y. Li, J. Wu, S. Su, and J. Yu, "ESM: efficient and scalable data center multicast routing," IEEE/ACM Transactions on Networking, vol. 20, pp. 944-955, 2011.

[11] C. F. Lam, "Optical network technologies for datacenter networks," in Proceedings of the National Fiber Optic Engineers Conference, Los Angeles, CA, USA, March 2010.

[12] C. Kachris, K. Kanonakis, and I. Tomkos, "Optical interconnection networks in data centers: recent trends and future challenges," IEEE Communications Magazine, vol. 51, no. 9, pp. 39-45, 2013.

[13] Cisco, Classic Network Design Using Cisco Nexus 9000 Series Switches, Cisco, San Francisco, CA, USA, 2015.

[14] K. Bilal, S. U. R. Malik, O. Khalid et al., "A taxonomy and survey on green data center networks," Future Generation Computer Systems, vol. 36, pp. 189-208, 2014.

[15] D. Belabed, S. Secci, G. Pujolle, and D. Medhi, "Striking a balance between traffic engineering and energy efficiency in virtual machine placement," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 202-216, 2015.

[16] H. Goudarzi and M. Pedram, "Energy-efficient virtual machine replication and placement in a cloud computing system," in Proceedings of the 2012 IEEE Fifth International Conference on Cloud Computing, pp. 750-757, Honolulu, HI, USA, June 2012.

[17] K. Chen, A. Singla, A. Singh et al., "OSA: an optical switching architecture for data center networks with unprecedented flexibility," IEEE/ACM Transactions on Networking, vol. 22, pp. 498-511, 2013.

[18] G. Wang, D. G. Andersen, M. Kaminsky et al., "c-Through: part-time optics in data centers," in Proceedings of the ACM SIGCOMM 2010 Conference, pp. 327-338, New Delhi, India, September 2010.

Figure 15: Blocking-down of ToR layer.


[19] C. Kachris and I. Tomkos, "A survey on optical interconnects for data centers," IEEE Communications Surveys & Tutorials, vol. 14, no. 4, pp. 1021-1036, 2012.

[20] D. Wu, Designing Hybrid Data Center Networks for High Performance and Fault Tolerance, Rice University, Houston, TX, USA, 2020.

[21] Y. Liu, J. K. Muppala, M. Veeraraghavan, D. Lin, and M. Hamdi, Data Center Networks: Topologies, Architectures and Fault-Tolerance Characteristics, Springer Science & Business Media, Berlin, Germany, 2013.

[22] Y. Qin, D. Guo, G. Tang, and B. Ren, "TIO: a VLC-enabled hybrid data center network architecture," Tsinghua Science and Technology, vol. 24, no. 4, pp. 484-496, 2019.

[23] M. Noormohammadpour and C. S. Raghavendra, "Datacenter traffic control: understanding techniques and tradeoffs," IEEE Communications Surveys & Tutorials, vol. 20, pp. 1492-1525, 2017.

[24] A. Saeed, N. Dukkipati, V. Valancius, V. The Lam, C. Contavalli, and A. Vahdat, "Carousel: scalable traffic shaping at end hosts," in Proceedings of the Conference of the ACM Special Interest Group on Data Communication, pp. 404-417, Los Angeles, CA, USA, August 2017.

[25] U. Sinha and M. Shekhar, "Comparison of various cloud simulation tools available in cloud computing," International Journal of Advanced Research in Computer and Communication Engineering, vol. 4, pp. 171-176, 2015.

[26] M. S. Mandeep and S. Bhatia, "A critical review & analysis of cloud computing simulators," IJLTET, pp. 29-36, 2016.

[27] R. Malhotra and P. Jain, "Study and comparison of various cloud simulators available in the cloud computing," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, pp. 347-350, 2013.

[28] L. Wallentin, M. Happenhofer, C. Egger, and J. Fabini, "XML meets simulation: concepts and architecture of the IBKSim network simulator," SNE, vol. 41, 2010.



and maintaining the rapidly increasing internetwork-based applications, which include large-scale computations (e.g., indexing, bioinformatics, and data mining), web hosting, video hosting with its distribution (e.g., YouTube and Netflix), as well as a huge number of social networking applications (e.g., Facebook and Twitter) [4]. The DCNs serve as a backbone in a new model of computing to meet the needs of huge numbers of users in a more efficient way.

Over the last decades, the enormous increase in Internet traffic, driven predominantly by the growing number of web applications such as social networking and cloud computing, has created the requirement for more dynamic and energetic DCs. Servers will experience very low-latency communication even as DCs grow larger, containing thousands of servers per rack. This set a major challenge for DC networking: the essential need to build a new, effective interconnection infrastructure with maximum bandwidth and minimum latency. As the size of the warehouse-scale (WS) data center increases, communication becomes more challenging in terms of inter- as well as intra-data center communication [5], power consumption, high bandwidth requirements, and small latency delay [3, 6].

2. DC Architecture

As technology has grown, different DC architectures have emerged to support computing demands. Currently, DCN architectures are based on a standard layered approach, as shown in Figure 1, which has passed through several tests and been improved over the past several years. The layered approach to DC infrastructure, sometimes also called the hierarchical internetworking model [2], includes the core layer, the aggregation layer, and the access layer.

The core layer is the uppermost layer in the DCN design. It is responsible for the fast transmission of data across the whole network, which is why it is recognized as the backbone of the DCN: all other layers depend on it. The core layer is not responsible for routing traffic (packets) at the LAN level; moreover, the switches used in this layer do not deal with packet controlling. It has high-level duties in the network and takes most of the time spent designing the DCN [7].

This layer's main purpose is to provide minimum latency in the delivery of packets. It consists of high-speed fiber cables, chosen for their high transmission rate and minimum reconfiguration time, and high-end switches. The key term the core layer focuses on is efficiency, and the essential consideration in designing the DC core layer switches is speed. The core layer provides a maximum data transfer rate through load sharing, because multiple connections are used in the network through which the traffic travels; multiple connections also maximize the network's fault tolerance. In addition, the core layer of the DCN deals with high-speed packet switching for all the ingress and

egress packets in the DCN. The core layer of the DCN also offers a fabric for high-speed packet switching among multiple aggregation modules. It promises a resilient, Layer 3 routed fabric without any single point of failure, as shown in Figure 1.

After the core layer in the DC infrastructure, the aggregation layer is designed. It is the middle layer and acts as a medium connecting the core and access layers. The aggregation layer consists of Layer 3 switches and LAN-based routers. This layer makes sure that packets are accurately transmitted among subnets and VLANs in the network. This layer of the DCN is also known as the workgroup layer; owing to the huge number of tasks it performs in the DCN, it is also called the distribution layer or services layer. It provides vital functions, for example, service module integration, minimum spanning tree execution, and default gateway redundancy [7]. Multi-tier server-to-server traffic travels through the aggregation layer and utilizes multiple services such as firewalls, load balancing, and optimization applications [8, 9].

In Figure 1, the block switch icons in the aggregation layer show the integrated service modules. The main aspect of the aggregation layer is routing: the aggregation layer is responsible for routing and regulating packet transmission based on information about a packet's origin and destination, and for building a network border of communication. The layer provides an opportunity to create different protocol gateways for ingress and egress across multiple network architectures. This layer also acts as the limit or edge for broadcast and multicast domains [10]. The Layer 3 switches or routers examine packets and then prioritize delivering them to the destination based on the policies and rules that have been set and applied.

The switches in the aggregation layer also have 10 GigE links to the switches used as ToR switches in the access layer. Moreover, an aggregation module has four pairs of 10 GigE links (4 x 10 = 40 GigE) to other aggregation modules when multiple aggregation modules are used in the data center network, as shown in Figure 1. The 40 GigE links joining the multiple aggregation modules give high bandwidth, avoid path blocking, provide quick convergence, and avoid the issues of overwhelming and arbitrary broadcast.

In the DCN, the bottommost layer is the access layer. It is the edge of the DCN, where the switches called Top of Rack (ToR) switches are attached to the servers, as shown in Figure 1. The DC access layer offers physical connections to the servers and works in Layer 2 or Layer 3 mode. The primary oversubscription point is in the access layer, because it aggregates the server traffic onto the 1 or 10 GigE channel uplinks into the DC aggregation layer. In the access layer, servers are physically connected to the network [7].

The server modules involve 1-rack-unit (RU) servers, blade servers with integral switches, and blade and clustered servers attached by pass-through cabling. Moreover, the access layer architecture also includes configured 1RU or 2RU switches, modular switches, and integral blade server switches. Layer 2 and Layer 3-based architectures are realized by the switches used in this layer, satisfying the server broadcast domains. The DC access


layer provides an opportunity to program a switch so that only particular systems on the network can access the related or joined LANs. The switch facilitates the nodes on the network by creating separate collision domains for every single connected node, enhancing the performance of the network. By using load balancing, data can be moved from one network to another.

In the access layer, ToR switches are usually used in every rack and have 1 GigE or 10 GigE connections to the network. These ToR switches connect to a number of servers, including application servers, web servers, and data storage servers, as shown in Figure 1. This ToR architecture minimizes the cabling structure. ToR switches are present in greater numbers than the switches used in the aggregation and core layers. A single DC involves more than a thousand servers, arranged in racks of 20-40 servers each [11, 12]. Servers within each rack are attached to the ToR switch, as shown in Figure 2, which is then further connected to the layers of clusters of

switches in the DC. The ToR switches connect to the aggregation layer switches through various high-speed 10 GigE connections that are clustered together in a port channel. A packet travels through the core and aggregation layers; based on the information in the packet, the ToR examines and analyzes the packets and fetches the data from the required server. All the information is encapsulated in the packet [7, 13]. Generally, less than 25% of the average network peak load is held by DCNs, and a large number of links in a DCN stay idle for almost 70% of the time [14].

An important issue in DC traffic management is load balancing, intended to reduce packet delay in the internetwork and intranetwork; moreover, it can redirect flows by assessing energy-efficiency targets. The primary factor in the energy consumption of DCs is the servers: a huge number of running servers results in higher power consumption. The longer traffic stays in the DCN, the more it affects not only energy consumption but also packet delay. Hence, optimizing the utilization of DCN servers results in

Figure 1: DCN layered architecture connected to different servers through ToR.

more efficient energy savings. Currently, a hot new research area in DCs is virtual machine (VM) placement. By appropriately placing VMs with respect to the network and the applications' resource requirements, the network manager is able to save the superfluous resources for other services and make the DCN much more energy-efficient [15]. In this paper, we do not consider the whole network and the placement of VMs; we only model the traffic within the DC network [8, 16]. VM placement and task distribution on different VMs are not considered because we only consider the traffic within the DCN (inter- and intra-DC network traffic).

3. Optical and Hybrid Data Center Architecture

3.1. OSA. The first design that uses optical switching technology in the DCN is named OSA, shown in Figure 3. The OSA DC architecture is a "pure" optical switching network, meaning that this architecture abandons switches based on an electrical core and instead uses optical switches to build the switching core [17].

The ToR switches remain electrical, however, so the signals must be transformed between optical and electrical forms between the different servers and switching cores. In the OSA architecture, the switching core has several connections to each ToR switch, and the arrangement of connections is considerably flexible, depending on the traffic demand.

Meanwhile, because the architecture cannot provide direct optical links between every rack pair according to the traffic demands, the design's controlling system creates a connected graph topology, and the ToR switches are liable to deliver the traffic among the other ToR switches. Therefore, in the OSA architecture, several optical links are provided to each rack [4].

3.2. c-Through. Wang et al. proposed the first hybrid electrical-optical data center network model, named c-Through. It is a three-level, multirooted hybrid electrical and optical DCN model, shown in Figure 4, proposed as an advancement over current DCNs. c-Through is also known as HyPaC (hybrid packet and circuit) DCN. In the c-Through architecture, ToR switches are linked to both networks, one made from electrical packet-based networks (Ethernet) and one from optical circuit-based networks [18].

This shows that it has two major parts: first, a tree-like electrical network that maintains connectivity among every pair of ToR switches, and second, a reconfigurable optical-circuit network that provides maximum bandwidth among various racks of servers. The optical switch is configured so that each pair of racks with a high bandwidth request is attached to it. Because of the high cost of optical networks and the promising maximum bandwidth of the optical links, there is no need to provide optical links to every single pair of racks, which would increase the cost of the network [19]. In the c-Through network, the configuration of the optical part depends upon the traffic among the different racks of servers. The c-Through architecture estimates the rack-to-rack traffic demand by observing socket buffer occupancy. Each rack has a single optical link, so the whole architecture is reconfigurable at any time.

A traffic monitoring system positioned in the hosts calculates the bandwidth demand toward the other hosts in the network. The optical configuration manager accumulates these measurements and then derives the optical switch configuration from the traffic requirements. The traffic requirements and the attached links together are expressed as a maximum weight perfect matching problem. That is why, in the c-Through network,

Figure 2: ToR switch connection with each server in the rack.

the Edmonds algorithm is used as the solution to the maximum weight perfect matching problem [4]. After the optical switches' configuration, the optical manager notifies the ToR switches to route packets based on this information. Traffic in ToR switches is demultiplexed through virtual local area network (VLAN)-based routing [20].

Two different types of VLAN-based routing are used because of the electrical-optical network model: one for packet-network-based routing and the second for optical circuit (OC) network-based routing [18].
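
The maximum weight perfect matching step described above can be illustrated on a toy demand matrix. c-Through uses Edmonds' algorithm for this; the brute-force sketch below (our code, feasible only for a handful of racks, with all names our own) enumerates every perfect matching to show the objective being optimized: pair up racks so that the total inter-rack traffic demand carried by the optical circuits is maximal.

```python
def best_ocs_matching(demand):
    """Return the rack pairing that maximizes total inter-rack demand.
    `demand` maps unordered rack pairs (a, b) to a traffic estimate."""
    racks = sorted({r for pair in demand for r in pair})

    def weight(pairing):
        return sum(demand.get((a, b), demand.get((b, a), 0.0))
                   for a, b in pairing)

    def matchings(rs):
        # Enumerate all perfect matchings of the rack list recursively.
        if not rs:
            yield []
            return
        a = rs[0]
        for i in range(1, len(rs)):
            rest = rs[1:i] + rs[i + 1:]
            for m in matchings(rest):
                yield [(a, rs[i])] + m

    best, best_w = None, -1.0
    for m in matchings(racks):
        w = weight(m)
        if w > best_w:
            best, best_w = m, w
    return best, best_w
```

With demands r1-r2: 5, r1-r3: 1, r2-r4: 2, r3-r4: 7, the best pairing is {r1-r2, r3-r4} with total weight 12; Edmonds' algorithm finds the same optimum in polynomial time for realistic rack counts.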

3.3. Helios. In 2010, Farrington et al. from UCSD proposed an electrical-optical switched network model for modular DCNs, as mentioned in [4], known as Helios. Helios is another hybrid network based on the electrical-optical network model, just like the c-Through design; however, the main difference is that it is

based on wavelength division multiplexing (WDM) links [4], as shown in Figure 5.

It is a two-level, multirooted hybrid electrical and optical DCN model. This hybrid architecture consists of core switches and ToR switches, the latter known as Pod switches. The Pod or ToR switches are electrical packet-based switches; the core switches, in contrast, are either electrical packet-based switches or OC (optical circuit)-based switches [19, 21]. Thus, the electrical packet-based switches are involved in all ToR switch communication.

In contrast, the OC-based switches are employed for varying large-bandwidth communication among ToR switches. The Helios architecture uses electrical packet-based switching to handle the bursty type of traffic flow in the network, while the OC-based switched network facilitates slow-changing traffic with high bandwidth. Each Pod or ToR switch contains colorless optical transceivers and WDM-based optical transceivers. The uncolored optical transceivers link the Pod switches to the core electrical packet-based switches; by contrast, an optical multiplexer multiplexes the WDM optical transceivers to form the superlinks attached to the OC switches.

The main disadvantage of this approach is that it relies on micro-electro-mechanical systems (MEMS) switches; therefore, the circuit switch needs considerable time (milliseconds) for any reconfiguration (the reconfiguration time of the Glimmerglass switch was 25 ms). Hence, this approach is suited to applications where connections among nodes last more than a few seconds, to amortize the reconfiguration overhead [19]. Among all the uplinks, half are attached to electrical switches, and the remaining half are attached to the optical switches through the optical multiplexer. These superlinks are capable of transmitting w x 10 Gigabit per second (w is the total number of wavelengths, ranging from 1 to 32) [19].

Helios architecture's major advantage is that it is built on easily accessible optical transceivers and modules commonly used in optical telecommunication networks [19]. The OC switch consists of Glimmerglass-based switches, and the ToR switches use WDM SFP+ optical transceivers. These WDM transceivers are either dense WDM (DWDM) modules, which cost more and support a greater range of wavelengths, or coarse WDM (CWDM) modules, which cost less, support fewer wavelengths, and use broad channel spacing (20 nm channels from 1270 nm to 1630 nm). CWDM lasers do not demand costly temperature stabilization. A Glimmerglass crossbar OC switch, which is readily available and supports up to 64 ports, is used for the performance measurements. The authors in [22] introduced a new hybrid DCN model named TIO that rejects the previous optical/electrical switching scheme between racks and introduces a novel technique, visible light communication (VLC), between racks; it combines the wireless VLC-based Jellyfish architecture with the wired EPS-based Fat Tree.

Figure 4: c-Through architecture (core switch, aggregate switches, ToR switches, and an optical circuit switch).

Figure 3: OSA architecture (ToR switches with MUX/DEMUX and couplers connected through a WSS-based optical switching matrix).

4. Traffic Modeling

Simulations are performed to determine the traffic behavior and pattern between aggregate switches and ToR switches over the connecting links. After performing several simulations, we obtain the packet arrival rates at the aggregate switches and measure the ToR switches' performance for specific parameters, i.e., packet size, link capacity, and packet generation rate. The ingress packets are sent from the aggregate switch to the ToR switch; after a request is served by the server at a specific time, the packets return to the aggregate switch through the ToR switch. Based on the DC parameters, we analyze the traffic behavior at the aggregate and ToR switches as follows: mean waiting time-up, mean waiting time-down, blocking-up, and blocking-down [20]. In a DCN, traffic burstiness has two aspects: the traffic within a flow can be extremely bursty, or the flow arrivals themselves can be bursty. The intensity of traffic burstiness may change depending on where the traffic is measured, that is, at end-point ports or at ToR switches. Because a variety of applications share the DC infrastructure, a combination of numerous flow types and sizes is produced. Flows may or may not have deadline constraints and may be short and latency-sensitive or large and throughput-oriented; for some applications, flow sizes are unknown. A brief summary of DC traffic-control challenges is given in [23, 24].

4.1. Load. The packets arriving from the aggregate switch at the ToR switch travel over a 10 GbE link. This arrival of packets between the switches over a link determines the load on the network and its performance. In our basic tree model, the alternative path between ToR and ToR is through the aggregate switch; in c-Through, on the other hand, the alternative path is an optical link through the OCS manager. We calculate the load ρ with the formula

ρ = (E[L_p] × N_a) / (C × t_tr). (1)

Here, E[L_p] is the average packet size, C is the link capacity, N_a is the total number of packets arrived, and t_tr is the total transmission time. The load of the downward traffic is denoted ρ_d and calculated as follows:

ρ_d = (E[L_pd] × N_ad) / (C × t_tr). (2)

Here, E[L_pd] is the average downward packet size and N_ad is the total number of downward packets arrived.

Correspondingly, for the packets that form the upward traffic, the load-up ρ_u is

ρ_u = (E[L_pu] × N_au) / (C × t_tr). (3)

Here, E[L_pu] is the average upward packet size and N_au is the total number of upward packets arrived.
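As a quick sanity check of equations (1)-(3), the load can be evaluated by a small helper. This is our own sketch (function and variable names are illustrative, not taken from the simulator's source), assuming packet size in gigabits, capacity in Gbps, and transmission time in seconds:

```python
# Load per equations (1)-(3): rho = (E[Lp] * Na) / (C * ttr).
# The same formula is applied separately to upward and downward traffic.

def load(mean_packet_size, link_capacity, packets_arrived, transmission_time):
    return (mean_packet_size * packets_arrived) / (link_capacity * transmission_time)

# Example with the simulation's mean packet size of 0.001953125 on a
# 10 GbE link, 2000 packets over a 1-second window:
rho = load(0.001953125, 10.0, 2000, 1.0)
print(rho)  # 0.390625
```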

4.2. Mean Waiting Time. Based on the load and the average packet size E[L_p], we calculate the mean waiting time of both ingress and egress traffic.
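The paper gives no closed-form expression here; in a simulation, mean waiting time is typically measured directly from per-packet timestamps. The following is our own sketch of such a measurement (names and numbers are illustrative, not the simulator's code):

```python
# Mean waiting time as the average gap between when a packet joins a
# queue and when its transmission (service) begins.

def mean_waiting_time(enqueue_times, service_start_times):
    waits = [s - e for e, s in zip(enqueue_times, service_start_times)]
    return sum(waits) / len(waits)

# Three packets queued at t = 0.0, 0.1, 0.2 s that start service at
# t = 0.05, 0.2, 0.4 s:
print(mean_waiting_time([0.0, 0.1, 0.2], [0.05, 0.2, 0.4]))
```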

4.3. Blocking. Blocking is defined in terms of the number of packets dropped, N_d, on a particular link during the transmission of packets. We calculate the blocking probability P_b at a particular link with the following formula:

P_b = N_d / N_a. (4)

Here, N_a is the total number of packets arrived and N_d is the number of packets dropped on the link. The number of packets dropped on the upward traffic, N_du, yields the blocking-up as follows:

Figure 5: Helios architecture (an optical circuit switch and electrical packet switches above the ToR (pod) switches, connected by 10 Gbps WDM optical links).


P_bu = N_du / N_au. (5)

Here, N_du is the total number of packets dropped on the upward traffic and N_au is the total number of upward packets arrived. Likewise, the blocking for the packets dropped on the downward traffic, N_dd, is calculated as follows:

P_bd = N_dd / N_ad. (6)

Here, N_dd is the total number of packets dropped on the downward traffic and N_ad is the total number of downward packets arrived.
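Equations (4)-(6) share one ratio, so a single helper covers all three cases. A minimal sketch (names are ours):

```python
# Blocking probability per equations (4)-(6): Pb = Nd / Na.
# Apply to total, upward (Ndu/Nau), or downward (Ndd/Nad) counts.

def blocking(dropped, arrived):
    return dropped / arrived

print(blocking(50, 2000))   # 0.025
print(blocking(120, 4000))  # 0.03
```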

5. Simulation Structure

There are several open-source simulators for analyzing the performance and characteristics of a DC, some of which also provide extra features such as cloud computing and VM migration. Some freely downloadable open-source simulators are discussed here; Table 1 shows a comparison between them.

5.1. CloudSim. The idea of cloud computing introduces a completely new mode of computing technology, with the promise of sustainable, secure, fault-tolerant, and reliable services offered as software and infrastructure as a service (SaaS and IaaS) [25]. To study these factors, a Java-based simulation model called CloudSim was developed at the CLOUDS Laboratory, University of Melbourne, Australia. It is a well-known simulator for analyzing large-scale cloud DC parameters. Researchers use this simulation framework to analyze cloud infrastructure and explore application services' performance in a controlled environment. It provides basic classes for describing users, VMs, computational resources, DCs, and many other user- and management-level controls of the infrastructure. The key drawback of the CloudSim simulator is its lack of a graphical user interface (GUI) [26].

5.2. GreenCloud. GreenCloud is another open-source simulator, built as an extension of the network simulator NS2. It allows researchers and users to interact with, watch, and control the performance of DCs and clouds. Developed in the wake of the green cloud computing movement, it offers an option to examine the energy efficiency of cloud DCs and provides information about the energy consumption of DC equipment [26]. The framework distinguishes the energy consumed by the infrastructure from the computing energy of the DC [25]. The GreenCloud simulator is also used to build new scheduling and workload scenarios, allocate resources, and optimize network infrastructure and communication protocols; it focuses mainly on the energy consumption issues of the DCN. The major limitation of GreenCloud is that it works only with small DCs.

5.3. EMUSIM. This tool has an integrated architecture of emulation and simulation. The framework is built on AEF (automated emulation framework) technology for emulation and uses the CloudSim architecture for simulation, so it contains both techniques in a single package. The tool is used to design an environment close to real computing patterns and resources. The EMUSIM simulator does not need lower-level details such as the location of a VM or the total VMs per host at a given time. The drawback of EMUSIM is that it is not scalable, owing to hardware restrictions and the difficulty of reproducing real large networks.

5.4. iCanCloud. The iCanCloud simulator is mainly used to determine the relation between the cost and the throughput of a given set of applications on a specific hardware configuration. It is built on the SIMCAN simulator [27], a simulation tool used to investigate high-performance I/O architectures, and gives information about specific DCN parameters [26].

5.5. IBKSim. For the simulation modeling of the two DC architectures, a simulator is needed that can analyze the DCN parameters bandwidth, latency, and throughput. IBNSim is a network simulator developed in 2004 using an object-oriented design model; over time, various new protocols were added for advanced research in the networking field. IBKSim is the current edition of this simulator and is detailed in [28]. It can simulate small queuing models and judge the performance of single and large network nodes. It is a flexible simulator, developed in C++ using an object-oriented design methodology.
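To give a flavor of the small queuing models such simulators execute, here is a toy finite-buffer FIFO queue with deterministic service. This is our own sketch with illustrative names and numbers, not IBKSim code (which is written in C++):

```python
# Single-server FIFO queue with a fixed service time and a finite
# buffer: packets arriving to a full buffer are dropped (blocked).

def simulate_queue(arrival_times, service_time, max_queue=1000):
    """Return (mean waiting time, dropped count)."""
    free_at = 0.0      # time the server next becomes idle
    in_system = []     # departure times of packets still queued/in service
    waits, dropped = [], 0
    for t in sorted(arrival_times):
        in_system = [d for d in in_system if d > t]  # purge departed packets
        if len(in_system) >= max_queue:
            dropped += 1                             # buffer full: blocked
            continue
        start = max(t, free_at)                      # wait for the server
        waits.append(start - t)
        free_at = start + service_time
        in_system.append(free_at)
    mean_wait = sum(waits) / len(waits) if waits else 0.0
    return mean_wait, dropped

print(simulate_queue([0.0, 0.1, 0.2, 0.3], 0.25, max_queue=2))
```

With a buffer of two, the third packet arrives while both slots are occupied and is dropped, which is exactly the blocking counted by equation (4).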

6. Simulation Setup

After developing our simulator, two different layered DC architectures were built: the first is Ethernet-based, called basic tree DC architecture, and the other is a hybrid architecture called c-Through. Both architectures are designed almost identically: one core switch is connected to three aggregate switches through 10 GbE links, each aggregate switch is attached to two ToR switches with 10 GbE links, and each ToR is connected to exactly one rack. The architectures differ in two ways: in basic tree architecture there is a 40 GbE link between the aggregate switches, which is not present in c-Through; and in c-Through the ToRs are connected to an optical circuit switch (OCS) with a 10 Gbps link, which makes it hybrid.
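The setup just described can be sketched as an adjacency list. The switch names and the choice of which aggregate pairs carry the 40 GbE links are our own assumptions (the text does not specify the exact pairing):

```python
# Adjacency-list sketch of the two simulated topologies: one core,
# three aggregates, six ToRs, six racks; link speeds follow Section 6.

def build_topology(hybrid):
    links = {}
    def add(a, b, gbps):
        links.setdefault(a, []).append((b, gbps))
        links.setdefault(b, []).append((a, gbps))
    for agg in ("agg1", "agg2", "agg3"):
        add("core", agg, 10)                 # core <-> aggregate, 10 GbE
    for i in range(1, 7):
        agg = f"agg{(i + 1) // 2}"           # tor1-2 -> agg1, tor3-4 -> agg2, ...
        add(agg, f"tor{i}", 10)              # aggregate <-> ToR, 10 GbE
        add(f"tor{i}", f"rack{i}", 10)       # ToR <-> rack, 10 GbE
    if hybrid:
        for i in range(1, 7):
            add(f"tor{i}", "ocs", 10)        # c-Through: ToR <-> OCS, 10 Gbps optical
    else:
        add("agg1", "agg2", 40)              # basic tree: aggregate <-> aggregate, 40 GbE
        add("agg2", "agg3", 40)              # (pairing assumed)
    return links

tree = build_topology(hybrid=False)
cthrough = build_topology(hybrid=True)
print(len(tree), len(cthrough))  # 16 nodes vs. 17 (the extra one is the OCS)
```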

6.1. Simulation in Basic Tree. When the simulation starts, the source generates packets, and these packets are transmitted to the core switch. The shortest path algorithm is used for routing in the core switch to minimize the network load. After receiving the packets from the source, the core node transmits packets to the intended aggregate switch based on the routing table defined in the core switch. The flow chart in Figure 6 shows how traffic travels in the DC network simulator that we developed.

Packets are arranged in queues when they arrive at the aggregate switch. The aggregate switch takes the routing information from the core switch and then forwards the packets to the ToR switch according to the queue. The ToR switch checks the packet's internal information and forwards the packet to the rack if the requested data is found on a server there; if not, the packet travels back from the ToR to the aggregate switch, which sends it on to another aggregate switch toward the intended server, as shown in Figure 7. Compared with the hybrid architecture, the maximum traffic in this type of DC architecture is between the ToRs and the aggregate switches. The simulation ends when either the input simulation time is completed or the specific event has occurred the requested number of times.

6.2. Simulation in c-Through. When the simulation starts, the source generates packets, and these packets are transmitted to the core switch. The shortest path algorithm is used for routing in the core switch to minimize the network load. The flow chart in Figure 8 shows how traffic travels in the hybrid DC network simulator that we developed.

After receiving the packets from the source, the core node transmits packets to the intended aggregate switch based on the routing table defined in the core switch. Packets are arranged in queues when they arrive at the aggregate switch. The aggregate switch takes the routing information from the core switch and then forwards the packets to the ToR switch according to the queue. The ToR switch checks the packet's internal information and forwards the packet to the rack if the information the packet is looking for is found on a server there; if not, an EOE (electrical-optical-electrical) conversion occurs: the electrical packet is converted into an optical one and forwarded to the optical circuit switch, after which it is converted back from optical to electrical and transmitted to an alternative ToR, as shown in Figure 9. The simulation ends when either the input simulation time completes or the specific event occurs.
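The two fallback paths described in Sections 6.1 and 6.2 can be contrasted in a few lines. This is an illustrative sketch of the decision only, not the simulator's routing code (hop names are ours):

```python
# When the requested data is not in the local rack, basic tree bounces
# the packet back through the aggregate layer, while c-Through performs
# an EOE conversion and crosses the optical circuit switch.

def alternative_path(src_tor, dst_tor, hybrid):
    if hybrid:
        return [src_tor, "EOE-conversion", "optical-circuit-switch",
                "OEO-conversion", dst_tor]
    return [src_tor, "src-aggregate", "dst-aggregate", dst_tor]

print(alternative_path("ToR1", "ToR4", hybrid=False))
print(alternative_path("ToR1", "ToR4", hybrid=True))
```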

6.3. Simulation Parameters. At the start of the simulation, the source generates 2000 packets, a number that doubles with each successive simulation run, and sends them to the core switch. The core switch distributes the packets uniformly and transmits them to the aggregate switches through 10 GbE links. Packets travel from the aggregate switch to the ToR switch through a 10 GbE link, and racks are connected to ToRs through 10 GbE links. In c-Through DC architecture, an optical link of 10 Gbps is used between two ToRs through the OCS, while in basic tree architecture the link between two aggregate switches is 40 GbE. The mean packet size in our simulation is 0.001953125, and at each switch or node the maximum queue length is 1000 packets.
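The parameters above can be collected into a small configuration sketch (the key names are ours; the values restate this section):

```python
# Simulation parameters from Section 6.3, as an illustrative config.

params = {
    "initial_packets": 2000,        # doubled on each successive run
    "core_agg_link_gbe": 10,
    "agg_tor_link_gbe": 10,
    "tor_rack_link_gbe": 10,
    "agg_agg_link_gbe": 40,         # basic tree only
    "tor_ocs_link_gbps": 10,        # c-Through only
    "mean_packet_size": 0.001953125,
    "max_queue_length": 1000,       # per switch/node
}

# Packet counts for the first five runs, doubling each time:
runs = [params["initial_packets"] * 2 ** k for k in range(5)]
print(runs)  # [2000, 4000, 8000, 16000, 32000]
```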

7. Results and Discussion

In the following graphs, mean waiting time-down refers to traffic moving downward from the core to the aggregate layer and from the ToR to the rack or OCS, whereas load-down represents the load of downward traffic on a link. Each graph shows the behavior of downward-moving traffic against the load.

The graph shown in Figure 10 represents the behavior of downward traffic at the aggregate layer. In the graph, the dotted lines of the tree-based architecture show that the mean waiting time-down and the load-down increase at the aggregate layer as compared with the c-Through

Figure 6: Flow chart of basic tree DCN simulation (start → source with packet generation → core switch with routing table → aggregate switch → ToR switch → rack, ending in sink or drop-sink).

Table 1: Comparison between open-source cloud simulators.

Parameters                       | CloudSim | GreenCloud      | EMUSIM          | iCanCloud        | IBKSim
GUI support                      | NA       | Limited support | Limited support | Full GUI support | Limited support
Language                         | Java     | C++/OTcl        | Java            | OMNeT++/MPL/C++  | C++/XML
Platform                         | GridSim  | NS2             | AEF/CloudSim    | SIMCAN           | NA
Power efficiency                 | Limited  | Yes             | Yes             | Limited          | Limited
Simulation time                  | Seconds  | Tens of minutes | Tens of minutes | Tens of minutes  | Depends on network size
Multi-DC comparisons             | No       | No              | No              | No               | Available
Support for parallel experiments | No       | No              | No              | Available        | Available


hybrid architecture, because in tree architecture the packets arrive at the aggregate switch from the core and from other aggregate switches. Packets are arranged in queues; the more packets arrive, the longer earlier packets must wait in the queue, which increases the mean waiting time of the downward traffic. Mean waiting time-down increases more rapidly in tree architecture than in hybrid architecture; the hybrid curve rises more gradually and shows more consistent behavior.

On the other hand, the graph in Figure 11 shows the behavior of downward traffic at the ToR layer. The dotted lines of the tree architecture show that the mean waiting time-down and the load-down increase at a much higher rate at the ToR layer compared with the c-Through hybrid architecture and behave inconsistently. This is because transmitting packets from the aggregate switch, on top of serving the packets already in the ToR switch's queue, increases the load on the ToR switch. Moreover, in tree architecture, packet communication between ToRs goes through the aggregate switches, which also increases the load on the ToR switch, while c-Through has the OCS as its alternative route for ToR-to-ToR communication. That is also a factor in the inconsistent behavior of the tree architecture.

The graphs in Figures 12 and 13 show the behavior of upward traffic at the aggregate and ToR layers. In tree architecture, the mean waiting time-up and the load-up in

Figure 7: Simulated basic tree DC architecture (a core switch over aggregate switches, ToR 1-6, and Rack 1-6).

Figure 8: Flow chart of hybrid DCN simulation (as in Figure 6, with an optical manager alongside the routing table).


both the aggregate and ToR layers increase inconsistently: they first remain constant at both switches and then gradually increase. This gradual increase at the aggregate layer arises because the traffic from core to aggregate is much lower than from aggregate to ToR, while in the hybrid case the packet service rate is higher than in tree architecture and the core-to-aggregate traffic is greater. In Figure 13, the tree curve increases in a discrepant manner and shows high variance. In tree architecture, packet communication between ToRs passes through the aggregate switches, which increases the mean waiting time-up and load-up, whereas c-Through architecture has the OCS as its alternative route for ToR-to-ToR communication, so the mean waiting time-up of the hybrid appears as a nearly straight line in the graph. That is also a factor in the inconsistent behavior of the tree architecture.

In Figures 14 and 15, the graphs show the blocking-down against the load-down. At the ToR layer, the hybrid increases in a more consistent manner than the tree architecture, while at the aggregate layer both architectures show the same behavior and pattern.

Figure 9: Simulated hybrid DC architecture (core switch, aggregate switches, ToR 1-6, and an optical circuit switch).

Figure 10: Mean waiting time-down of aggregate layer (mean waiting time (down) versus load (down), hybrid vs. tree).

Figure 11: Mean waiting time-down of ToR layer (mean waiting time (down) versus load (down), hybrid vs. tree).


Figure 12: Mean waiting time-up of aggregate layer (mean waiting time (up) versus load (up), hybrid vs. tree).

Figure 13: Mean waiting time-up of ToR layer (mean waiting time (up) versus load (up), hybrid vs. tree).

Figure 14: Blocking-down of aggregate layer (blocking-down versus load-down, hybrid vs. tree).


8. Conclusion

For this comparison, we first developed a tool that specifically measures DC architecture parameters. We explored each switch component in the layered architectures and its characteristics. Furthermore, we analyzed the load, blocking, and mean waiting-time behavior of the aggregate and ToR switches in both architectures through their traffic behavior. In the future, other network parameters can be modeled with this tool, and the traffic and load parameters of c-Through architecture can be compared with other fixed and flexible DCN topologies. This research tool will support further comparisons with other DC architectures.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors are grateful to the Deanship of Scientific Research, King Saud University, for funding this research work. The authors are also grateful to Taif University Researchers Supporting Project number (TURSP-2020/215), Taif University, Taif, Saudi Arabia. This work was also supported by the Faculty of Computer Science and Information Technology, University of Malaya, under Postgraduate Research Grant (PG035-2016A).

References

[1] S. Basu, C. McArdle, and L. P. Barry, "Scalable OCS-based intra/inter data center network with optical ToR switches," in Proceedings of the 2016 18th International Conference on Transparent Optical Networks (ICTON), pp. 1-4, Warsaw, Poland, July 2016.

[2] B. Wang, Z. Qi, R. Ma, H. Guan, and A. V. Vasilakos, "A survey on data center networking for cloud computing," Computer Networks, vol. 91, pp. 528-547, 2015.

[3] L. Paraschis, "Advancements in data-center networking, and the importance of optical interconnections," in Proceedings of the 39th European Conference and Exhibition on Optical Communication (ECOC 2013), pp. 1-3, London, UK, September 2013.

[4] N. Farrington, G. Porter, S. Radhakrishnan et al., "Helios: a hybrid electrical/optical switch architecture for modular data centers," ACM SIGCOMM Computer Communication Review, vol. 40, pp. 339-350, 2010.

[5] S. Kumar, G. Papen, K. Schmidtke, and C. Xie, "Intra-data center interconnects, networking, and architectures," in Optical Fiber Telecommunications VII, pp. 627-672, Elsevier, Amsterdam, Netherlands, 2020.

[6] N. Panahi, D. Careglio, and J. S. Pareta, "A hybrid packet/circuit optical transport architecture for DCN," in Proceedings of the 2019 21st International Conference on Transparent Optical Networks (ICTON), pp. 1-4, Angers, France, July 2019.

[7] A. Headquarters, "Cisco data center infrastructure 2.5 design guide," Cisco Validated Design, vol. I, 2007.

[8] J. Zhang, F. R. Yu, S. Wang, T. Huang, Z. Liu, and Y. Liu, "Load balancing in data center networks: a survey," IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 2324-2352, 2018.

[9] M. Shafiee and J. Ghaderi, "A simple congestion-aware algorithm for load balancing in datacenter networks," IEEE/ACM Transactions on Networking, vol. 25, no. 6, pp. 3670-3682, 2017.

[10] D. Li, Y. Li, J. Wu, S. Su, and J. Yu, "ESM: efficient and scalable data center multicast routing," IEEE/ACM Transactions on Networking, vol. 20, pp. 944-955, 2011.

[11] C. F. Lam, "Optical network technologies for datacenter networks," in Proceedings of the National Fiber Optic Engineers Conference, Los Angeles, CA, USA, March 2010.

[12] C. Kachris, K. Kanonakis, and I. Tomkos, "Optical interconnection networks in data centers: recent trends and future challenges," IEEE Communications Magazine, vol. 51, no. 9, pp. 39-45, 2013.

[13] Cisco, Classic Network Design Using Cisco Nexus 9000 Series Switches, Cisco, San Francisco, CA, USA, 2015.

[14] K. Bilal, S. U. R. Malik, O. Khalid et al., "A taxonomy and survey on green data center networks," Future Generation Computer Systems, vol. 36, pp. 189-208, 2014.

[15] D. Belabed, S. Secci, G. Pujolle, and D. Medhi, "Striking a balance between traffic engineering and energy efficiency in virtual machine placement," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 202-216, 2015.

[16] H. Goudarzi and M. Pedram, "Energy-efficient virtual machine replication and placement in a cloud computing system," in Proceedings of the 2012 IEEE Fifth International Conference on Cloud Computing, pp. 750-757, Honolulu, HI, USA, June 2012.

[17] K. Chen, A. Singla, A. Singh et al., "OSA: an optical switching architecture for data center networks with unprecedented flexibility," IEEE/ACM Transactions on Networking, vol. 22, pp. 498-511, 2013.

[18] G. Wang, D. G. Andersen, M. Kaminsky et al., "c-Through: part-time optics in data centers," in Proceedings of the ACM SIGCOMM 2010 Conference, pp. 327-338, New Delhi, India, September 2010.

Figure 15: Blocking-down of ToR layer (blocking (down) versus load (down), hybrid vs. tree).


[19] C. Kachris and I. Tomkos, "A survey on optical interconnects for data centers," IEEE Communications Surveys & Tutorials, vol. 14, no. 4, pp. 1021-1036, 2012.

[20] D. Wu, Designing Hybrid Data Center Networks for High Performance and Fault Tolerance, Rice University, Houston, TX, USA, 2020.

[21] Y. Liu, J. K. Muppala, M. Veeraraghavan, D. Lin, and M. Hamdi, Data Center Networks: Topologies, Architectures and Fault-Tolerance Characteristics, Springer Science & Business Media, Berlin, Germany, 2013.

[22] Y. Qin, D. Guo, G. Tang, and B. Ren, "TIO: a VLC-enabled hybrid data center network architecture," Tsinghua Science and Technology, vol. 24, no. 4, pp. 484-496, 2019.

[23] M. Noormohammadpour and C. S. Raghavendra, "Datacenter traffic control: understanding techniques and tradeoffs," IEEE Communications Surveys & Tutorials, vol. 20, pp. 1492-1525, 2017.

[24] A. Saeed, N. Dukkipati, V. Valancius, V. The Lam, C. Contavalli, and A. Vahdat, "Carousel: scalable traffic shaping at end hosts," in Proceedings of the Conference of the ACM Special Interest Group on Data Communication, pp. 404-417, Los Angeles, CA, USA, August 2017.

[25] U. Sinha and M. Shekhar, "Comparison of various cloud simulation tools available in cloud computing," International Journal of Advanced Research in Computer and Communication Engineering, vol. 4, pp. 171-176, 2015.

[26] M. S. Mandeep and S. Bhatia, "A critical review & analysis of cloud computing simulators," IJLTET, pp. 29-36, 2016.

[27] R. Malhotra and P. Jain, "Study and comparison of various cloud simulators available in the cloud computing," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, pp. 347-350, 2013.

[28] L. Wallentin, M. Happenhofer, C. Egger, and J. Fabini, "XML meets simulation: concepts and architecture of the IBKSim network simulator," SNE, vol. 41, 2010.



layer provides an opportunity to program a switch so thatonly particular systems on the network can access the relatedor joined LANs Switch facilitates the nodes on the networkby creating separate collision domains for every singleconnected node to enhance the performance of the networkBy using load balancing data can be moved from onenetwork to another network

In the access layer ToR switches are usually used in everyrack and have 1 GigE or 10 GigE slots connections to thenetwork )ese ToR switches connect to the number ofservers that include the application servers web servers anddata storage servers as shown in Figure 1 )is ToR ar-chitecture minimizes the cabling structure )ese ToRswitches are in greater numbers as compared with theswitches used in the aggregation and the core layer A singleDC involves more than thousands of servers arranged in arack of 20ndash40 servers each [11 12] Servers within each rackare attached to the ToR switch as shown in Figure 2 whichare then further connected to the layers of clusters of

switches in DC)e ToR switches connect to the aggregationlayer switches through various high-speed connections of 10GigE that are clustered to one another in a port channel )epacket travels through the core and aggregation layer Basedon the information in the packet ToR examines and analysesthe packets and fetches the data from the required server Allthe information is encapsulated in the packet [7 13]Generally less than 25 of average network peak load is heldby DCNs and large number of links in DCN stay idle foralmost 70 of the time [14]

)e important issue in DC traffic management is loadbalancing intended as reducing packet delay in internetworkand intranetwork moreover it is also able to redirect theflows by assessing energy-efficient targets )e primaryfactor of energy consumption in DCs is servers and hugenumber of running servers results in higher power con-sumption)e longer the traffic stays in the DCN it not onlyaffects the energy consumption but also affect packets delayHence the optimization of DCN servers utilization results in

`

Campus core

Core

Aggregation 2Aggregation 3

Aggregation 4

DC core

DCaggregation

DCaccess

L2L3

Layer 2 access with clustering andNC teaming

Blade chases with passthru models

Mainframewith OSA

Blade chases withintegrated switch

Layer 3 access with smallbroadcast domain and

isolated servers10Gb EthernetGigabit Ethernet or EtherchannelBackup

Figure 1 DCN layered architecture connected to different servers through ToR

Complexity 3

more efficient energy savings Currently a new hot researcharea in DCs is virtual machines (VMs) placement )roughappropriately placing VMs wrt the network and applica-tions resource requirements the network manager is able tosave the superfluous resources for the other services andmake the DCN much energy-efficient [15] In this paper weare not considering the whole network and placing the VMsWe are just modeling the traffic within the DC network[8 16] VMs placement and task distribution on differentVMs are not considered because we only consider the trafficwithin the DCN (inter and intra DC network traffic)

3 Optical and Hybrid Data Center Architecture

31 OSA )e first design that uses optical switchingtechnology in DCN is named OSA as shown in Figure 3OSA DC architecture is a ldquopurerdquo optical switching networkwhich means that this architecture left the idea of switchesbased on electrical core and uses the idea of optical switchesto build the switching cores [17]

While the ToR switches are electrical so far so totransform and operate into optical-based and electrical-basedsignals among different servers and switching cores In OSAarchitecture the switching core has several connections oneach ToR switch )e arrangement of connections is con-siderably flexible depending on the traffic demand

Meanwhile the architecture remained unable to providethe direct optical links between each rack pair according tothe traffic demands the design built by the controllingsystem creates a connected graph topology and the ToRswitches are liable to deliver the traffic among other ToRswitches )erefore in OSA architecture several opticallinks are provided to each rack [4]

32 c-5rough Wang et al proposed the first hybrid elec-trical-optical network model data center network namedc-)rough It is a three-level multirooted hybrid electricaland optical network model of DCN as shown in Figure 4 asan advancement to the current DCNs c-)rough is alsoknown as HyPaC (hybrid packet and circuit) DCN Inc-)rough architecture ToR switches are linked to bothnetworks made from electrical packet-based networks(Ethernet) and optical circuit-based networks [18]

)is shows that it has two major parts first a tree-likeelectrical network that keeps the connectivity among everypair of switches of ToR and the second is a reconfigurableoptical-circuit network that provides maximum bandwidthamong various rack of servers )e optical switch is con-figured so that each pair of the rack with a high bandwidthrequest is attached with this optical switch Due to the highcost of optical networks and promising maximum band-width between the optical links there is no need to set theoptical links to every single pair of racks increasing the costof the network [19] In c-)rough network the optical partrsquosconfiguration depends upon the traffic among the differenttypes of racks of servers c-)rough architecture evaluatesthe rack-to-rack traffic demand by detecting the socketbufferrsquos habitation Each rack has a single optical link so thewhole architecture is reconfigurable at any time

A traffic monitoring system is required that is positionedin hosts thus it calculates the bandwidth demand with theother hosts in the network )e optical configurationmanager accumulates these measurements plus understandsthe optical switch configuration that depends on the trafficrequirements )e nature of traffic requirements and theattached links are all expressed as maximum weight-perfectmatching problems )at is why in the c-)rough network

Aggregation switch

Networkrack

Rack mountedserver

ToRswitch

Figure 2 ToR switch connection with each server in the rack

4 Complexity

the Edmonds algorithm is used as the key of the maximumweight-perfect matching problem [4] After the opticalswitchesrsquo configuration the optical manager notifies the ToRswitches to route the packet based on information Traffic inToR switches is demultiplexed through virtual local areanetwork (VLAN)-based routing [20]

Because of the electrical-optical network model, two different types of VLAN-based routing are used: one for the packet-based network and the other for the optical circuit (OC)-based network [18].
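As an illustration of that matching step, the sketch below pairs racks by traffic demand. The paper names Edmonds' (blossom) algorithm; this stand-in uses a brute-force search that is only feasible for a handful of racks, and the rack names and demand values are invented:

```python
def best_matching(racks, demand):
    """Pick disjoint rack pairs maximizing total traffic demand.
    Brute force stands in for Edmonds' maximum weight matching here;
    demand maps ordered pairs (a, b), with a < b, to a demand estimate."""
    best_pairs, best_weight = [], 0

    def search(remaining, chosen, weight):
        nonlocal best_pairs, best_weight
        if weight > best_weight:
            best_pairs, best_weight = list(chosen), weight
        if len(remaining) < 2:
            return
        a = remaining[0]
        # Either leave rack a unmatched in this configuration...
        search(remaining[1:], chosen, weight)
        # ...or connect it by an optical circuit to some other rack b.
        for b in remaining[1:]:
            rest = [r for r in remaining[1:] if r != b]
            search(rest, chosen + [(a, b)], weight + demand.get((a, b), 0))

    search(sorted(racks), [], 0)
    return best_pairs, best_weight

demand = {("r1", "r2"): 90, ("r1", "r3"): 10, ("r2", "r3"): 20, ("r3", "r4"): 80}
pairs, total = best_matching({"r1", "r2", "r3", "r4"}, demand)
```

With these demands the matching picks the circuits r1–r2 and r3–r4 (total weight 170), which is exactly the one-optical-link-per-rack constraint the text describes.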

3.3. Helios. In 2010, Farrington et al. from UCSD proposed an electrical-optical switched network model for modular DCNs [4], known as Helios. Helios is another hybrid network based on the electrical-optical network model, like the c-Through architecture; the main difference is that it is based on wavelength division multiplexing (WDM) links [4], as shown in Figure 5.

It is a two-level multirooted hybrid electrical and optical DCN model. This hybrid architecture consists of core switches and ToR switches, the latter known as pod switches. The pod (ToR) switches are electrical packet-based switches, whereas the core switches are either electrical packet-based switches or optical circuit (OC)-based switches [19, 21]. Thus, the electrical packet-based switches take part in the communication of all ToR switches.

In contrast, OC-based switches are employed for large, time-varying bandwidth communication among ToR switches. The Helios architecture uses electrical packet-based switching to carry the network's bursty traffic, while the OC-based switched network serves slow-changing traffic with high bandwidth. Each pod (ToR) switch contains both colorless optical transceivers and WDM-based optical transceivers. The colorless optical transceivers link the pod switches to the core electrical packet-based switches; in contrast, an optical multiplexer combines the WDM optical transceivers to form the superlinks attached to the OC switches.

The main disadvantage of this approach is that it relies on micro-electro-mechanical systems (MEMS) switches; the circuit switch therefore needs several milliseconds for any reconfiguration (the reconfiguration time of the Glimmerglass switch was 25 ms). Hence, this approach is best suited to applications in which connections between nodes last more than a few seconds, so that the reconfiguration overhead is amortized [19]. Half of the uplinks are attached to electrical switches, and the remaining half are attached to the optical switches through the optical multiplexer. These superlinks can transmit w × 10 Gigabit per second (w = total number of wavelengths, from 1 to 32) [19].

The major advantage of the Helios architecture is that it is built on easily accessible optical transceivers and modules commonly used in optical telecommunication

Figure 4: c-Through architecture.

Figure 3: OSA architecture.


networks [19]. The OC switch consists of Glimmerglass-based switches, and the ToR switches use WDM SFP+ optical transceivers. These WDM transceivers are either dense WDM modules, which cost more and support a greater range of wavelengths, or coarse WDM modules, which cost less, support fewer wavelengths, and use broad channel spacing (20 nm channels across 1270 nm to 1630 nm, C-band). Coarse WDM (CWDM) lasers do not demand costly temperature stabilization. A Glimmerglass crossbar OC switch, which is readily available and supports up to 64 ports, is used for the performance measurements. The authors in [22] introduced a new hybrid DCN model named TIO that rejects the previous optical/electrical switching scheme between racks and introduces a novel technique, visible light communication (VLC), between racks. It combines the wireless VLC-based Jellyfish architecture with the wired EPS-based Fat Tree.

4. Traffic Modeling

Simulations are performed to determine the traffic behavior and patterns between the aggregate switches and the ToR switches across the connecting links. After performing several simulations, we obtain readings of the packet arrival rate at the aggregate switches and measure the ToR switches' performance on specific parameters, i.e., packet size, link capacity, and packet generation rate. Ingress packets are sent from the aggregate switch to the ToR switch; after their request is fulfilled at the server, they return to the aggregate switch through the ToR switch. Based on the DC parameters, we analyze the traffic behavior at the aggregate and ToR switches in terms of mean waiting time-up, mean waiting time-down, blocking-up, and blocking-down [20]. In a DCN, traffic burstiness has two different aspects: the traffic within a flow can be extremely bursty, or the

flow arrivals themselves can be bursty. The intensity of traffic burstiness may change according to the point where the traffic is measured, that is, at end-point ports or at ToR switches. Because a variety of applications share the DC infrastructure, a mix of numerous flow types and sizes is produced. Flows may or may not have deadline constraints and may be short and latency-sensitive or large and throughput-oriented. For some applications, flow sizes are unknown. A brief summary of DC traffic control challenges is given in [23, 24].

4.1. Load. The packets arriving from the aggregate switch at the ToR switch travel through a 10 GbE link. This arrival of packets between the switches through a link determines the load on the network and its performance. In our basic tree model, the alternative path between ToRs is through the aggregate switch; in c-Through, on the other hand, the alternative path is an optical link through the OCS manager. We calculate the load ρ by applying the formula

ρ = (E[Lp] / C) × (Na / ttr).   (1)

Here, E[Lp] is the average packet size, C is the link capacity, Na is the total number of packets arrived, and ttr is the total transmission time. Correspondingly, the load of the downward traffic, denoted ρd, is calculated as follows:

ρd = (E[Lpd] / C) × (Nad / ttr).   (2)

Here, E[Lpd] is the average downward packet size, Nad is the total number of downward packets arrived, and ttr is the total transmission time.

Likewise, the load-up ρu is calculated from the packets of the upward traffic:

ρu = (E[Lpu] / C) × (Nau / ttr).   (3)

Here, E[Lpu] is the average upward packet size, and Nau is the total number of upward packets arrived.
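Equations (1)–(3) share one form, sketched below; the units are an assumption (packet size in bits, capacity in bits per second), since the paper does not state them:

```python
def load(mean_packet_size, link_capacity, packets_arrived, transmission_time):
    """rho = (E[Lp] / C) * (Na / ttr): the mean per-packet service time
    multiplied by the packet arrival rate. The same formula yields the
    downward (rho_d) and upward (rho_u) loads from per-direction counts."""
    return (mean_packet_size / link_capacity) * (packets_arrived / transmission_time)

# e.g., 10,000-bit packets on a 10 Gbps link, 500,000 arrivals over 1 s
rho = load(10_000, 10e9, 500_000, 1.0)  # 0.5
```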

4.2. Mean Waiting Time. Based on the load and the average packet size E[Lp], we calculate the mean waiting time of both the ingress and the egress traffic.
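The paper does not state which queueing formula the simulator implements; as one illustration only, an M/M/1 approximation expresses the mean waiting time directly in terms of the load and E[Lp]:

```python
def mean_waiting_time(rho, mean_packet_size, link_capacity):
    """M/M/1 mean time in queue, Wq = rho / (mu * (1 - rho)), with
    service rate mu = C / E[Lp] packets per second. This queueing model
    is an assumption for illustration, not the simulator's documented one."""
    if rho >= 1.0:
        raise ValueError("queue is unstable for rho >= 1")
    mu = link_capacity / mean_packet_size  # packets served per second
    return rho / (mu * (1.0 - rho))
```

Under this assumed model, the waiting time grows without bound as ρ approaches 1, consistent with the qualitative rise of the curves in Section 7.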

4.3. Blocking. Blocking is defined as the number of packets, Nd, dropped on a particular link during the transmission of packets. We calculate the blocking probability Pb at a particular link by the following formula:

Pb = Nd / Na.   (4)

Here, Na is the total number of packets arrived, and Nd is the number of packets dropped on a particular link. The blocking of the upward traffic is calculated from the number of packets dropped on the upward traffic, Ndu, as follows:

Figure 5: Helios architecture.


Pbu = Ndu / Nau.   (5)

Here, Ndu is the total number of packets dropped on the upward traffic, and Nau represents the total number of upward packets arrived. Likewise, the blocking of the downward traffic is calculated from the packets dropped on the downward traffic, Ndd, as follows:

Pbd = Ndd / Nad.   (6)

Here, Ndd is the total number of packets dropped on the downward traffic, and Nad represents the total number of downward packets arrived.
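Equations (4)–(6) are the same ratio applied per direction; a minimal sketch:

```python
def blocking_probability(dropped, arrived):
    """Pb = Nd / Na, taken as 0.0 when no packets have arrived."""
    return dropped / arrived if arrived else 0.0

# per direction, e.g., upward blocking Pbu = Ndu / Nau
pbu = blocking_probability(25, 10_000)  # 0.0025
```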

5. Simulation Structure

There are several open-source simulators for analyzing the performance and characteristics of DCs, some of which also provide extra features for cloud computing and VM migration. Some freely downloadable open-source simulators are discussed here. Table 1 shows the comparison between the open-source cloud simulators.

5.1. CloudSim. The idea of cloud computing introduces a completely new mode of computing technology with promising assurances of sustainable, secure, fault-tolerant, and reliable services, offered as software and infrastructure as a service (SaaS and IaaS) [25]. To evaluate these factors, a Java-based simulation model called CloudSim was developed at the CLOUDS Laboratory, University of Melbourne, Australia. It is a well-known simulator for analyzing large-scale cloud DC parameters. Researchers use this simulation framework to analyze cloud infrastructure and explore application services' performance in a controlled environment. It provides basic classes for describing users, VMs, computational resources, DCs, and many other user- and management-level infrastructure controls. The key drawback of the CloudSim simulator is the lack of a graphical user interface (GUI) [26].

5.2. GreenCloud. GreenCloud is another open-source simulator, developed as an extension of the network simulator NS2 and based on CloudSim. This open-source simulator allows researchers and users to interact with, watch, and control the performance of DCs and clouds. Developed as part of the green cloud computing movement, this simulation tool provides an option to examine the energy efficiency of cloud DCs. It also provides information about the energy consumption of the DC equipment [26]. This simulation framework distinguishes the energy consumption related to the infrastructure from the computing energy of the DC [25]. The GreenCloud simulator is also used to build up new scheduling and workload scenarios, allocate resources, and optimize network infrastructure and communication protocols. It focuses mainly on the energy consumption issues of the DCN. The major limitation of GreenCloud is that it works only with small DCs.

5.3. EMUSIM. This tool has an integrated architecture of emulation and simulation. The framework is built on AEF (Automated Emulation Framework) technology for emulation and uses the CloudSim architecture for simulation, so it contains both emulation and simulation techniques in a single package. The tool is used to design an environment close to real computing patterns and resources. The EMUSIM simulator does not need lower-level details, such as the location of VMs and the total number of VMs per host at a given time. The drawback of EMUSIM is that it is not scalable, owing to hardware restrictions and the difficulty of reproducing real large networks.

5.4. iCanCloud. The iCanCloud simulator is mainly used to determine the relationship between the cost and the throughput of a given set of applications on a specific hardware configuration. This simulator is built on the SIMCAN simulator [27], a simulation tool used to investigate high-performance I/O architectures. This simulator gives information about specific parameters of the DCN [26].

5.5. IBKSim. For the simulation modeling of the two different DC architectures, a simulator is designed to analyze the DCN parameters: bandwidth, latency, and throughput. IBNSim is a network simulator developed in 2004 using an object-oriented design model. Over time, various new protocols were added to this simulator for advanced research in the networking field. IBKSim, the current edition of the simulator, is detailed in [28]. This simulator can simulate small queuing models and judge the performance of single and large network nodes. It is a flexible simulator and uses object-oriented design methodology, developed in C++.

6. Simulation Setup

After developing our simulator, two different layered DC architectures were built: the first is Ethernet-based, called the basic tree DC architecture, and the other is a hybrid architecture, called the c-Through architecture. Both architectures' designs are almost the same: one core switch is connected to three aggregate switches through 10 GbE links; each aggregate switch is attached to two ToR switches with 10 GbE links; and each ToR is connected to exactly one rack. The architectures differ in two ways: in the basic tree architecture, there is a 40 GbE link between the aggregate switches, which is not present in the c-Through architecture; and in the c-Through architecture, the ToRs are connected to an optical circuit switch (OCS) with a 10 Gbps link, which makes it hybrid.
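The two simulated topologies can be summarized as an adjacency sketch. The switch names are invented, and since the paper does not say which aggregate pairs the 40 GbE links join, a chain between the three aggregates is assumed:

```python
def build_topology(hybrid):
    """Return {(node_a, node_b): link rate in Gbps} for the simulated
    basic tree (hybrid=False) or c-Through (hybrid=True) network."""
    links = {}

    def link(a, b, gbps):
        links[(a, b)] = links[(b, a)] = gbps

    aggs = ["agg1", "agg2", "agg3"]
    tors = ["tor1", "tor2", "tor3", "tor4", "tor5", "tor6"]
    for agg in aggs:
        link("core", agg, 10)              # one core switch, three aggregates
    for i, tor in enumerate(tors):
        link(aggs[i // 2], tor, 10)        # two ToRs per aggregate switch
        link(tor, "rack%d" % (i + 1), 10)  # exactly one rack per ToR
    if hybrid:
        for tor in tors:                   # c-Through: every ToR reaches the OCS
            link(tor, "ocs", 10)
    else:                                  # basic tree: 40 GbE between aggregates
        link("agg1", "agg2", 40)
        link("agg2", "agg3", 40)
    return links
```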

6.1. Simulation in Basic Tree. When the simulation starts, the source generates packets, and these packets are transmitted to the core switch. The shortest-path algorithm is used for routing in the core switch to minimize the network load. After receiving the packets from the source, the core node transmits packets to the intended aggregate switch based on


the routing table defined in the core switch. The flow chart in Figure 6 shows how traffic travels in the DC network simulator that we developed.

Packets are arranged in queues when they arrive at the aggregate switch. The aggregate switch takes the routing information from the core switch and then forwards the packets to the ToR switch according to the queue. The ToR switch checks the packet's internal information: the packet is sent on to the rack if what it is looking for is found in the server; if not, the packet travels back from the ToR to the aggregate switch, which sends it to another aggregate switch for the intended server, as shown in Figure 7. The maximum traffic in this type of DC architecture, between the ToRs and the aggregate switches, is compared with the hybrid architecture. The simulation ends when either the input simulation time is completed or the specific event has occurred a given number of times.

6.2. Simulation in c-Through. When the simulation starts, the source generates packets, and these packets are transmitted to the core switch. The shortest-path algorithm is used for routing in the core switch to minimize the network load. The flow chart in Figure 8 shows how traffic travels in the DC network simulator that we developed.

After receiving the packets from the source, the core node transmits packets to the intended aggregate switch based on the routing table defined in the core switch. Packets are arranged in queues when they arrive at the aggregate switch. The aggregate switch takes the routing information from the core switch and then forwards the packets to the ToR switch according to the queue. The ToR switch checks the packet's internal information: the packet is forwarded to the rack if what it is looking for is found in the server. If not, the packet undergoes EOE (electrical-optical-electrical) conversion: the electrical packet is converted into an optical one and forwarded to the optical circuit switch. After that, the optical packet is converted back from optical to electrical and transmitted to another, alternative ToR, as shown in Figure 9. The simulation ends when either the input simulation time completes or the specific event occurs.
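The difference between the two flows at the ToR switch reduces to a single branch; the function and return labels below are hypothetical names for illustration, not identifiers from the simulator:

```python
def next_hop_from_tor(dest_in_local_rack, hybrid):
    """Forwarding choice at a ToR switch after the server lookup: deliver
    into the rack on a hit; on a miss, bounce back via the aggregate layer
    (basic tree) or perform an EOE conversion toward the optical circuit
    switch (c-Through)."""
    if dest_in_local_rack:
        return "rack"
    return "optical circuit switch" if hybrid else "aggregate switch"
```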

6.3. Simulation Parameters. At the start of the simulation, the source generates 2000 packets, increasing by a factor of two per simulation, and sends them to the core switch. The core switch uniformly distributes the packets and transmits them to the aggregate switches through 10 GbE links. Packets travel from aggregate switch to ToR switch through a 10 GbE link, and racks are connected to ToRs through 10 GbE links. In the c-Through DC architecture, an optical link of 10 Gbps is used between two ToRs through the OCS, while in the basic tree architecture the link between two aggregate switches is 40 GbE. The mean packet size in our simulation is 0.001953125, and at each switch or node the maximum packet limit of the queue is 1000.
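The 1000-packet queue limit is what produces the drop counts Nd used by the blocking metric of Section 4.3. A minimal drop-tail queue sketch (class and method names are invented):

```python
from collections import deque

class SwitchQueue:
    """Drop-tail queue: arrivals beyond the limit are dropped and counted,
    so blocking can be reported as dropped / arrived (Pb = Nd / Na)."""

    def __init__(self, limit=1000):   # per-node queue limit from Section 6.3
        self.packets = deque()
        self.limit = limit
        self.arrived = 0
        self.dropped = 0

    def enqueue(self, packet):
        self.arrived += 1
        if len(self.packets) >= self.limit:
            self.dropped += 1         # blocked: the queue is full
            return False
        self.packets.append(packet)
        return True

    def blocking(self):
        return self.dropped / self.arrived if self.arrived else 0.0
```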

7. Results and Discussion

In the following graphs, mean waiting time-down refers to traffic moving downward, from the core to the aggregate layer and from the ToR to the rack or OCS, whereas load-down represents the load of the downward traffic on a link. Each graph shows the behavior of the downward-moving traffic against the load.

The graph shown in Figure 10 represents the behavior of downward traffic at the aggregate layer. In the graph, the dotted lines of the tree-based architecture show that the mean waiting time-down and the load-down increase at the aggregate layer compared with the c-Through

Figure 6: Flow chart of basic tree DCN simulation.

Table 1: Comparison between open-source cloud simulators.

Parameters                       | CloudSim | GreenCloud      | EMUSIM          | iCanCloud         | IBKSim
GUI support                      | N/A      | Limited support | Limited support | Full GUI support  | Limited support
Language                         | Java     | C++, OTcl       | Java            | OMNeT, MPL, C++   | C++, XML
Platform                         | GridSim  | NS2             | AEF, CloudSim   | SIMCAN            | N/A
Power efficiency                 | Limited  | Yes             | Yes             | Limited           | Limited
Simulation time                  | Seconds  | Tens of minutes | Tens of minutes | Tens of minutes   | Depends upon network size
Multi-DC comparisons             | No       | No              | No              | No                | Available
Support for parallel experiments | No       | No              | No              | Available         | Available


hybrid architecture. This is because, in the tree architecture, packets arrive at the aggregate switch from the core and from other aggregate switches. Packets are arranged in queues: the more packets arrive, the longer earlier packets wait in the queue, which increases the mean waiting time of the downward traffic. Mean waiting time-down increases more rapidly in the tree architecture than in the hybrid architecture; the hybrid architecture rises more gradually and shows more consistent behavior.

On the other hand, the graph in Figure 11 shows the behavior of downward traffic at the ToR layer. The dotted lines of the tree architecture show that the mean waiting time-down and the load-down increase at a much higher rate

at the ToR layer compared with the c-Through hybrid architecture and behave inconsistently. This is because the transmission of packets from the aggregate switch, on top of serving the packets already in the ToR switch's queue, increases the load on the ToR switch. Moreover, in the tree architecture, packet communication between ToRs passes through the aggregate switches, which also increases the load on the ToR switch, whereas c-Through has the OCS as an alternative route for ToR-to-ToR communication. That is another factor in the inconsistent behavior of the tree architecture.

The graphs in Figures 12 and 13 show the behavior of upward traffic at the aggregate and ToR layers. In the tree architecture, the mean waiting time-up and the load-up at

Figure 7: Simulated basic tree DC architecture.

Figure 8: Flow chart of hybrid DCN simulation.


both the aggregate and ToR layers increase in an irregular manner: at the start they remain constant at both switches and then gradually increase. This gradual increase

at the aggregate layer arises because the traffic from core to aggregate is much less than from aggregate to ToR, while in the hybrid case the packet service rate is greater than in the tree architecture and the core-to-aggregate traffic is greater than in the tree architecture. In Figure 13, the graph increases in a discrepant manner and shows varying behavior. In the tree architecture, packet communication between ToRs passes through the aggregate switches, so ToR-to-ToR communication through the aggregate switches increases the mean waiting time-up and the load-up, whereas the c-Through architecture uses the OCS as its alternative route for ToR-to-ToR communication, so the hybrid's mean waiting time-up appears as a straight line on the graph. That is another factor in the inconsistent behavior of the tree architecture.

In Figures 14 and 15, the graphs show the blocking-down against the load-down. At the ToR layer, the hybrid architecture increases in a more consistent manner than the tree architecture, while at the aggregate layer both architectures show the same behavior and pattern.

Figure 9: Simulated hybrid DC architecture.

Figure 10: Mean waiting time-down of aggregate layer.

Figure 11: Mean waiting time-down of ToR layer.


Figure 12: Mean waiting time-up of aggregate layer.

Figure 13: Mean waiting time-up of ToR layer.

Figure 14: Blocking-down of aggregate layer.


8. Conclusion

For this intuitive comparison, we first developed a tool that specifically measures DC architecture parameters. We explored each switch component in the layered architecture and its characteristics. Furthermore, we analyzed the load, blocking, and mean waiting time behavior of the aggregate and ToR switches in both architectures through their traffic behavior. In the future, we can model other network parameters with this tool and compare the traffic and load parameters of the c-Through architecture with other fixed and flexible DCN topologies. The development of this research tool will help future comparisons with other DC architectures.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors are grateful to the Deanship of Scientific Research and King Saud University for funding this research work. The authors are also grateful to the Taif University Researchers Supporting Project number (TURSP-2020/215), Taif University, Taif, Saudi Arabia. This work was also supported by the Faculty of Computer Science and Information Technology, University of Malaya, under Postgraduate Research Grant (PG035-2016A).

References

[1] S. Basu, C. McArdle, and L. P. Barry, "Scalable OCS-based intra/inter data center network with optical ToR switches," in Proceedings of the 2016 18th International Conference on Transparent Optical Networks (ICTON), pp. 1–4, Warsaw, Poland, July 2016.

[2] B. Wang, Z. Qi, R. Ma, H. Guan, and A. V. Vasilakos, "A survey on data center networking for cloud computing," Computer Networks, vol. 91, pp. 528–547, 2015.

[3] L. Paraschis, "Advancements in data-center networking and the importance of optical interconnections," in Proceedings of the 39th European Conference and Exhibition on Optical Communication (ECOC 2013), pp. 1–3, London, UK, September 2013.

[4] N. Farrington, G. Porter, S. Radhakrishnan et al., "Helios: a hybrid electrical/optical switch architecture for modular data centers," ACM SIGCOMM Computer Communication Review, vol. 40, pp. 339–350, 2010.

[5] S. Kumar, G. Papen, K. Schmidtke, and C. Xie, "Intra-data center interconnects, networking, and architectures," in Optical Fiber Telecommunications VII, pp. 627–672, Elsevier, Amsterdam, Netherlands, 2020.

[6] N. Panahi, D. Careglio, and J. S. Pareta, "A hybrid packet/circuit optical transport architecture for DCN," in Proceedings of the 2019 21st International Conference on Transparent Optical Networks (ICTON), pp. 1–4, Angers, France, July 2019.

[7] A. Headquarters, "Cisco data center infrastructure 2.5 design guide," Cisco Validated Design, vol. I, 2007.

[8] J. Zhang, F. R. Yu, S. Wang, T. Huang, Z. Liu, and Y. Liu, "Load balancing in data center networks: a survey," IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 2324–2352, 2018.

[9] M. Shafiee and J. Ghaderi, "A simple congestion-aware algorithm for load balancing in datacenter networks," IEEE/ACM Transactions on Networking, vol. 25, no. 6, pp. 3670–3682, 2017.

[10] D. Li, Y. Li, J. Wu, S. Su, and J. Yu, "ESM: efficient and scalable data center multicast routing," IEEE/ACM Transactions on Networking, vol. 20, pp. 944–955, 2011.

[11] C. F. Lam, "Optical network technologies for datacenter networks," in Proceedings of the National Fiber Optic Engineers Conference, Los Angeles, CA, USA, March 2010.

[12] C. Kachris, K. Kanonakis, and I. Tomkos, "Optical interconnection networks in data centers: recent trends and future challenges," IEEE Communications Magazine, vol. 51, no. 9, pp. 39–45, 2013.

[13] Cisco, Classic Network Design Using Cisco Nexus 9000 Series Switches, Cisco, San Francisco, CA, USA, 2015.

[14] K. Bilal, S. U. R. Malik, O. Khalid et al., "A taxonomy and survey on green data center networks," Future Generation Computer Systems, vol. 36, pp. 189–208, 2014.

[15] D. Belabed, S. Secci, G. Pujolle, and D. Medhi, "Striking a balance between traffic engineering and energy efficiency in virtual machine placement," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 202–216, 2015.

[16] H. Goudarzi and M. Pedram, "Energy-efficient virtual machine replication and placement in a cloud computing system," in Proceedings of the 2012 IEEE Fifth International Conference on Cloud Computing, pp. 750–757, Honolulu, HI, USA, June 2012.

[17] K. Chen, A. Singla, A. Singh et al., "OSA: an optical switching architecture for data center networks with unprecedented flexibility," IEEE/ACM Transactions on Networking, vol. 22, pp. 498–511, 2013.

[18] G. Wang, D. G. Andersen, M. Kaminsky et al., "c-Through: part-time optics in data centers," in Proceedings of the ACM SIGCOMM 2010 Conference, pp. 327–338, New Delhi, India, September 2010.

Figure 15: Blocking-down of ToR layer.


[19] C. Kachris and I. Tomkos, "A survey on optical interconnects for data centers," IEEE Communications Surveys & Tutorials, vol. 14, no. 4, pp. 1021–1036, 2012.

[20] D. Wu, Designing Hybrid Data Center Networks for High Performance and Fault Tolerance, Rice University, Houston, TX, USA, 2020.

[21] Y. Liu, J. K. Muppala, M. Veeraraghavan, D. Lin, and M. Hamdi, Data Center Networks: Topologies, Architectures and Fault-Tolerance Characteristics, Springer Science & Business Media, Berlin, Germany, 2013.

[22] Y. Qin, D. Guo, G. Tang, and B. Ren, "TIO: a VLC-enabled hybrid data center network architecture," Tsinghua Science and Technology, vol. 24, no. 4, pp. 484–496, 2019.

[23] M. Noormohammadpour and C. S. Raghavendra, "Datacenter traffic control: understanding techniques and tradeoffs," IEEE Communications Surveys & Tutorials, vol. 20, pp. 1492–1525, 2017.

[24] A. Saeed, N. Dukkipati, V. Valancius, V. The Lam, C. Contavalli, and A. Vahdat, "Carousel: scalable traffic shaping at end hosts," in Proceedings of the Conference of the ACM Special Interest Group on Data Communication, pp. 404–417, Los Angeles, CA, USA, August 2017.

[25] U. Sinha and M. Shekhar, "Comparison of various cloud simulation tools available in cloud computing," International Journal of Advanced Research in Computer and Communication Engineering, vol. 4, pp. 171–176, 2015.

[26] M. S. Mandeep and S. Bhatia, "A critical review & analysis of cloud computing simulators," IJLTET, pp. 29–36, 2016.

[27] R. Malhotra and P. Jain, "Study and comparison of various cloud simulators available in the cloud computing," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, pp. 347–350, 2013.

[28] L. Wallentin, M. Happenhofer, C. Egger, and J. Fabini, "XML meets simulation: concepts and architecture of the IBKSim network simulator," SNE, vol. 41, 2010.


Page 4: Designing of a Simulation Tool for the Performance Analysis of … · 2021. 4. 10. · networks made from electrical packet-based networks (Ethernet)andopticalcircuit-basednetworks[18]

more efficient energy savings Currently a new hot researcharea in DCs is virtual machines (VMs) placement )roughappropriately placing VMs wrt the network and applica-tions resource requirements the network manager is able tosave the superfluous resources for the other services andmake the DCN much energy-efficient [15] In this paper weare not considering the whole network and placing the VMsWe are just modeling the traffic within the DC network[8 16] VMs placement and task distribution on differentVMs are not considered because we only consider the trafficwithin the DCN (inter and intra DC network traffic)

3 Optical and Hybrid Data Center Architecture

31 OSA )e first design that uses optical switchingtechnology in DCN is named OSA as shown in Figure 3OSA DC architecture is a ldquopurerdquo optical switching networkwhich means that this architecture left the idea of switchesbased on electrical core and uses the idea of optical switchesto build the switching cores [17]

While the ToR switches are electrical so far so totransform and operate into optical-based and electrical-basedsignals among different servers and switching cores In OSAarchitecture the switching core has several connections oneach ToR switch )e arrangement of connections is con-siderably flexible depending on the traffic demand

Meanwhile the architecture remained unable to providethe direct optical links between each rack pair according tothe traffic demands the design built by the controllingsystem creates a connected graph topology and the ToRswitches are liable to deliver the traffic among other ToRswitches )erefore in OSA architecture several opticallinks are provided to each rack [4]

32 c-5rough Wang et al proposed the first hybrid elec-trical-optical network model data center network namedc-)rough It is a three-level multirooted hybrid electricaland optical network model of DCN as shown in Figure 4 asan advancement to the current DCNs c-)rough is alsoknown as HyPaC (hybrid packet and circuit) DCN Inc-)rough architecture ToR switches are linked to bothnetworks made from electrical packet-based networks(Ethernet) and optical circuit-based networks [18]

)is shows that it has two major parts first a tree-likeelectrical network that keeps the connectivity among everypair of switches of ToR and the second is a reconfigurableoptical-circuit network that provides maximum bandwidthamong various rack of servers )e optical switch is con-figured so that each pair of the rack with a high bandwidthrequest is attached with this optical switch Due to the highcost of optical networks and promising maximum band-width between the optical links there is no need to set theoptical links to every single pair of racks increasing the costof the network [19] In c-)rough network the optical partrsquosconfiguration depends upon the traffic among the differenttypes of racks of servers c-)rough architecture evaluatesthe rack-to-rack traffic demand by detecting the socketbufferrsquos habitation Each rack has a single optical link so thewhole architecture is reconfigurable at any time

A traffic monitoring system is required that is positionedin hosts thus it calculates the bandwidth demand with theother hosts in the network )e optical configurationmanager accumulates these measurements plus understandsthe optical switch configuration that depends on the trafficrequirements )e nature of traffic requirements and theattached links are all expressed as maximum weight-perfectmatching problems )at is why in the c-)rough network

Aggregation switch

Networkrack

Rack mountedserver

ToRswitch

Figure 2 ToR switch connection with each server in the rack

4 Complexity

the Edmonds algorithm is used as the key of the maximumweight-perfect matching problem [4] After the opticalswitchesrsquo configuration the optical manager notifies the ToRswitches to route the packet based on information Traffic inToR switches is demultiplexed through virtual local areanetwork (VLAN)-based routing [20]

Two different types of VLAN-based routing are usedbecause of the electrical-optical network model one is forpacket network-based routing and the second for the opticalcircuit (OC) network-based routing [18]

3.3. Helios. In 2010, Farrington et al. from UCSD proposed an electrical-optical switched network model for modular DCN, as mentioned in [4], known as Helios. Helios is another hybrid network based on the electrical-optical network model, just like the c-Through design architecture; however, the main difference is that it is

based on wavelength division multiplexing (WDM) links [4], as shown in Figure 5.

It is a two-level multirooted hybrid electrical and optical network model of DCN. This hybrid architecture consists of core switches and ToR switches, known as Pod switches. The Pod or ToR switches are electrical packet-based switches. In contrast, the core switches are either electrical packet-based switches or OC (optical circuit)-based switches [19, 21]. Thus, the electrical packet-based switches are involved in all ToR switch communication.

In contrast, OC-based switches are employed for varying large-bandwidth communication among ToR switches. The Helios architecture uses electrical packet-based switching to carry the bursty type of traffic flow in the network, while the OC-based switched network facilitates slow-changing traffic with high bandwidth. Each pod or ToR switch contains colorless optical transceivers and WDM-based optical transceivers. The uncolored optical transceivers link the pod switches with the core electrical packet-based switches. Contrary to this, an optical multiplexer multiplexes the WDM optical transceivers, forming the super links attached to the OC switches.

The main disadvantage of this suggested approach is that it relies on micro-electro-mechanical systems (MEMS) switches; therefore, the circuit switch needs considerable time (in milliseconds) for any reconfiguration (the reconfiguration time of the Glimmerglass switch was 25 ms). Hence, this approach suits applications where connections among nodes last more than a few seconds, to amortize the reconfiguration overhead [19]. Among all the uplinks, half are attached to electrical switches and the remaining half are attached to the optical switches through the optical multiplexer. These super links are capable of transmitting w × 10 Gigabit per second (w = total number of wavelengths, ranging from 1 to 32) [19].

The Helios architecture's major advantage is that it is built on easily accessible optical transceivers and modules commonly used in optical telecommunication

Figure 4: c-Through architecture.

Figure 3: OSA architecture.


networks [19]. The OC switch consists of Glimmerglass-based switches, and the ToR switches use WDM SFP+ optical transceivers. These WDM transceivers are either dense WDM modules, which have greater cost and support greater ranges of wavelengths, or coarse WDM modules, which have lower cost, support fewer wavelengths, and use a broad channel spacing (20 nm channels across 1270 nm to 1630 nm). A coarse WDM (CWDM) laser does not demand costly temperature stabilization. A Glimmerglass crossbar OC switch, which is easily available and supports up to 64 ports, is used to measure performance. The authors in [22] introduced a new hybrid DCN model named TIO that rejects the previous optical/electrical switching scheme between racks and introduces a novel technique called visible light communication (VLC) between racks. It combines both the wireless-based VLC Jellyfish architecture and the wired-based EPS Fat Tree.

4. Traffic Modeling

Simulations are performed to determine the traffic behavior and pattern between aggregate switches and ToR switches, across the network connectivity provided by the links. After performing several simulations, we obtain the values (readings) of the packet arrival rate in the aggregate switches and measure the ToR switches' performance on specific parameters, i.e., packet size, link capacity, and packet generation rate. The ingress packets are sent from the aggregate switch to the ToR switch. After fulfilling their request from the server at a specific time, they come back to the aggregate switch through the ToR switch. Based on the DC parameters, we analyze the traffic behavior on the aggregate switches and the ToR switches as follows: mean waiting time-up, mean waiting time-down, blocking-up, and blocking-down [20]. In a DCN, burstiness of traffic has two different aspects: either the traffic per flow could be extremely bursty, or the

flow arrival itself could be bursty too. The intensity of traffic burstiness may change depending on the point where the traffic is measured, that is, at end-point ports or at ToR switches. Because a variety of applications share the DC infrastructure, a combination of numerous flow types and sizes is produced. Flows may or may not have deadline limitations and may be short and latency-sensitive or large and throughput-oriented. For some particular applications, flow sizes are unknown. A brief summary of DC traffic control challenges is given in [23, 24].

4.1. Load. The packets that arrive from the aggregate switch at the ToR switch travel through a 10 GbE link. This arrival of packets between the switches through a link decides the load on the network and its performance. In our basic tree model, the alternative path between ToR and ToR is through the aggregate switch. On the other hand, the alternative path in c-Through, through the OCS manager, is an optical link. So we calculate the load ρ by applying the formula

ρ = (E[Lp] × Na) / (C × ttr). (1)

Here, E[Lp] is the average packet size, C is the link capacity, Na is the total number of packets arrived, and ttr is the total transmission time. Likewise, the load of the downward traffic is denoted as ρd and can be calculated as follows:

ρd = (E[Lpd] × Nad) / (C × ttr). (2)

Here, E[Lpd] is the average packet size of the downward traffic, Nad is the total number of downward packets arrived, and ttr is the total transmission time.

On the other hand, the load of the packets that are considered upward traffic is calculated as load-up ρu:

ρu = (E[Lpu] × Nau) / (C × ttr). (3)

Here, E[Lpu] is the average packet size of the upward traffic, and Nau is the total number of upward packets arrived.
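Equations (1)–(3) share one shape, so a single helper suffices. A minimal sketch, with illustrative numbers only (the link capacity and counts below are assumptions, not the paper's measured readings; only the mean packet size 0.001953125 comes from Section 6.3):

```python
def load(mean_packet_size, capacity, packets_arrived, transmission_time):
    """Offered load per equations (1)-(3): rho = (E[Lp] * Na) / (C * t_tr)."""
    return (mean_packet_size * packets_arrived) / (capacity * transmission_time)

# Hypothetical run: 2000 packets of mean size 0.001953125 over a link of
# capacity 10 (units chosen for illustration) during 1 time unit.
rho = load(0.001953125, 10.0, 2000, 1.0)
print(rho)  # 0.390625
```

The same function computes ρd and ρu by substituting the downward or upward packet size and count.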

4.2. Mean Waiting Time. Based on the load and the average packet size E[Lp], we calculate the mean waiting time of both types of traffic, ingress and egress.
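The paper does not state the closed-form expression it uses here. As an illustration only, under an M/M/1 assumption (Poisson arrivals, exponential service, which may differ from the simulator's actual model), the mean queueing delay follows directly from the load ρ and the mean service time E[Lp]/C:

```python
def mm1_mean_wait(rho, mean_service_time):
    """Mean queueing delay Wq = (rho / (1 - rho)) * E[S] for an M/M/1 queue.

    Valid only for 0 <= rho < 1; an overloaded queue has unbounded delay.
    """
    if not 0 <= rho < 1:
        raise ValueError("M/M/1 waiting time is finite only for 0 <= rho < 1")
    return (rho / (1 - rho)) * mean_service_time

# Example: load 0.5 and mean service time 2 time units give Wq = 2.0.
print(mm1_mean_wait(0.5, 2.0))
```

This matches the qualitative behavior reported in Section 7: waiting time grows slowly at low load and sharply as the load approaches the link's capacity.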

4.3. Blocking. Blocking is defined as the number of packets dropped, Nd, on a particular link during the transmission of packets. So we calculate the blocking probability Pb at a particular link by the following formula:

Pb = Nd / Na. (4)

Here, Na is the total number of packets arrived, and Nd is the number of packets dropped on a particular link. The blocking probability of the upward traffic, using the number of packets dropped Ndu, is calculated as follows:

Figure 5: Helios architecture.


Pbu = Ndu / Nau. (5)

Here, Ndu is the total number of packets dropped on the upward traffic, and Nau represents the total number of upward packets arrived. On the other hand, the blocking probability of the downward traffic, using the number of packets dropped Ndd, is calculated as follows:

Pbd = Ndd / Nad. (6)

Here, Ndd is the total number of packets dropped on the downward traffic, and Nad represents the total number of downward packets arrived.
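Equations (4)–(6) are all the same ratio of counters, so one function covers the overall, upward, and downward cases. The counts below are made up for illustration:

```python
def blocking_probability(dropped, arrived):
    """Blocking per equations (4)-(6): Pb = Nd / Na."""
    if arrived == 0:
        return 0.0  # no arrivals, no blocking (a modeling choice, not the paper's)
    return dropped / arrived

# Hypothetical counters: 50 of 2000 upward packets dropped on a link.
print(blocking_probability(50, 2000))  # 0.025
```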

5. Simulation Structure

There are several open-source simulators to analyze the performance and characteristics of the DC, which also provide extra features for cloud computing and VM migration. Some freely downloadable open-source simulators are discussed here. Table 1 shows the comparison between open-source cloud simulators.

5.1. CloudSim. The idea of cloud computing introduces a completely new mode of computing technology with promising assurance to provide sustainable, secure, fault-tolerant, and reliable services, delivered as software and infrastructure as a service (SaaS and IaaS) [25]. To ensure these factors, a Java-based simulation model called CloudSim was developed at the CLOUDS Laboratory, University of Melbourne, Australia. It is a well-known simulator for analyzing large-scale cloud DC parameters. Researchers use this simulation framework to analyze cloud infrastructure and explore application services' performance in a controlled environment. It provides basic classes for describing users, VMs, computational resources, DCs, and many other user- and management-level controls of infrastructure. The key drawback of the CloudSim simulator is the lack of a graphical user interface (GUI) [26].

5.2. GreenCloud. GreenCloud is another open-source simulator, built as an extension of the network simulator NS2 and based on CloudSim. This open-source simulator allows researchers and users to interact with, observe, and control the performance of DCs and clouds. Developed during the green cloud computing movement, this simulation tool gives an option to examine the energy efficiency of cloud DCs. It also provides information about the energy consumption of DC equipment [26]. This simulation framework distinguishes the energy consumption of the infrastructure from the computing energy of the DC [25]. The GreenCloud simulator is also used to build new scheduling workload scenarios, allocate resources, and optimize network infrastructure and communication protocols. It focuses mainly on energy consumption issues of the DCN. The major limitation of GreenCloud is that it works only with small DCs.

5.3. EMUSIM. This tool has an integrated architecture of emulation and simulation. This simulator framework is built on AEF (automated emulation framework) technology for emulation and uses the CloudSim architecture for simulation. It contains both emulation and simulation techniques in a single package. This tool is used to design an environment with near-real computing patterns and resources. The EMUSIM simulator does not need lower-level details such as the location of VMs and the total VMs per host at a given time. The drawback of EMUSIM is that it is not scalable, due to hardware restrictions and issues in reproducing real large networks.

5.4. iCanCloud. The iCanCloud simulator is mainly used to determine the relation between the cost and the throughput of a given set of applications on a specific hardware configuration. This simulator is built on the SIMCAN simulator [27], a simulation tool used to investigate high-performance I/O architectures. This simulator gives information about the specific parameters of the DCN [26].

5.5. IBKSim. For the simulation modeling of two different DC architectures, a simulator is designed to analyze the DCN parameters: bandwidth, latency, and throughput. IBNSim is a network simulator developed in 2004 using an object-oriented design model. Over time, various new protocols were added to this simulator for advanced research purposes in the networking field. IBKSim is the current simulator edition for research purposes and is detailed in [28]. This simulator can simulate small queuing models and judge the performance of both single and large network nodes. It is a flexible simulator and uses the object-oriented design methodology in C++ development.

6. Simulation Setup

After developing our simulator, two different layered DC architectures were built: the first is Ethernet-based, called basic tree DC architecture, and the other is a hybrid architecture called c-Through architecture. Both architectures' designs are almost the same: one core switch is connected with three aggregate switches through 10 GbE links. Each aggregate switch is then attached to two ToR switches with 10 GbE links, and each ToR is connected with exactly one rack. One difference between the architectures is that in the basic tree architecture the link between the aggregate switches is 40 GbE, which is not present in the c-Through architecture. The other difference is that in the c-Through architecture the ToRs are connected with an optical circuit switch (OCS) through a 10 Gbps link, which makes it hybrid.

6.1. Simulation in Basic Tree. When the simulation starts, the source generates packets, and these packets are transmitted to the core switch. The shortest path algorithm is used for routing in the core switch to minimize the network load. After receiving the packets from the source, the core node transmits packets to the intended aggregate switch based on


the routing table defined in the core switch. The flow chart in Figure 6 shows how traffic travels in the DC network simulator that we developed.

Packets are arranged in queues when they arrive at the aggregate switch. The aggregate switch takes the routing information from the core switch and then forwards the packets to the ToR switch according to the queue. The ToR switch checks the packet's internal information; the packet is then sent down to the rack if what it is looking for is found in the server. If not, the packet travels back from the ToR to the aggregate switch, which sends it to another aggregate switch for the intended server, as shown in Figure 7. The maximum traffic in this type of DC architecture is between the ToRs and the aggregate switches, compared with the hybrid architecture. The simulation ends when either the input simulation time is completed or the specific event has occurred the given number of times.
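The per-switch queueing just described can be sketched as a bounded FIFO. The 1000-packet queue limit comes from Section 6.3; the arrival and service rates below are illustrative, and the real simulator draws arrivals from traffic distributions rather than fixed per-tick counts:

```python
from collections import deque

def run_switch(arrivals_per_tick, served_per_tick, ticks, queue_limit=1000):
    """Toy model of one switch queue with fixed arrivals/departures per tick.

    Returns (packets_dropped, packets_arrived), the two counters needed
    for the blocking probability of equation (4).
    """
    queue = deque()
    dropped = arrived = 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            arrived += 1
            if len(queue) >= queue_limit:
                dropped += 1          # queue full: the packet is blocked
            else:
                queue.append("pkt")
        for _ in range(min(served_per_tick, len(queue))):
            queue.popleft()           # forward to the next layer
    return dropped, arrived

# Overloaded switch: 12 arrivals vs 10 departures per tick fills the queue,
# after which the excess arrivals are dropped every tick.
d, a = run_switch(arrivals_per_tick=12, served_per_tick=10, ticks=1000)
print(d, a, d / a)
```

Once the queue saturates, the drop rate settles at the arrival excess, which is why blocking in Figures 14 and 15 rises with load.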

6.2. Simulation in c-Through. When the simulation starts, the source generates packets, and these packets are transmitted to the core switch. The shortest path algorithm is used for routing in the core switch to minimize the network load. The flow chart in Figure 8 shows how traffic travels in the DC network simulator that we developed.

After receiving the packets from the source, the core node transmits packets to the intended aggregate switch based on the routing table defined in the core switch. Packets are arranged in queues when they arrive at the aggregate switch. The aggregate switch takes the routing information from the core switch and then forwards the packets to the ToR switch according to the queue. The ToR switch checks the packet's internal information, and the packet is forwarded down to the rack if the information it is looking for is found in the server. If not, an EOE (electrical-optical-electrical) conversion occurs: the electrical packet is converted into an optical one and forwarded to the optical circuit switch. After that, the optical packet is converted back from optical to electrical and transmitted to another, alternative ToR, as shown in Figure 9. The simulation ends when either the input simulation time completes or the specific event occurs.

6.3. Simulation Parameters. At the start of the simulation, the source generates 2000 packets, increasing by a multiple of 2 per simulation run, and sends them to the core switch. The core switch uniformly distributes the packets and transmits them to the aggregate switches through a link of

10 GbE. Packets travel from the aggregate switch to the ToR switch through a 10 GbE link. Racks are connected to ToRs through 10 GbE links. In the c-Through DC architecture, an optical link of 10 Gbps is used between two ToRs through the OCS, while in the basic tree architecture the link between the two aggregate switches is 40 GbE. The mean packet size in our simulation is 0.001953125, and at each switch or node the maximum packet limit of the queue is 1000.

7. Results and Discussion

In these graphs, mean waiting time-down refers to the flow of traffic moving downward, from the core to the aggregate and from the ToR to the rack or OCS, whereas load-down represents the load of downward traffic on a link. The graphs show the behavior of downward-moving traffic against the load.

The graph shown in Figure 10 represents the behavior of downward traffic on the aggregate layer. In the graph, the dotted lines of the tree-based architecture show that the mean waiting time-down and the load-down increase on the aggregate layer as compared with the c-Through

Figure 6: Flow chart of basic tree DCN simulation.

Table 1: Comparison between open-source cloud simulators.

Parameters | CloudSim | GreenCloud | EMUSIM | iCanCloud | IBKSim
GUI support | NA | Limited support | Limited support | Full GUI support | Limited support
Language | Java | C++, OTcl | Java | OMNeT, MPL, C++ | C++, XML
Platform | GridSim | NS2 | AEF, CloudSim | SIMCAN | NA
Power efficiency | Limited | Yes | Yes | Limited | Limited
Simulation time | Seconds | Tens of minutes | Tens of minutes | Tens of minutes | Depends upon network size
Multi-DC comparisons | No | No | No | No | Available
Support for parallel experiments | No | No | No | Available | Available


hybrid architecture, because in the tree architecture packets arrive at the aggregate switch from both the core and the other aggregate switches. Packets are arranged in queues; the more packets arrive, the longer the earlier packets have to wait in the queue, which increases the mean waiting time of the downward traffic. Mean waiting time-down increases more rapidly in the tree architecture than in the hybrid architecture. The hybrid architecture increases more gradually and shows more consistent behavior.

On the other hand, the graph in Figure 11 shows the behavior of downward traffic in the ToR layer. The dotted lines of the tree architecture show that the mean waiting time-down and the load-down increase at a much higher rate

on the ToR layer as compared with the c-Through hybrid architecture, and behave inconsistently. This is because the transmission of packets from the aggregate switch, on top of serving the packets already in the ToR switch's queue, increases the load on the ToR switch. Moreover, in the tree architecture, packet communication between ToRs goes through the aggregate switches, which also increases the load on the ToR switch, while c-Through has the OCS as its alternative route for communication among the ToRs. That is also a factor in the inconsistent behavior of the tree architecture.

The graphs in Figures 12 and 13 show the behavior of upward traffic on the aggregate and ToR layers. In the tree architecture, the mean waiting time-up and the load-up in

Figure 7: Simulated basic tree DC architecture.

Figure 8: Flow chart of hybrid DCN simulation.


both the aggregate and ToR layers increase in an irregular manner. At the start, they remain constant in both switches and then gradually increase. This gradual increase

in the aggregate layer is due to the fact that the traffic between core and aggregate is much less than between aggregate and ToR, while in the case of the hybrid the packet service rate is greater than in the tree architecture and the traffic between core and aggregate is greater than in the tree architecture. In Figure 13, the graph shows an increase in a discrepant manner and varying behavior. In the tree architecture, packet communication between ToRs goes through the aggregate switches, so communication among ToRs through the aggregate switches increases the mean waiting time-up and load-up, whereas the c-Through architecture contains the OCS as its alternative route for communication among the ToRs, so the mean waiting time-up of the hybrid on the graph is a straight line. That is also a factor in the inconsistent behavior of the tree architecture.

In Figures 14 and 15, the graphs show the blocking-down and the load-down in the aggregate and ToR layers; in the ToR layer, the hybrid increases in a more consistent manner than the tree architecture, while in the aggregate layer both architectures show the same behavior and pattern.

Figure 9: Simulated hybrid DC architecture.

Figure 10: Mean waiting time-down of aggregate layer.

Figure 11: Mean waiting time-down of ToR layer.


Figure 12: Mean waiting time-up of aggregate layer.

Figure 13: Mean waiting time-up of ToR layer.

Figure 14: Blocking-down of aggregate layer.


8. Conclusion

For this intuitive comparison, we first developed a tool that specifically measures the DC architecture parameters. We explored each switch component in the layered architecture and its characteristics. Furthermore, we analyzed the load, blocking, and mean waiting time behavior of the aggregate and ToR switches in both architectures through their traffic behavior. In the future, we can perform modeling of other network parameters with this tool. We can compare the traffic and load parameters of the c-Through architecture with other fixed and flexible topologies of the DCN. The development of this research tool will help in future comparisons with other DC architectures.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors are grateful to the Deanship of Scientific Research at King Saud University for funding this research work. The authors are also grateful to the Taif University Researchers Supporting Project (number TURSP-2020215), Taif University, Taif, Saudi Arabia. This work was also supported by the Faculty of Computer Science and Information Technology, University of Malaya, under Postgraduate Research Grant PG035-2016A.

References

[1] S. Basu, C. McArdle, and L. P. Barry, "Scalable OCS-based intra/inter data center network with optical ToR switches," in Proceedings of the 2016 18th International Conference on Transparent Optical Networks (ICTON), pp. 1–4, Warsaw, Poland, July 2016.

[2] B. Wang, Z. Qi, R. Ma, H. Guan, and A. V. Vasilakos, "A survey on data center networking for cloud computing," Computer Networks, vol. 91, pp. 528–547, 2015.

[3] L. Paraschis, "Advancements in data-center networking, and the importance of optical interconnections," in Proceedings of the 39th European Conference and Exhibition on Optical Communication (ECOC 2013), pp. 1–3, London, UK, September 2013.

[4] N. Farrington, G. Porter, S. Radhakrishnan et al., "Helios: a hybrid electrical/optical switch architecture for modular data centers," ACM SIGCOMM Computer Communication Review, vol. 40, pp. 339–350, 2010.

[5] S. Kumar, G. Papen, K. Schmidtke, and C. Xie, "Intra-data center interconnects, networking, and architectures," in Optical Fiber Telecommunications VII, pp. 627–672, Elsevier, Amsterdam, Netherlands, 2020.

[6] N. Panahi, D. Careglio, and J. S. Pareta, "A hybrid packet/circuit optical transport architecture for DCN," in Proceedings of the 2019 21st International Conference on Transparent Optical Networks (ICTON), pp. 1–4, Angers, France, July 2019.

[7] A. Headquarters, "Cisco data center infrastructure 2.5 design guide," Cisco Validated Design, vol. I, 2007.

[8] J. Zhang, F. R. Yu, S. Wang, T. Huang, Z. Liu, and Y. Liu, "Load balancing in data center networks: a survey," IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 2324–2352, 2018.

[9] M. Shafiee and J. Ghaderi, "A simple congestion-aware algorithm for load balancing in datacenter networks," IEEE/ACM Transactions on Networking, vol. 25, no. 6, pp. 3670–3682, 2017.

[10] D. Li, Y. Li, J. Wu, S. Su, and J. Yu, "ESM: efficient and scalable data center multicast routing," IEEE/ACM Transactions on Networking, vol. 20, pp. 944–955, 2011.

[11] C. F. Lam, "Optical network technologies for datacenter networks," in Proceedings of the National Fiber Optic Engineers Conference, Los Angeles, CA, USA, March 2010.

[12] C. Kachris, K. Kanonakis, and I. Tomkos, "Optical interconnection networks in data centers: recent trends and future challenges," IEEE Communications Magazine, vol. 51, no. 9, pp. 39–45, 2013.

[13] Cisco, Classic Network Design Using Cisco Nexus 9000 Series Switches, Cisco, San Francisco, CA, USA, 2015.

[14] K. Bilal, S. U. R. Malik, O. Khalid et al., "A taxonomy and survey on green data center networks," Future Generation Computer Systems, vol. 36, pp. 189–208, 2014.

[15] D. Belabed, S. Secci, G. Pujolle, and D. Medhi, "Striking a balance between traffic engineering and energy efficiency in virtual machine placement," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 202–216, 2015.

[16] H. Goudarzi and M. Pedram, "Energy-efficient virtual machine replication and placement in a cloud computing system," in Proceedings of the 2012 IEEE Fifth International Conference on Cloud Computing, pp. 750–757, Honolulu, HI, USA, June 2012.

[17] K. Chen, A. Singla, A. Singh et al., "OSA: an optical switching architecture for data center networks with unprecedented flexibility," IEEE/ACM Transactions on Networking, vol. 22, pp. 498–511, 2013.

[18] G. Wang, D. G. Andersen, M. Kaminsky et al., "c-Through: part-time optics in data centers," in Proceedings of the ACM SIGCOMM 2010 Conference, pp. 327–338, New Delhi, India, September 2010.

Figure 15: Blocking-down of ToR layer.


[19] C. Kachris and I. Tomkos, "A survey on optical interconnects for data centers," IEEE Communications Surveys & Tutorials, vol. 14, no. 4, pp. 1021–1036, 2012.

[20] D. Wu, Designing Hybrid Data Center Networks for High Performance and Fault Tolerance, Rice University, Houston, TX, USA, 2020.

[21] Y. Liu, J. K. Muppala, M. Veeraraghavan, D. Lin, and M. Hamdi, Data Center Networks: Topologies, Architectures and Fault-Tolerance Characteristics, Springer Science & Business Media, Berlin, Germany, 2013.

[22] Y. Qin, D. Guo, G. Tang, and B. Ren, "TIO: a VLC-enabled hybrid data center network architecture," Tsinghua Science and Technology, vol. 24, no. 4, pp. 484–496, 2019.

[23] M. Noormohammadpour and C. S. Raghavendra, "Datacenter traffic control: understanding techniques and tradeoffs," IEEE Communications Surveys & Tutorials, vol. 20, pp. 1492–1525, 2017.

[24] A. Saeed, N. Dukkipati, V. Valancius, V. The Lam, C. Contavalli, and A. Vahdat, "Carousel: scalable traffic shaping at end hosts," in Proceedings of the Conference of the ACM Special Interest Group on Data Communication, pp. 404–417, Los Angeles, CA, USA, August 2017.

[25] U. Sinha and M. Shekhar, "Comparison of various cloud simulation tools available in cloud computing," International Journal of Advanced Research in Computer and Communication Engineering, vol. 4, pp. 171–176, 2015.

[26] M. S. Mandeep and S. Bhatia, "A critical review & analysis of cloud computing simulators," IJLTET, pp. 29–36, 2016.

[27] R. Malhotra and P. Jain, "Study and comparison of various cloud simulators available in the cloud computing," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, pp. 347–350, 2013.

[28] L. Wallentin, M. Happenhofer, C. Egger, and J. Fabini, "XML meets simulation: concepts and architecture of the IBKSim network simulator," SNE, vol. 41, 2010.

Complexity 13

Page 5: Designing of a Simulation Tool for the Performance Analysis of … · 2021. 4. 10. · networks made from electrical packet-based networks (Ethernet)andopticalcircuit-basednetworks[18]

the Edmonds algorithm is used as the key of the maximumweight-perfect matching problem [4] After the opticalswitchesrsquo configuration the optical manager notifies the ToRswitches to route the packet based on information Traffic inToR switches is demultiplexed through virtual local areanetwork (VLAN)-based routing [20]

Two different types of VLAN-based routing are usedbecause of the electrical-optical network model one is forpacket network-based routing and the second for the opticalcircuit (OC) network-based routing [18]

33 Helios In 2010 Farrington et al from UCSD proposedan electrical-optical switched network model for modularDCN as mentioned in [4] which is known as Helios Heliosis the other hybrid network based on the electrical-opticalnetwork model just like the c-)rough design architecturehowever the main difference in that architecture is that it is

based on wavelength division multiplexing (WDM) links[4] as shown in Figure 5

It is a two-level multirooted hybrid electrical and opticalnetwork model of DCN )is hybrid architecture consists ofcore switches and ToR switches known as Pod switches )ePod or ToR switches are electrical packet-based switches Incontrast on the other hand the core switches are either theelectrical packet-based switches or maybe the OC (opticalcircuit)-based switches [19 21])us the electrical packet-basedswitches are all involved in all ToR switches communication

In contrast OC-based switches are employed for varyinglarge bandwidth communication among ToR switches Heliosarchitecture uses electrical packet-based switching to allocatethe networkrsquos erupt type of traffic flow In contrast OC-basedswitched network facilitates slow-changing traffic with highbandwidth Each pod or ToR switch consists of colorlessoptical transceivers andWDM-based optical transceivers)euncolored optical transceivers link the pod switches and thecore electrical packet-based switches Contrary to this opticalmultiplexer multiplexes the WDM optical transceivers thatform the super links attached to the OC switches

)emain disadvantage of this suggested approach is thatit consists of micro-electro-mechanical systems switches(MEMS) therefore the circuit switch wants various times(in milliseconds) in case of any reconfiguration (thereconfiguration time of Glimmerglass switch was 25ms)Hence this approach is perfect for those applications whereconnections among some nodes end at more than a fewseconds to facilitate the reconfiguration overhead [19]Among all the uplinks half of the uplinks are attached toelectrical switches and the remaining half uplinks are at-tached to the optical switches through the optical multi-plexer )ese super links are capable of transmitting w times 10Gigabit per second (w total number of wavelengthsstarting from 1 to 32) [19]

Helios architecturersquos major advantage is that its archi-tecture is built on easily accessible optical transceivers andmodules commonly used in optical telecommunication

Core switch

Aggregateswitch

ToR

Optical circuitswitch

Figure 4 c-)rough architecture

WSS

Optical switching matrix

C C C Fromto another ToRhellip

helliphellip

Coupler

ToR

MUX DEMUX

hellip hellip

hellip

Figure 3 OSA architecture

Complexity 5

networks [19] )e OC switch consists of Glimmerglass-based switches and ToR switches use the WDMSFP+ optical transceivers )ese transceivers of WDM areeither denseWDMmodules that have great cost and supportgreater ranges of wavelengths or coarse WDM that have lesscost and support fewer wavelengths and use a broad spacingof channel (20 nm channels throughout 1270 nm to 1630 nmC-band) Coarse WDM (CWDM) laser does not demandcostly temperature stabilization Glimmerglass crossbar OCswitch is used to calculate performances that are easilyavailable and support equal to 64 ports )e authors in [22]introduced a new hybrid DCN model named TIO that re-jects the previous opticalelectrical switching scheme be-tween racks and introduces a novel technique called visiblelight communication (VLC) between racks It combinedboth wireless-based VLC Jellyfish architecture and wired-based EPS Fat Tree

4 Traffic Modeling

Simulations are performed to determine the traffic behavior and pattern between the aggregate switches and the ToR switches across the links that connect them. After performing several simulations, we obtain readings of the packet arrival rate at the aggregate switches and measure ToR switch performance on specific parameters, i.e., packet size, link capacity, and packet generation rate. Ingress packets are sent from the aggregate switch to the ToR switch; after a packet's request is fulfilled by the server at a specific time, it returns to the aggregate switch through the ToR switch. Based on the DC parameters, we analyze the traffic behavior on the aggregate and ToR switches in terms of mean waiting time-up, mean waiting time-down, blocking-up, and blocking-down [20]. In a DCN, traffic burstiness has two different aspects: the traffic within a flow can be extremely bursty, and the flow arrivals themselves can also be bursty. The intensity of traffic burstiness may change depending on the point where the traffic is measured, that is, at end-point ports or at ToR switches. Because a variety of applications share the DC infrastructure, a combination of numerous flow types and sizes is produced. Flows may or may not have deadline limitations and may be short and latency-sensitive or large and throughput-oriented. For some particular applications, flow sizes are unidentified. A brief summary of DC traffic control challenges is given in [23, 24].
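The two burstiness aspects above can be illustrated with a toy on/off source. The generator and the peak-to-mean measure below are our own illustrative sketch, not part of the paper's simulator:

```python
import random

def onoff_arrivals(n_packets, on_rate, off_mean, on_mean, seed=1):
    """Packet arrival times from a simple on/off (bursty) source:
    packets arrive at rate `on_rate` during ON periods, none during OFF."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while len(times) < n_packets:
        t += rng.expovariate(1.0 / off_mean)       # silent OFF period
        on_end = t + rng.expovariate(1.0 / on_mean)
        while t < on_end and len(times) < n_packets:
            t += rng.expovariate(on_rate)           # back-to-back packets
            times.append(t)
    return times

def peak_to_mean(times, bin_width):
    """Burstiness indicator: peak bin count over mean bin count."""
    nbins = int(times[-1] / bin_width) + 1
    counts = [0] * nbins
    for t in times:
        counts[int(t / bin_width)] += 1
    return max(counts) / (len(times) / nbins)
```

Measuring `peak_to_mean` with coarser bins (e.g., at a ToR rather than an end-point port) smooths the indicator, which mirrors the paper's remark that burstiness intensity depends on where traffic is measured.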

4.1. Load. The packets arriving from the aggregate switch at the ToR switch travel through a 10 GbE link. This arrival of packets between the switches over a link determines the load on the network and its performance. In our basic tree model, the alternative path between two ToRs is through the aggregate switch; in c-Through, on the other hand, the alternative path is an optical link through the OCS manager. We calculate the load ρ by applying the formula

ρ = (E[Lp] × Na) / (C × ttr).  (1)

Here E[Lp] is the average packet size, C is the link capacity, Na is the total number of packets arrived, and ttr is the total transmission time. The load of the downward traffic is denoted ρd and is calculated as follows:

ρd = (E[Lpd] × Nad) / (C × ttr).  (2)

Here E[Lpd] is the average downward packet size, Nad is the total number of downward packets arrived, and ttr is the total transmission time.

Similarly, the load generated by the packets considered as upward traffic is calculated as the load-up ρu:

ρu = (E[Lpu] × Nau) / (C × ttr).  (3)

Here E[Lpu] is the average upward packet size and Nau is the total number of upward packets arrived.
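Equations (1)–(3) can be sketched as a single helper, since load-up and load-down differ only in which counters are plugged in. The unit choices (bits, bits per second, seconds) are our assumption, as the paper does not state them:

```python
def load(mean_packet_size_bits, n_arrived, link_capacity_bps, t_transmission_s):
    """Load rho = (E[Lp] * Na) / (C * ttr), per equations (1)-(3):
    offered traffic volume over what the link could carry in ttr."""
    return (mean_packet_size_bits * n_arrived) / (link_capacity_bps * t_transmission_s)
```

For example, 50,000 packets of 12,000 bits (1500 bytes) arriving within 0.1 s on a 10 Gbps link give a load of 0.6; plugging in `Nad`/`E[Lpd]` or `Nau`/`E[Lpu]` yields ρd and ρu, respectively.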

4.2. Mean Waiting Time. Based on the load and the average packet size E[Lp], we calculate the mean waiting time of both ingress and egress traffic.
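The paper does not give a closed-form expression for the mean waiting time, so a hedged sketch is to measure it empirically from a FIFO queue trace, as a discrete-event simulator of this kind would:

```python
def fifo_mean_wait(arrivals, service_times):
    """Empirical mean waiting time in a single FIFO queue: the time between
    a packet's arrival and the start of its service, averaged over packets."""
    t_free = 0.0        # when the server next becomes free
    total_wait = 0.0
    for arr, svc in zip(arrivals, service_times):
        start = max(arr, t_free)   # wait if the server is busy
        total_wait += start - arr
        t_free = start + svc
    return total_wait / len(arrivals)
```

Applied per direction (upward vs. downward packet traces), this yields the mean waiting time-up and mean waiting time-down reported in Section 7.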

4.3. Blocking. Blocking is defined as the number of packets dropped, Nd, on a particular link during the transmission of packets. We calculate the blocking probability Pb at a particular link by the following formula:

Pb = Nd / Na.  (4)

Here Na is the total number of packets arrived and Nd is the number of packets dropped on a particular link. The blocking on the upward traffic is calculated from the number of packets dropped on the upward traffic, Ndu, as follows:

Figure 5: Helios architecture (electrical packet switches and an optical circuit switch connecting the ToR (pod) switches via 10 Gbps WDM optical links).


Pbu = Ndu / Nau.  (5)

Here Ndu is the total number of packets dropped on upward traffic and Nau represents the total number of upward packets arrived. Similarly, the blocking on the downward traffic is calculated from the packets dropped on the downward traffic, Ndd, as follows:

Pbd = Ndd / Nad.  (6)

Here Ndd is the total number of packets dropped on downward traffic and Nad represents the total number of downward packets arrived.
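Equations (4)–(6) reduce to maintaining per-link arrival and drop counters. A minimal sketch (our own naming, mirroring the Na/Nd counters and the per-node queue limit from Section 6.3) could look like:

```python
def blocking_probability(dropped, arrived):
    """Pb = Nd / Na per equations (4)-(6); defined as 0.0 if nothing arrived."""
    return dropped / arrived if arrived else 0.0

class CountingQueue:
    """Finite FIFO queue that counts arrivals and drops, so blocking
    can be read off its counters at the end of a run."""
    def __init__(self, limit):
        self.limit, self.items = limit, []
        self.arrived, self.dropped = 0, 0

    def offer(self, pkt):
        self.arrived += 1
        if len(self.items) < self.limit:
            self.items.append(pkt)   # admitted
        else:
            self.dropped += 1        # queue full: packet is blocked
```

Keeping one such queue per direction on a link gives blocking-up (Ndu/Nau) and blocking-down (Ndd/Nad) directly.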

5. Simulation Structure

There are several open-source simulators for analyzing the performance and characteristics of DCs; some also provide extra features for cloud computing and VM migration. A few freely downloadable open-source simulators are discussed here, and Table 1 shows a comparison between them.

5.1. CloudSim. The idea of cloud computing introduces a completely new mode of computing technology, with the promise of sustainable, secure, fault-tolerant, and reliable services delivered as software and infrastructure as a service (SaaS and IaaS) [25]. To study these factors, a Java-based simulation model called CloudSim was developed at the CLOUDS Laboratory, University of Melbourne, Australia. It is a well-known simulator for analyzing large-scale cloud DC parameters. Researchers use this framework to analyze cloud infrastructure and explore application services' performance in a controlled environment. It provides basic classes for describing users, VMs, computational resources, DCs, and many other user- and management-level controls of the infrastructure. The key drawback of CloudSim is its lack of a graphical user interface (GUI) [26].

5.2. GreenCloud. GreenCloud is another open-source simulator, built as an extension of the network simulator NS2. It allows researchers and users to interact with, watch, and control the performance of DCs and clouds. Developed as part of the green cloud computing movement, it offers the option to examine the energy efficiency of cloud DCs and provides information about the energy consumption of DC equipment [26]. The framework distinguishes the energy consumed by the infrastructure from the computing energy of the DC [25]. GreenCloud is also used to build new scheduling and workload scenarios, allocate resources, and optimize network infrastructure and communication protocols; it focuses mainly on the energy consumption issues of the DCN. Its major limitation is that it works only with small DCs.

5.3. EMUSIM. This tool has an integrated emulation and simulation architecture: it is built on AEF (Automated Emulation Framework) technology for emulation and uses the CloudSim architecture for simulation, combining both techniques in a single package. It is used to design environments with near-real computing patterns and resources. EMUSIM does not need lower-level details such as the location of each VM or the total VMs per host at a given time. Its drawback is that it is not scalable, owing to hardware restrictions and difficulty reproducing real large networks.

5.4. iCanCloud. The iCanCloud simulator is mainly used to determine the relationship between the cost and the throughput of a given set of applications on a specific hardware configuration. It is built on the SIMCAN simulator [27], a tool for investigating high-performance I/O architectures, and gives information about specific DCN parameters [26].

5.5. IBKSim. For the simulation modeling of the two DC architectures, a simulator was designed to analyze the DCN parameters bandwidth, latency, and throughput. IBNSim is a network simulator developed in 2004 using an object-oriented design model; over time, various new protocols were added to it for advanced research in the networking field. IBKSim is the current edition of this simulator for research purposes and is detailed in [28]. It can simulate small queuing models and judge the performance of single and large network nodes. It is a flexible simulator, developed in C++ using an object-oriented design methodology.

6. Simulation Setup

After developing our simulator, we built two different layered DC architectures: the first is Ethernet-based, called basic tree DC architecture, and the other is a hybrid architecture called c-Through. Both architectures are designed almost the same: one core switch is connected to three aggregate switches through 10 GbE links, each aggregate switch is attached to two ToR switches with 10 GbE links, and each ToR is connected to exactly one rack. The differences are that the basic tree architecture has a 40 GbE link between the aggregate switches, which is not present in c-Through, and that in c-Through the ToRs are connected to an optical circuit switch (OCS) with a 10 Gbps link, which makes it hybrid.
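The two topologies described above can be sketched as an adjacency map. Node names and the exact placement of the 40 GbE inter-aggregate links are our illustrative assumptions, since the paper does not enumerate them:

```python
def build_topology(hybrid):
    """Adjacency map of the two simulated DCNs; link values are capacities in Gbps."""
    links = {}

    def link(a, b, gbps):
        links.setdefault(a, {})[b] = gbps
        links.setdefault(b, {})[a] = gbps

    aggs = ["agg1", "agg2", "agg3"]
    for a in aggs:
        link("core", a, 10)                     # core to each aggregate: 10 GbE
    for i, a in enumerate(aggs):
        for j in (2 * i + 1, 2 * i + 2):
            link(a, f"tor{j}", 10)              # two ToRs per aggregate: 10 GbE
            link(f"tor{j}", f"rack{j}", 10)     # exactly one rack per ToR
    if hybrid:
        for j in range(1, 7):
            link(f"tor{j}", "ocs", 10)          # c-Through: ToRs share an OCS
    else:
        link("agg1", "agg2", 40)                # basic tree: 40 GbE between aggregates
        link("agg2", "agg3", 40)
    return links
```

Building both variants from one function keeps the comparison fair: the only structural differences between the runs are the OCS links and the inter-aggregate links, exactly as stated above.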

6.1. Simulation in Basic Tree. When the simulation starts, the source generates packets, and these packets are transmitted to the core switch. The shortest path algorithm is used for routing in the core switch to minimize the network load. After receiving the packets from the source, the core node transmits packets to the intended aggregate switch based on


the routing table defined in the core switch. The flow chart in Figure 6 shows how traffic travels in the DC network simulator that we developed.

Packets are arranged in queues when they arrive at the aggregate switch. The aggregate switch takes the routing information from the core switch and then forwards the packets to the ToR switch according to the queue. The ToR switch checks the packet's internal information and forwards the packet to the rack if what the packet is looking for is found in the server; if not, the packet travels back from the ToR to the aggregate switch, which sends it to another aggregate switch to reach the intended server, as shown in Figure 7. The heaviest traffic in this type of DC architecture, between the ToRs and the aggregate switches, is what we compare against the hybrid architecture. The simulation ends when either the input simulation time is completed or the specified event has occurred the specified number of times.

6.2. Simulation in c-Through. When the simulation starts, the source generates packets, and these packets are transmitted to the core switch. The shortest path algorithm is used for routing in the core switch to minimize the network load. The flow chart in Figure 8 shows how traffic travels in the DC network simulator that we developed.

After receiving the packets from the source, the core node transmits packets to the intended aggregate switch based on the routing table defined in the core switch. Packets are arranged in queues when they arrive at the aggregate switch. The aggregate switch takes the routing information from the core switch and then forwards the packets to the ToR switch according to the queue. The ToR switch checks the packet's internal information and forwards the packet to the rack if the information the packet is looking for is found in the server; if not, an EOE (electrical-optical-electrical) conversion occurs: the electrical packet is converted into an optical one and forwarded to the optical circuit switch, then converted back from optical to electrical and transmitted to an alternative ToR, as shown in Figure 9. The simulation ends when either the input simulation time completes or the specified event occurs.
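The two fallback paths — back up through the aggregation layer in the basic tree, versus the EOE hop through the OCS in c-Through — can be sketched as a small routing decision (hop names are illustrative, not the simulator's actual identifiers):

```python
def forward(packet_dst, local_rack, hybrid):
    """Hop sequence for a packet arriving at a ToR switch (sketch of the
    two fallback paths described above)."""
    if packet_dst == local_rack:
        return ["rack"]                                    # served locally
    if hybrid:
        # c-Through: E/O conversion, optical hop via the OCS, O/E conversion
        return ["eo-convert", "ocs", "oe-convert", "remote-tor"]
    # basic tree: back up through the aggregation layer
    return ["aggregate", "other-aggregate", "remote-tor"]
```

The extra aggregate-layer hops in the tree case are precisely what inflates the load and waiting times observed at the ToR and aggregate layers in Section 7.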

6.3. Simulation Parameters. At the start of the simulation, the source generates 2000 packets, increasing by a multiple of 2 per simulation run, and sends them to the core switch. The core switch distributes the packets uniformly and transmits them to the aggregate switches through 10 GbE links. Packets travel from aggregate switch to ToR switch through a 10 GbE link, and racks are connected to ToRs through 10 GbE links. In the c-Through DC architecture, an optical 10 Gbps link is used between two ToRs through the OCS, while in the basic tree architecture the link between the two aggregate switches is 40 GbE. The mean packet size in our simulation is 0.001953125, and at each switch or node the maximum packet limit of the queue is 1000.
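The run parameters in this subsection can be collected into a single configuration sketch. Field names are our own, and units follow the paper where stated (the mean packet size is left unitless, as in the original):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SimConfig:
    """Run parameters from Section 6.3."""
    initial_packets: int = 2000      # doubled on each successive run
    edge_link_gbps: float = 10.0     # core-agg, agg-ToR, and ToR-rack links
    ocs_link_gbps: float = 10.0      # c-Through optical ToR-ToR link
    inter_agg_gbps: float = 40.0     # basic tree aggregate-aggregate link
    mean_packet_size: float = 0.001953125  # as given in the paper
    queue_limit: int = 1000          # max packets queued per node

def packets_for_run(cfg, run_index):
    """Packet count for the k-th run: 2000, 4000, 8000, ..."""
    return cfg.initial_packets * (2 ** run_index)
```

Freezing the dataclass makes each run's configuration immutable, so the only thing varying between the points on a results curve is the doubled packet count.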

7. Results and Discussion

In the graphs that follow, mean waiting time-down refers to traffic moving downward, from the core to the aggregate switches and from the ToRs to the racks or the OCS, whereas load-down represents the load of downward traffic on a link. Each graph shows the behavior of downward-moving traffic against the load.

The graph shown in Figure 10 represents the behavior of downward traffic at the aggregate layer. In the graph, the dotted lines of the tree-based architecture show that the mean waiting time-down and the load-down increase at the aggregate layer compared with the c-Through

Figure 6: Flow chart of the basic tree DCN simulation (source and packet generation, core switch with routing table, aggregate switch, ToR switch, rack, sink, and drop-sink).

Table 1: Comparison between open-source cloud simulators.

Parameter            | CloudSim | GreenCloud      | EMUSIM          | iCanCloud        | IBKSim
GUI support          | None     | Limited         | Limited         | Full GUI support | Limited
Language             | Java     | C++/OTcl        | Java            | OMNeT++ MPL C++  | C++/XML
Platform             | GridSim  | NS2             | AEF/CloudSim    | SIMCAN           | N/A
Power efficiency     | Limited  | Yes             | Yes             | Limited          | Limited
Simulation time      | Seconds  | Tens of minutes | Tens of minutes | Tens of minutes  | Depends on network size
Multi-DC comparison  | No       | No              | No              | No               | Available
Parallel experiments | No       | No              | No              | Available        | Available


hybrid architecture, because in tree architecture the packets arrive at the aggregate switch from both the core and the other aggregate switches. Packets are arranged in queues; the more packets arrive, the longer the earlier packets have to wait in the queue, which increases the mean waiting time of the downward traffic. Mean waiting time-down increases more rapidly in tree architecture than in hybrid architecture; the hybrid architecture increases more gradually and shows more consistent behavior.

On the other hand, the graph in Figure 11 shows the behavior of downward traffic at the ToR layer. The dotted lines of the tree architecture show that the mean waiting time-down and the load-down increase at a much higher rate at the ToR layer compared with the c-Through hybrid architecture, and behave inconsistently. This is because the transmission of packets from the aggregate switch, on top of serving the packets already in the ToR switch's queue, increases the load on the ToR switch. Moreover, in tree architecture, packet communication between ToRs passes through aggregate switches, which also increases the load on the ToR switch, while c-Through has the OCS as its alternative route for ToR-to-ToR communication. That is also a factor in the inconsistent behavior of the tree architecture.

The graphs in Figures 12 and 13 show the behavior of upward traffic at the aggregate and ToR layers. In tree architecture, the mean waiting time-up and the load-up in

Figure 7: Simulated basic tree DC architecture (core switch, aggregate switches, ToR 1–6, and Rack 1–6).

Figure 8: Flow chart of the hybrid DCN simulation (as in Figure 6, with an optical manager alongside the routing table).


both the aggregate and ToR layers increase in an irregular manner: at the start they remain constant in both switches and then gradually increase. This gradual increase

at the aggregate layer occurs because the traffic from core to aggregate is much less than from aggregate to ToR, while in the hybrid case the packet service rate is greater than in tree architecture and the core-to-aggregate traffic is greater than in tree architecture. In Figure 13, the graph increases in a discrepant manner and shows high variance. In tree architecture, packet communication between ToRs passes through the aggregate switches, so ToR-to-ToR communication through the aggregate switches increases the mean waiting time-up and the load-up, whereas the c-Through architecture uses the OCS as its alternative route for ToR-to-ToR communication, so the hybrid's mean waiting time-up appears as a nearly straight line on the graph. That is also a factor in the inconsistent behavior of the tree architecture.

Figures 14 and 15 show the blocking-down against the load-down. At the ToR layer, the hybrid increases in a more consistent manner than the tree architecture, while at the aggregate layer both architectures show the same behavior and pattern.

Figure 9: Simulated hybrid DC architecture (core switch, aggregate switches, ToR 1–6, and an optical circuit switch connecting the ToRs).

Figure 10: Mean waiting time (down) versus load (down) at the aggregate layer, for hybrid and tree architectures.

Figure 11: Mean waiting time (down) versus load (down) at the ToR layer, for hybrid and tree architectures.


Figure 12: Mean waiting time (up) versus load (up) at the aggregate layer, for hybrid and tree architectures.

Figure 13: Mean waiting time (up) versus load (up) at the ToR layer, for hybrid and tree architectures.

Figure 14: Blocking (down) versus load (down) at the aggregate layer, for hybrid and tree architectures.


8. Conclusion

For this intuitive comparison, we first developed a tool that specifically measures DC architecture parameters. We explored each switch component in the layered architectures and its characteristics. Furthermore, we analyzed the load, blocking, and mean waiting time behavior of the aggregate and ToR switches in both architectures through their traffic behavior. In the future, we can model other network parameters with this tool and compare the traffic and load parameters of the c-Through architecture with other fixed and flexible DCN topologies. The development of this research tool will help future comparisons with other DC architectures.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors are grateful to the Deanship of Scientific Research, King Saud University, for funding this research work. The authors are also grateful to Taif University Researchers Supporting Project number (TURSP-2020215), Taif University, Taif, Saudi Arabia. This work was also supported by the Faculty of Computer Science and Information Technology, University of Malaya, under Postgraduate Research Grant (PG035-2016A).

References

[1] S. Basu, C. McArdle, and L. P. Barry, "Scalable OCS-based intra/inter data center network with optical ToR switches," in Proceedings of the 2016 18th International Conference on Transparent Optical Networks (ICTON), pp. 1–4, Warsaw, Poland, July 2016.

[2] B. Wang, Z. Qi, R. Ma, H. Guan, and A. V. Vasilakos, "A survey on data center networking for cloud computing," Computer Networks, vol. 91, pp. 528–547, 2015.

[3] L. Paraschis, "Advancements in data-center networking and the importance of optical interconnections," in Proceedings of the 39th European Conference and Exhibition on Optical Communication (ECOC 2013), pp. 1–3, London, UK, September 2013.

[4] N. Farrington, G. Porter, S. Radhakrishnan et al., "Helios: a hybrid electrical/optical switch architecture for modular data centers," ACM SIGCOMM Computer Communication Review, vol. 40, pp. 339–350, 2010.

[5] S. Kumar, G. Papen, K. Schmidtke, and C. Xie, "Intra-data center interconnects, networking, and architectures," in Optical Fiber Telecommunications VII, pp. 627–672, Elsevier, Amsterdam, Netherlands, 2020.

[6] N. Panahi, D. Careglio, and J. S. Pareta, "A hybrid packet/circuit optical transport architecture for DCN," in Proceedings of the 2019 21st International Conference on Transparent Optical Networks (ICTON), pp. 1–4, Angers, France, July 2019.

[7] A. Headquarters, "Cisco data center infrastructure 2.5 design guide," Cisco Validated Design, vol. I, 2007.

[8] J. Zhang, F. R. Yu, S. Wang, T. Huang, Z. Liu, and Y. Liu, "Load balancing in data center networks: a survey," IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 2324–2352, 2018.

[9] M. Shafiee and J. Ghaderi, "A simple congestion-aware algorithm for load balancing in datacenter networks," IEEE/ACM Transactions on Networking, vol. 25, no. 6, pp. 3670–3682, 2017.

[10] D. Li, Y. Li, J. Wu, S. Su, and J. Yu, "ESM: efficient and scalable data center multicast routing," IEEE/ACM Transactions on Networking, vol. 20, pp. 944–955, 2011.

[11] C. F. Lam, "Optical network technologies for datacenter networks," in Proceedings of the National Fiber Optic Engineers Conference, Los Angeles, CA, USA, March 2010.

[12] C. Kachris, K. Kanonakis, and I. Tomkos, "Optical interconnection networks in data centers: recent trends and future challenges," IEEE Communications Magazine, vol. 51, no. 9, pp. 39–45, 2013.

[13] Cisco, Classic Network Design Using Cisco Nexus 9000 Series Switches, Cisco, San Francisco, CA, USA, 2015.

[14] K. Bilal, S. U. R. Malik, O. Khalid et al., "A taxonomy and survey on green data center networks," Future Generation Computer Systems, vol. 36, pp. 189–208, 2014.

[15] D. Belabed, S. Secci, G. Pujolle, and D. Medhi, "Striking a balance between traffic engineering and energy efficiency in virtual machine placement," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 202–216, 2015.

[16] H. Goudarzi and M. Pedram, "Energy-efficient virtual machine replication and placement in a cloud computing system," in Proceedings of the 2012 IEEE Fifth International Conference on Cloud Computing, pp. 750–757, Honolulu, HI, USA, June 2012.

[17] K. Chen, A. Singla, A. Singh et al., "OSA: an optical switching architecture for data center networks with unprecedented flexibility," IEEE/ACM Transactions on Networking, vol. 22, pp. 498–511, 2013.

[18] G. Wang, D. G. Andersen, M. Kaminsky et al., "c-Through: part-time optics in data centers," in Proceedings of the ACM SIGCOMM 2010 Conference, pp. 327–338, New Delhi, India, September 2010.

Figure 15: Blocking (down) versus load (down) at the ToR layer, for hybrid and tree architectures.


[19] C. Kachris and I. Tomkos, "A survey on optical interconnects for data centers," IEEE Communications Surveys & Tutorials, vol. 14, no. 4, pp. 1021–1036, 2012.

[20] D. Wu, Designing Hybrid Data Center Networks for High Performance and Fault Tolerance, Rice University, Houston, TX, USA, 2020.

[21] Y. Liu, J. K. Muppala, M. Veeraraghavan, D. Lin, and M. Hamdi, Data Center Networks: Topologies, Architectures and Fault-Tolerance Characteristics, Springer Science & Business Media, Berlin, Germany, 2013.

[22] Y. Qin, D. Guo, G. Tang, and B. Ren, "TIO: a VLC-enabled hybrid data center network architecture," Tsinghua Science and Technology, vol. 24, no. 4, pp. 484–496, 2019.

[23] M. Noormohammadpour and C. S. Raghavendra, "Datacenter traffic control: understanding techniques and tradeoffs," IEEE Communications Surveys & Tutorials, vol. 20, pp. 1492–1525, 2017.

[24] A. Saeed, N. Dukkipati, V. Valancius, V. The Lam, C. Contavalli, and A. Vahdat, "Carousel: scalable traffic shaping at end hosts," in Proceedings of the Conference of the ACM Special Interest Group on Data Communication, pp. 404–417, Los Angeles, CA, USA, August 2017.

[25] U. Sinha and M. Shekhar, "Comparison of various cloud simulation tools available in cloud computing," International Journal of Advanced Research in Computer and Communication Engineering, vol. 4, pp. 171–176, 2015.

[26] M. S. Mandeep and S. Bhatia, "A critical review & analysis of cloud computing simulators," IJLTET, pp. 29–36, 2016.

[27] R. Malhotra and P. Jain, "Study and comparison of various cloud simulators available in the cloud computing," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, pp. 347–350, 2013.

[28] L. Wallentin, M. Happenhofer, C. Egger, and J. Fabini, "XML meets simulation: concepts and architecture of the IBKSim network simulator," SNE, vol. 41, 2010.

Complexity 13

Page 6: Designing of a Simulation Tool for the Performance Analysis of … · 2021. 4. 10. · networks made from electrical packet-based networks (Ethernet)andopticalcircuit-basednetworks[18]

networks [19] )e OC switch consists of Glimmerglass-based switches and ToR switches use the WDMSFP+ optical transceivers )ese transceivers of WDM areeither denseWDMmodules that have great cost and supportgreater ranges of wavelengths or coarse WDM that have lesscost and support fewer wavelengths and use a broad spacingof channel (20 nm channels throughout 1270 nm to 1630 nmC-band) Coarse WDM (CWDM) laser does not demandcostly temperature stabilization Glimmerglass crossbar OCswitch is used to calculate performances that are easilyavailable and support equal to 64 ports )e authors in [22]introduced a new hybrid DCN model named TIO that re-jects the previous opticalelectrical switching scheme be-tween racks and introduces a novel technique called visiblelight communication (VLC) between racks It combinedboth wireless-based VLC Jellyfish architecture and wired-based EPS Fat Tree

4 Traffic Modeling

Stimulations are performed to determine the traffic behaviorand pattern between aggregate switches and ToR switchesand the network connectivity connected through links Afterperforming several simulations we get the values (readings)of packet arrival rate in aggregate switches And measure theToR switches performance on specific parameters iepacket size link capacity and packet generate rate )eingress packets are sent from the aggregate switch to ToRswitch After fulfilling its request from the server at a specifictime they came back to aggregate switch through ToRswitch On behalf of the DC parameters we analyze thetraffic behavior on the aggregate switches and the ToRswitches as follows mean waiting time-up mean waiting fortime-down blocking-up and blocking-down [20] In DCNburstiness of traffic has two different aspects either thetraffic burstiness per flow could be extremely bursty or the

flow arrival itself could be bursty too )e intensity ofburstiness of traffic may change corresponding to the pointwhere the traffic is measured that is at end-point ports or atToR switches Because a variety of applications share the DCinfrastructure a combination of numerous flow types andsizes are produced Flows may or may not have deadlinelimitations and may be short latency sensitive or largethroughput oriented Like some particular applications flowsizes are unidentified )e brief summary of DC trafficcontrol challenges is in [23 24]

41Load )epackets that arrived from the aggregate switchto ToR switch are through a link of 10GbE )is arrival ofpackets between the switches through a link decides the loadon the networkrsquos network and performance In our basic treemodel the alternative path between ToR and ToR is throughthe aggregate switch On the other hand the alternative pathin c-)rough in through OCS manager is an optical link Sowe calculate the load ρ by applying the formula

ρ E Lp1113960 1113961

Ctimes

Na

ttr

(1)

Here E[Lp] is the average packet size C is the link ca-pacity Na is the total number of packets arrived and ttr is thetotal transmission time Contrary to this load on the down-ward traffic is denoted as ρd and can be calculated as follows

ρd E Lpd1113960 1113961

Ctimes

Nad

ttr

(2)

Here E[Lpd] is the average packet size-down Nad is thetotal number of downward packets arrived and ttr is thetotal transmission time

While on the other hand the packets that are consideredas downward traffic are calculated as load-up ρu

ρd E Lpu1113960 1113961

Ctimes

Nau

ttr

(3)

Here E[Lpu] is the upward average packet and Nau hasthe total number of upward packets arrived

42 MeanWaiting Time On behalf of the load and averagepacket size E[Lp] we calculate the mean waiting time ofboth types of ingress and egress traffic

43 Blocking Blocking is defined as the number of packetsdropped Nd on a particular link during the transmission ofpackets So we calculate the blocking probability P b at aparticular link by the following formula

P b Nd

Na

(4)

Here Na is the total number of packets arrived and Nd isthe number of packets dropped on a particular link )enumber of packets dropped on the upward traffic Ndu iscalculated as follows

Optical circuitswitch

Electrical packetswitches

ToR switches (pod switches)

10GbpsWDM optical link

Figure 5 Helios architecture

6 Complexity

Pbu PDu

Nau

(5)

Here Ndu is the total number of packets dropped onupward traffic and Nau represents the rate of arrival totalnumber of upward packets While on the other hand thepackets dropped on the downward traffic Ndd is calculatedas follows

Pbd Ndd

Nad

(6)

Here Ndd is the total number of packets dropped ondownward traffic andNad represents the rate of arrival of thetotal number of downward packets

5 Simulation Structure

)ere are several open-source simulators to analyze theperformance and characteristics of the DC and also provideextra features of cloud computing and VM migration Someopen-source simulators are discussed here that are freelydownloadable Table 1 shows the comparison between open-source cloud simulators

51 CloudSim )e idea of cloud computing introduces acompletely new mode of computing technology withpromising assurance to provide sustainable secure fault-tolerant and reliable services which are introduced assoftware and infrastructure as a service (SaaS and IaaS) [25]To ensure these factors a Java-based simulation model isdeveloped at CLOUDS Laboratory University of Mel-bourne Australia called CloudSim It is a well-knownsimulator for analyzing large-scale cloud DC parametersResearchers used this simulation framework to analyze thecloud infrastructure and explore application servicesrsquo per-formance in a controlled environment It provides basicclasses for describing users VMs computational resourcesDCs and many other user and management level controls ofinfrastructure )e key drawback of the CloudSim simulatoris the lack of graphical user interface (GUI) [26]

52 GreenCloud GreenCloud is another open-source sim-ulator and it is an extension of network simulator NS2 )issimulator is based on CloudSim)is open-source simulatorallows the researchers and users to interact watch andcontrol the DCs and cloudsrsquo performance )e green cloudcomputing revolution developed this simulation tool andgave an option to examine the energy efficiency of the cloudDCs It also provides information about the DC equipmentabout their energy consumption [26] )is simulationframework distinguishes the energy consumption partsrelated to the infrastructure and computing energy of DC[25] GreenCloud simulator is also used to build up newscheduling workload scenarios allocating resources andoptimizing network infrastructure and communicationprotocols It majorly focuses on energy consumption issuesof the DCN )e major limitation of GreenCloud is that itworks only with small DCs

53 EMUSIM )is tool has an integrated architecture ofemulation and simulation)is simulator framework is builton AEF (automated emulation framework) technology foremulation and used CloudSim architecture for simulation Itcontains both emulation and simulation techniques in asingle package )is tool is used to design an environmentnear-real computing patterns and resources EMUSIMsimulator does not need lower-level details like the locationof VM and total VM per host in a given time )e drawbackof the EMUSIM is that it is not scalable due to manyhardware restrictions or issues in producing real largenetworks

54 iCanCloud iCanCloud simulator is mainly used todetermine the connection between the cost and thethroughput of a given set of applications on a specifichardware configuration )is simulator is built on SIMCANsimulator [27] a simulation tool used to investigate the high-performance IO architectures )is simulator gives infor-mation about the specific parameters of DCN [26]

55 IBKSim For the simulation modeling of two differentDC architectures a simulator is designed to analyze theDCN parameters bandwidth latency and throughputIBNSim is a network simulator developed in 2004 by usingan object-oriented design model With time various newprotocols were added to this simulator for advanced researchpurposes in the network field IBKSim is the current sim-ulator edition for research purposes and detailed in [28])is simulator can simulate small queuing models and judgethe performance of the single and big network nodes It is aflexible simulator and uses the object-oriented designmethodology in C++ development

6 Simulation Setup

After developing our simulator two different layered-basedDC architectures were built first is Ethernet-based calledbasic tree DC architecture and the other is hybrid archi-tecture called c-)rough architecture Both architecturesrsquodesign is almost the same as one core switch connected withthree aggregate switches through a 10GbE link )en eachaggregate switch is attached with two ToR switches with10GbE links and each ToR is connected with exactly onerack)e difference in both architectures is that in basic treearchitecture the link between the aggregate switches is40GbE which is not present in c-)rough architecture Andthe other difference is that in c-through architecture ToRsare connected with an optical circuit switch (OCS) with a10Gbps link that makes it hybrid

Complexity 7

6.1. Simulation in Basic Tree. When the simulation starts, the source generates packets, and these packets are transmitted to the core switch. The shortest-path algorithm is used for routing in the core switch to minimize the network load. After receiving the packets from the source, the core node transmits them to the intended aggregate switch based on the routing table defined in the core switch. The flow chart in Figure 6 shows how traffic travels in the DC network simulator that we developed.
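Because every hop in the simulated topologies has equal cost, the core switch's shortest-path routing can be illustrated with a plain breadth-first search over the link table. This is our own reconstruction, not the tool's implementation:

```python
from collections import deque

def shortest_path(links, src, dst):
    """Hop-count shortest path over an undirected link set via BFS.

    `links` is any iterable of (node_a, node_b) pairs, e.g. the keys of
    the topology dict sketched earlier. Returns the node sequence from
    src to dst, or None if dst is unreachable.
    """
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    prev, seen = {}, {src}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            # Walk the predecessor chain back to the source.
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    return None
```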

Packets are arranged in queues when they arrive at the aggregate switch. The aggregate switch takes the routing information from the core switch and then forwards the packets to the ToR switch according to the queue. The ToR switch checks the packet's internal information and forwards the packet to its rack if the server the packet is looking for is found there; if not, the packet travels back from the ToR to the aggregate switch, which sends it toward another aggregate switch to reach the intended server, as shown in Figure 7. Consequently, the traffic in this architecture is heaviest between the ToRs and the aggregate switches compared with the hybrid architecture. The simulation ends when either the input simulation time is completed or the specified number of events has occurred.
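The bounce-back behavior described above can be condensed into a few lines. The helper names (`rack_of`, `agg_of`) are hypothetical, and the sketch only reproduces the hop sequence, not the queueing:

```python
def forward_basic_tree(dst_server, tor, rack_of, agg_of):
    """Hop sequence for a packet that has reached `tor` in the basic tree.

    rack_of maps a ToR to the set of servers in its rack; agg_of maps a
    ToR to its aggregate switch. (Illustrative names, not the tool's API.)
    """
    if dst_server in rack_of[tor]:
        return [tor, dst_server]            # found in the local rack
    # Miss: bounce back up — this ToR -> its aggregate -> (peer
    # aggregate ->) the destination ToR, as described in the text.
    dst_tor = next(t for t, srv in rack_of.items() if dst_server in srv)
    path = [tor, agg_of[tor]]
    if agg_of[dst_tor] != agg_of[tor]:
        path.append(agg_of[dst_tor])        # cross-aggregate hop
    return path + [dst_tor, dst_server]
```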

6.2. Simulation in c-Through. When the simulation starts, the source generates packets, and these packets are transmitted to the core switch. The shortest-path algorithm is used for routing in the core switch to minimize the network load. The flow chart in Figure 8 shows how traffic travels in the DC network simulator that we developed.

After receiving the packets from the source, the core node transmits them to the intended aggregate switch based on the routing table defined in the core switch. Packets are arranged in queues when they arrive at the aggregate switch. The aggregate switch takes the routing information from the core switch and then forwards the packets to the ToR switch according to the queue. The ToR switch checks the packet's internal information and forwards the packet to its rack if the information the packet is looking for is found in a server there. If not, an EOE (electrical-optical-electrical) conversion occurs: the electrical packet is converted into an optical one and forwarded to the optical circuit switch, after which the optical packet is converted back to electrical and transmitted to the alternative ToR, as shown in Figure 9. The simulation ends when either the input simulation time completes or the specified event occurs.
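The hybrid path differs from the basic tree only in the miss branch, where the packet crosses the OCS after EOE conversion. Again a hedged sketch with hypothetical helper names:

```python
def forward_cthrough(dst_server, tor, rack_of):
    """Hop sequence for a packet that has reached `tor` in c-Through.

    rack_of maps a ToR to the set of servers in its rack.
    (Illustrative sketch; conversions are noted in comments only.)
    """
    if dst_server in rack_of[tor]:
        return [tor, dst_server]            # found in the local rack
    dst_tor = next(t for t, srv in rack_of.items() if dst_server in srv)
    # Miss: electrical -> optical conversion at the ToR, transit via the
    # optical circuit switch, optical -> electrical at the far side.
    return [tor, "ocs", dst_tor, dst_server]
```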

6.3. Simulation Parameters. At the start of the simulation, the source generates 2000 packets, a count that doubles with each simulation run, and sends them to the core switch. The core switch distributes the packets uniformly and transmits them to the aggregate switches through 10 GbE links. Packets travel from the aggregate switches to the ToR switches through 10 GbE links, and the racks are connected to the ToRs through 10 GbE links as well. In the c-Through DC architecture, a 10 Gbps optical link is used between two ToRs through the OCS, whereas in the basic tree architecture the link between two aggregate switches is 40 GbE. The mean packet size in our simulation is 0.001953125, and at each switch or node the maximum packet limit of the queue is 1000.
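For reference, the stated constants can be collected in one place. The identifier names are ours; the text gives the mean packet size without units, and 0.001953125 is exactly 1/512:

```python
# Simulation constants as stated in the text (names are ours).
MEAN_PACKET_SIZE = 0.001953125   # = 1/512 in the simulator's units
QUEUE_LIMIT = 1000               # max queued packets per switch/node
LINK_EDGE_GBPS = 10              # core-agg, agg-ToR, ToR-rack, ToR-OCS
LINK_AGG_AGG_GBPS = 40           # basic tree only

def packets_per_run(run_index: int) -> int:
    """2000 packets in the first run, doubling each subsequent run."""
    return 2000 * (2 ** run_index)
```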

7. Results and Discussion

In the following graphs, mean waiting time-down refers to traffic moving downward, from the core to the aggregate switches and from the ToRs to the racks or the OCS, whereas load-down represents the load that downward traffic places on a link. Each graph plots the behavior of the downward-moving traffic against the load.
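A point on such a curve can be recovered from per-packet enqueue/dequeue timestamps in the simulator's logs. The sketch below is our own reconstruction; in particular, the load normalization shown is one plausible definition, not necessarily the one the tool uses:

```python
def mean_waiting_time(events):
    """Mean queueing delay from (enqueue_time, dequeue_time) pairs."""
    waits = [dq - eq for eq, dq in events]
    return sum(waits) / len(waits)

def offered_load(arrival_rate, mean_packet_size, capacity):
    """One plausible normalized link load: rate x service demand / capacity."""
    return arrival_rate * mean_packet_size / capacity
```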

The graph in Figure 10 represents the behavior of downward traffic on the aggregate layer. The dotted lines for the tree-based architecture show that the mean waiting time-down and the load-down increase on the aggregate layer compared with the c-Through

Figure 6: Flow chart of basic tree DCN simulation (source → core switch → aggregate switch → ToR switch → rack, with routing table, sink, and dropsink).

Table 1: Comparison between open-source cloud simulators.

Parameter | CloudSim | GreenCloud | EMUSIM | iCanCloud | IBKSim
GUI support | NA | Limited support | Limited support | Full GUI support | Limited support
Language | Java | C++/OTcl | Java | OMNeT++ MPL, C++ | C++, XML
Platform | GridSim | NS2 | AEF, CloudSim | SIMCAN | NA
Power efficiency | Limited | Yes | Yes | Limited | Limited
Simulation time | Seconds | Tens of minutes | Tens of minutes | Tens of minutes | Depends upon network size
Multi-DC comparisons | No | No | No | No | Available
Support for parallel experiments | No | No | No | Available | Available


hybrid architecture. In the tree architecture, packets arrive at the aggregate switch from the core and from other aggregate switches and are arranged in queues; the more packets arrive, the longer the earlier packets have to wait in the queue, which increases the mean waiting time of the downward traffic. The mean waiting time-down therefore grows more rapidly in the tree architecture than in the hybrid architecture, whose curve rises more gradually and shows more consistent behavior.

On the other hand, the graph in Figure 11 shows the behavior of downward traffic in the ToR layer. The dotted lines for the tree architecture show that the mean waiting time-down and the load-down increase at a much higher rate on the ToR layer than in the c-Through hybrid architecture, and behave inconsistently. This is because the transmission of packets from the aggregate switch, on top of the packets already in the ToR switch's queue, increases the load on the ToR switch. Moreover, in the tree architecture, packet communication between ToRs passes through the aggregate switches, which also increases the load on the ToR switches, whereas c-Through has the OCS as an alternative route for ToR-to-ToR communication. That, too, contributes to the inconsistent behavior of the tree architecture.

The graphs in Figures 12 and 13 show the behavior of upward traffic on the aggregate and ToR layers. In the tree architecture, the mean waiting time-up and the load-up in

Figure 7: Simulated basic tree DC architecture (one core switch, aggregate switches, ToRs 1-6, racks 1-6).

Figure 8: Flow chart of hybrid DCN simulation (source → core switch → aggregate switch → ToR switch → rack, with routing table, optical manager, sink, and dropsink).


both the aggregate and ToR layers increase in an irregular manner: at the start they remain constant in both switches and then gradually increase. This gradual increase in the aggregate layer occurs because the traffic between the core and the aggregate switches is much less than that between the aggregate and ToR switches, whereas in the hybrid case the packet service rate is higher and the core-to-aggregate traffic is greater than in the tree architecture. In Figure 13, the tree curve rises in a discrepant manner and shows high variance. In the tree architecture, packet communication between ToRs passes through the aggregate switches, which increases the mean waiting time-up and the load-up, whereas the c-Through architecture uses the OCS as an alternative route for ToR-to-ToR communication, so the mean waiting time-up of the hybrid architecture appears as a nearly straight line on the graph. That, too, contributes to the inconsistent behavior of the tree architecture.

Figures 14 and 15 show the blocking-down against the load-down. In the ToR layer, the hybrid architecture increases in a more consistent manner than the tree architecture, while in the aggregate layer both architectures show the same behavior and pattern.

Figure 9: Simulated hybrid DC architecture (core switch, aggregate switches, ToRs 1-6, optical circuit switch).

Figure 10: Mean waiting time-down of the aggregate layer (hybrid vs. tree).

Figure 11: Mean waiting time-down of the ToR layer (hybrid vs. tree).


Figure 12: Mean waiting time-up of the aggregate layer (hybrid vs. tree).

Figure 13: Mean waiting time-up of the ToR layer (hybrid vs. tree).

Figure 14: Blocking-down of the aggregate layer (hybrid vs. tree).


8. Conclusion

For this comparison, we first developed a tool that specifically measures DC architecture parameters. We explored each switch component in the layered architectures and its characteristics. Furthermore, we analyzed the load, blocking, and mean waiting time behavior of the aggregate and ToR switches in both architectures through their traffic behavior. In the future, this tool can be used to model other network parameters and to compare the traffic and load parameters of the c-Through architecture with other fixed and flexible DCN topologies. The development of this research tool will support further comparisons with other DC architectures.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors are grateful to the Deanship of Scientific Research, King Saud University, for funding this research work. The authors are also grateful to Taif University Researchers Supporting Project number (TURSP-2020/215), Taif University, Taif, Saudi Arabia. This work was also supported by the Faculty of Computer Science and Information Technology, University of Malaya, under Postgraduate Research Grant (PG035-2016A).

References

[1] S. Basu, C. McArdle, and L. P. Barry, "Scalable OCS-based intra/inter data center network with optical ToR switches," in Proceedings of the 2016 18th International Conference on Transparent Optical Networks (ICTON), pp. 1-4, Warsaw, Poland, July 2016.

[2] B. Wang, Z. Qi, R. Ma, H. Guan, and A. V. Vasilakos, "A survey on data center networking for cloud computing," Computer Networks, vol. 91, pp. 528-547, 2015.

[3] L. Paraschis, "Advancements in data-center networking, and the importance of optical interconnections," in Proceedings of the 39th European Conference and Exhibition on Optical Communication (ECOC 2013), pp. 1-3, London, UK, September 2013.

[4] N. Farrington, G. Porter, S. Radhakrishnan et al., "Helios: a hybrid electrical/optical switch architecture for modular data centers," ACM SIGCOMM Computer Communication Review, vol. 40, pp. 339-350, 2010.

[5] S. Kumar, G. Papen, K. Schmidtke, and C. Xie, "Intra-data center interconnects, networking, and architectures," in Optical Fiber Telecommunications VII, pp. 627-672, Elsevier, Amsterdam, Netherlands, 2020.

[6] N. Panahi, D. Careglio, and J. S. Pareta, "A hybrid packet/circuit optical transport architecture for DCN," in Proceedings of the 2019 21st International Conference on Transparent Optical Networks (ICTON), pp. 1-4, Angers, France, July 2019.

[7] A. Headquarters, "Cisco data center infrastructure 2.5 design guide," Cisco Validated Design, vol. I, 2007.

[8] J. Zhang, F. R. Yu, S. Wang, T. Huang, Z. Liu, and Y. Liu, "Load balancing in data center networks: a survey," IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 2324-2352, 2018.

[9] M. Shafiee and J. Ghaderi, "A simple congestion-aware algorithm for load balancing in datacenter networks," IEEE/ACM Transactions on Networking, vol. 25, no. 6, pp. 3670-3682, 2017.

[10] D. Li, Y. Li, J. Wu, S. Su, and J. Yu, "ESM: efficient and scalable data center multicast routing," IEEE/ACM Transactions on Networking, vol. 20, pp. 944-955, 2011.

[11] C. F. Lam, "Optical network technologies for datacenter networks," in Proceedings of the National Fiber Optic Engineers Conference, Los Angeles, CA, USA, March 2010.

[12] C. Kachris, K. Kanonakis, and I. Tomkos, "Optical interconnection networks in data centers: recent trends and future challenges," IEEE Communications Magazine, vol. 51, no. 9, pp. 39-45, 2013.

[13] Cisco, Classic Network Design Using Cisco Nexus 9000 Series Switches, Cisco, San Francisco, CA, USA, 2015.

[14] K. Bilal, S. U. R. Malik, O. Khalid et al., "A taxonomy and survey on green data center networks," Future Generation Computer Systems, vol. 36, pp. 189-208, 2014.

[15] D. Belabed, S. Secci, G. Pujolle, and D. Medhi, "Striking a balance between traffic engineering and energy efficiency in virtual machine placement," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 202-216, 2015.

[16] H. Goudarzi and M. Pedram, "Energy-efficient virtual machine replication and placement in a cloud computing system," in Proceedings of the 2012 IEEE Fifth International Conference on Cloud Computing, pp. 750-757, Honolulu, HI, USA, June 2012.

[17] K. Chen, A. Singla, A. Singh et al., "OSA: an optical switching architecture for data center networks with unprecedented flexibility," IEEE/ACM Transactions on Networking, vol. 22, pp. 498-511, 2013.

[18] G. Wang, D. G. Andersen, M. Kaminsky et al., "c-Through: part-time optics in data centers," in Proceedings of the ACM SIGCOMM 2010 Conference, pp. 327-338, New Delhi, India, September 2010.

Figure 15: Blocking-down of the ToR layer (hybrid vs. tree).


[19] C. Kachris and I. Tomkos, "A survey on optical interconnects for data centers," IEEE Communications Surveys & Tutorials, vol. 14, no. 4, pp. 1021-1036, 2012.

[20] D. Wu, Designing Hybrid Data Center Networks for High Performance and Fault Tolerance, Rice University, Houston, TX, USA, 2020.

[21] Y. Liu, J. K. Muppala, M. Veeraraghavan, D. Lin, and M. Hamdi, Data Center Networks: Topologies, Architectures and Fault-Tolerance Characteristics, Springer Science & Business Media, Berlin, Germany, 2013.

[22] Y. Qin, D. Guo, G. Tang, and B. Ren, "TIO: a VLC-enabled hybrid data center network architecture," Tsinghua Science and Technology, vol. 24, no. 4, pp. 484-496, 2019.

[23] M. Noormohammadpour and C. S. Raghavendra, "Datacenter traffic control: understanding techniques and tradeoffs," IEEE Communications Surveys & Tutorials, vol. 20, pp. 1492-1525, 2017.

[24] A. Saeed, N. Dukkipati, V. Valancius, V. The Lam, C. Contavalli, and A. Vahdat, "Carousel: scalable traffic shaping at end hosts," in Proceedings of the Conference of the ACM Special Interest Group on Data Communication, pp. 404-417, Los Angeles, CA, USA, August 2017.

[25] U. Sinha and M. Shekhar, "Comparison of various cloud simulation tools available in cloud computing," International Journal of Advanced Research in Computer and Communication Engineering, vol. 4, pp. 171-176, 2015.

[26] M. S. Mandeep and S. Bhatia, "A critical review & analysis of cloud computing simulators," IJLTET, pp. 29-36, 2016.

[27] R. Malhotra and P. Jain, "Study and comparison of various cloud simulators available in the cloud computing," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, pp. 347-350, 2013.

[28] L. Wallentin, M. Happenhofer, C. Egger, and J. Fabini, "XML meets simulation: concepts and architecture of the IBKSim network simulator," SNE, vol. 41, 2010.



P_bu = N_du / N_au.  (5)

Here, N_du is the total number of packets dropped in the upward traffic, and N_au represents the total number of upward packets that arrive. The blocking probability of the downward traffic is calculated analogously:

P_bd = N_dd / N_ad.  (6)

Here, N_dd is the total number of packets dropped in the downward traffic, and N_ad represents the total number of downward packets that arrive.
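Equations (5) and (6) share one form, so a single helper covers both directions (a minimal sketch; the zero-arrival guard is our own convention):

```python
def blocking_probability(dropped: int, arrived: int) -> float:
    """P_b = N_d / N_a, as in equations (5) and (6).

    Applies to either direction: pass the upward counts (N_du, N_au)
    or the downward counts (N_dd, N_ad). Returns 0 if nothing arrived.
    """
    return dropped / arrived if arrived else 0.0
```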

5. Simulation Structure

There are several open-source simulators that analyze the performance and characteristics of DCs and also provide extra features for cloud computing and VM migration. Some freely downloadable open-source simulators are discussed here; Table 1 shows a comparison between them.

5.1. CloudSim. The idea of cloud computing introduces a completely new mode of computing technology, with promising assurances of sustainable, secure, fault-tolerant, and reliable services offered as software and infrastructure as a service (SaaS and IaaS) [25]. To study these factors, a Java-based simulation model called CloudSim was developed at the CLOUDS Laboratory, University of Melbourne, Australia. It is a well-known simulator for analyzing large-scale cloud DC parameters. Researchers use this simulation framework to analyze cloud infrastructure and explore application services' performance in a controlled environment. It provides basic classes for describing users, VMs, computational resources, DCs, and many other user- and management-level controls of the infrastructure. The key drawback of the CloudSim simulator is its lack of a graphical user interface (GUI) [26].

5.2. GreenCloud. GreenCloud is another open-source simulator, an extension of the network simulator NS2. It allows researchers and users to interact with, watch, and control the performance of DCs and clouds. Developed during the green cloud computing revolution, it offers an option to examine the energy efficiency of cloud DCs and provides information about the energy consumption of DC equipment [26]. The framework distinguishes the energy consumption related to the infrastructure from the computing energy of the DC [25]. The GreenCloud simulator is also used to build new scheduling and workload scenarios, allocate resources, and optimize network infrastructure and communication protocols; it focuses mainly on the energy consumption issues of the DCN. Its major limitation is that it works only with small DCs.


[17] K Chen A Singla A Singh et al ldquoOSA an optical switchingarchitecture for data center networks with unprecedentedflexibilityrdquo IEEEACM Transactions on Networking vol 22pp 498ndash511 2013

[18] G Wang D G Andersen M Kaminsky et al ldquoc-throughpart-time optics in data centersrdquo in Proceedings of the ACMSIGCOMM 2010 Conference pp 327ndash338 New Delhi IndiaSeptember 2010

HybTree

000E + 00700E ndash 02140E ndash 01210E ndash 01280E ndash 01350E ndash 01420E ndash 01490E ndash 01

Bloc

king

(dow

n)

03 06 09 12 150Load (down)

Figure 15 Blocking-down of ToR layer

12 Complexity

[19] C Kachris and I Tomkos ldquoA survey on optical interconnectsfor data centersrdquo IEEE Communications Surveys amp Tutorialsvol 14 no 4 pp 1021ndash1036 2012

[20] D Wu Designing Hybrid Data Center Networks for HighPerformance and Fault Tolerance Rice University HoustonTX USA 2020

[21] Y Liu J K Muppala M Veeraraghavan D Lin andM Hamdi Data Center Networks Topologies Architecturesand Fault-Tolerance Characteristics Springer Science ampBusiness Media Berlin Germany 2013

[22] Y Qin D Guo G Tang and B Ren ldquoTIO a VLC-enabledhybrid data center network architecturerdquo Tsinghua Scienceand Technology vol 24 no 4 pp 484ndash496 2019

[23] M Noormohammadpour and C S Raghavendra ldquoDatacentertraffic control understanding techniques and tradeoffsrdquo IEEECommunications Surveys amp Tutorials vol 20 pp 1492ndash15252017

[24] A Saeed N Dukkipati V Valancius V )e LamC Contavalli and A Vahdat ldquoCarousel scalable trafficshaping at end hostsrdquo in Proceedings of the Conference of theACM Special Interest Group on Data Communicationpp 404ndash417 Los Angeles CA USA August 2017

[25] U Sinha and M Shekhar ldquoComparison of various cloudsimulation tools available in cloud computingrdquo InternationalJournal of Advanced Research in Computer and Communi-cation Engineering vol 4 pp 171ndash176 2015

[26] M S Mandeep and S Bhatia ldquoA critical review amp analysis ofcloud computing simulatorsrdquo IJLTET pp 29ndash36 2016

[27] R Malhotra and P Jain ldquoStudy and comparison of variouscloud simulators available in the cloud computingrdquo Inter-national Journal of Advanced Research in Computer Scienceand Software Engineering vol 3 pp 347ndash350 2013

[28] L Wallentin M Happenhofer C Egger and J Fabini ldquoXmlmeets simulation concepts and architecture of the ibksimnetwork simulatorrdquo SNE vol 41 2010

Complexity 13


After receiving the packets from the source, the core switch transmits them to the intended aggregate switch based on the routing table defined in the core switch. The flow chart in Figure 6 shows how traffic travels in the DC network simulator that we developed.

Packets are arranged in queues when they arrive at the aggregate switch. The aggregate switch takes the routing information from the core switch and then forwards the packets to the ToR switch according to the queue. The ToR switch checks the packet's internal information and forwards the packet to the rack if the server the packet is looking for is found there; if not, the packet travels back from the ToR to the aggregate switch, which sends it to another aggregate switch serving the intended server, as shown in Figure 7. Compared with the hybrid architecture, the maximum traffic in this type of DC architecture occurs between the ToRs and the aggregate switches. The simulation ends when either the input simulation time is completed or the specific event has occurred the given number of times.
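The forwarding step just described can be sketched in a few lines of Python. The names here (`AggregateSwitch`, `rack_of`, a dict-based routing table) are our own illustration, not the simulator's actual API:

```python
from collections import deque

def rack_of(server, racks):
    """Return the ToR whose rack hosts `server`, or None if it is not local."""
    for tor, servers in racks.items():
        if server in servers:
            return tor
    return None

class AggregateSwitch:
    def __init__(self, routing_table, racks):
        self.queue = deque()          # packets wait here in FIFO order
        self.routing = routing_table  # server -> peer aggregate, from the core
        self.racks = racks            # ToR -> set of servers beneath it

    def forward(self, packet):
        """Queue the packet, then route it toward its destination ToR;
        if the server is not under a local ToR, bounce the packet to the
        peer aggregate switch, as in the basic tree design."""
        self.queue.append(packet)
        pkt = self.queue.popleft()
        tor = rack_of(pkt["dst"], self.racks)
        if tor is not None:
            return ("deliver", tor)   # ToR -> rack -> server
        return ("peer-aggregate", self.routing.get(pkt["dst"]))
```

A usage example: with `racks = {"ToR1": {"s1"}, "ToR2": {"s3"}}`, a packet for `s3` is delivered via `ToR2`, while a packet for an unknown server is handed to the peer aggregate switch named in the routing table.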

6.2. Simulation in c-Through. When the simulation starts, the source generates packets, and these packets are transmitted to the core switch. The shortest path algorithm is used for routing in the core switch to minimize the network load. The flow chart in Figure 8 shows how traffic travels in the DC network simulator that we developed.
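The paper states only that the core switch routes over the shortest path; a standard way to realize this is Dijkstra's algorithm over a link-cost graph, sketched below under the assumption of a simple `{node: {neighbor: cost}}` adjacency map (the tool's actual algorithm is not specified):

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over graph = {node: {neighbor: link_cost}}.
    Returns (total_cost, path) or (inf, []) if dst is unreachable."""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph[node].items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []
```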

After receiving the packets from the source, the core node transmits them to the intended aggregate switch based on the routing table defined in the core switch. Packets are arranged in queues when they arrive at the aggregate switch. The aggregate switch takes the routing information from the core switch and then forwards the packets to the ToR switch according to the queue. The ToR switch checks the packet's internal information and forwards the packet to the rack if the information the packet is looking for is found in the server; if not, EOE (electrical-optical-electrical) conversion occurs: the electrical packet is converted into an optical one and forwarded to the optical circuit switch. The optical packet is then converted back from optical to electrical and transmitted to an alternative ToR, as shown in Figure 9. The simulation ends when either the input simulation time completes or the specific event occurs.
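The c-Through ToR decision above (deliver locally, or perform E-O-E conversion and cross the OCS) can be sketched as follows; `tor_handle` and its packet fields are hypothetical names chosen for illustration:

```python
def tor_handle(packet, local_servers, ocs_peer):
    """Deliver locally if the destination is in this rack; otherwise model
    E-O-E conversion and send over the optical circuit switch to a peer ToR."""
    if packet["dst"] in local_servers:
        return ("rack", packet["dst"])
    # electrical -> optical conversion, transit over the OCS,
    # then optical -> electrical at the far ToR
    optical = {**packet, "domain": "optical"}
    electrical = {**optical, "domain": "electrical"}
    return ("ocs", ocs_peer, electrical)
```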

6.3. Simulation Parameters. At the start of the simulation, the source generates 2000 packets, increasing by a multiple of 2 per simulation run, and sends them to the core switch. The core switch uniformly distributes the packets and transmits them to the aggregate switch through a 10 GbE link. Packets travel from the aggregate switch to the ToR switch through a 10 GbE link, and racks are connected to ToRs through 10 GbE links. In the c-Through DC architecture, an optical link of 10 Gbps is used between two ToRs through the OCS, while in the basic tree architecture the link between the two aggregate switches is 40 GbE. The mean packet size in our simulation is 0.001953125, and at each switch or node the maximum packet limit of the queue is 1000.
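For reference, the parameters quoted above can be collected in one place; the dictionary layout and helper below are our own sketch, and only the values come from the simulation setup:

```python
# Link speeds per the simulation setup (Gbps / GbE).
LINKS_GBPS = {
    "core-aggregate": 10,
    "aggregate-tor": 10,
    "tor-rack": 10,
    "tor-ocs-tor (c-Through)": 10,
    "aggregate-aggregate (tree)": 40,
}
MEAN_PACKET_SIZE = 0.001953125   # mean packet size used in the simulation
QUEUE_LIMIT = 1000               # maximum packets queued per switch/node

def packets_per_run(runs, start=2000):
    """Source load doubles each simulation run: 2000, 4000, 8000, ..."""
    return [start * 2 ** i for i in range(runs)]
```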

7. Results and Discussion

In these graphs, mean waiting time-down refers to traffic moving downward from the core to the aggregate and from the ToR to the rack or OCS, whereas load-down represents the load of downward traffic on a link. The graphs show the behavior of downward-moving traffic against the load.

The graph shown in Figure 10 represents the behavior of downward traffic on the aggregate layer. In the graph, the dotted lines of the tree-based architecture show that the mean waiting time-down and the load-down increase on the aggregate layer compared with the c-Through

[Figure 6: Flow chart of basic tree DCN simulation. Nodes: Start, Source (packet generation), Core switch (routing table), Aggregate switch, ToR switch, Rack, Sink, Dropsink.]

Table 1: Comparison between open-source cloud simulators.

Parameter | CloudSim | GreenCloud | EMUSIM | iCanCloud | IBKSim
GUI support | NA | Limited support | Limited support | Full GUI support | Limited support
Language | Java | C++, OTcl | Java | OMNeT MPL, C++ | C++, XML
Platform | GridSim | NS2 | AEF, CloudSim | SIMCAN | NA
Power efficiency | Limited | Yes | Yes | Limited | Limited
Simulation time | Seconds | Tens of minutes | Tens of minutes | Tens of minutes | Depends on network size
Multi-DC comparisons | No | No | No | No | Available
Support for parallel experiments | No | No | No | Available | Available


hybrid architecture. This is because, in the tree architecture, packets arrive at the aggregate switch from the core and from other aggregate switches. Packets are arranged in queues; the more packets that arrive, the longer the earlier packets must wait in the queue, which increases the mean waiting time of the downward traffic. The mean waiting time-down increases more rapidly in the tree architecture than in the hybrid architecture; the hybrid architecture rises more gradually and shows more consistent behavior.
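The queueing effect described here can be illustrated with a minimal single-server FIFO model: as arrivals outpace the service rate, each packet inherits the residual backlog of its predecessors and the mean waiting time grows. Deterministic arrivals and a fixed service time are assumed purely for clarity:

```python
def mean_waiting_time(arrivals, service_time):
    """arrivals: sorted arrival instants; one server, FIFO discipline.
    Returns the average time a packet spends waiting before service."""
    free_at, total_wait = 0.0, 0.0
    for t in arrivals:
        start = max(t, free_at)        # wait if the server is still busy
        total_wait += start - t
        free_at = start + service_time
    return total_wait / len(arrivals)
```

With arrivals spaced at or above the service time, waiting is zero; halve the spacing and the waits accumulate, mirroring the steeper tree-architecture curves.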

On the other hand, the graph in Figure 11 shows the behavior of downward traffic at the ToR layer. The dotted lines of the tree architecture show that the mean waiting time-down and the load-down increase at a much higher rate on the ToR layer than in the c-Through hybrid architecture, and behave inconsistently. This is because transmitting packets from the aggregate switch, on top of serving the packets already in the ToR switch's queue, increases the load on the ToR switch. Moreover, in the tree architecture, packet communication between ToRs passes through the aggregate switches, which also increases the load on the ToR switch, whereas c-Through has the OCS as an alternative route for ToR-to-ToR communication. That is another factor in the inconsistent behavior of the tree architecture.

The graphs in Figures 12 and 13 show the behavior of upward traffic on the aggregate and ToR layers. In the tree architecture, the mean waiting time-up and the load-up in

[Figure 7: Simulated basic tree DC architecture. Core switch, aggregate switches, ToR 1-6, Rack 1-6.]

[Figure 8: Flow chart of hybrid DCN simulation. Nodes: Start, Source (packet generation), Core switch (routing table), Aggregate switch, ToR switch, Optical manager, Rack, Sink, Dropsink.]


both the aggregate and ToR layers increase irregularly: at the start they remain constant in both switches and then gradually increase. This gradual increase at the aggregate layer occurs because the traffic between the core and the aggregate is much less than that between the aggregate and the ToR, while in the hybrid case the packet service rate is greater than in the tree architecture and the core-to-aggregate traffic is also greater. In Figure 13, the tree curve increases in a discrepant manner and shows high variance. In the tree architecture, packet communication between ToRs passes through the aggregate switches, so ToR-to-ToR communication through the aggregate switches increases the mean waiting time-up and the load-up, whereas the c-Through architecture has the OCS as an alternative route for ToR-to-ToR communication, so the mean waiting time-up of the hybrid appears as a nearly straight line on the graph. That is another factor in the inconsistent behavior of the tree architecture.

Figures 14 and 15 show the blocking-down against the load-down at the aggregate and ToR layers. At the ToR layer, the hybrid architecture increases in a more consistent manner than the tree architecture, while at the aggregate layer both architectures show the same behavior and pattern.
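Blocking-down in these plots reflects drops at a finite queue (limit 1000 per switch in the simulations): a packet arriving at a full queue is discarded, and blocking is the dropped fraction of offered traffic. A toy model of that metric, with an assumed per-arrival drain rate standing in for the service process, might look like:

```python
def blocking_probability(n_offered, queue_limit, drain_rate):
    """Offer n_offered packets to a finite FIFO queue that drains
    `drain_rate` packets per arrival on average; return dropped / offered."""
    depth, dropped, credit = 0, 0, 0.0
    for _ in range(n_offered):
        credit += drain_rate          # accumulate fractional service
        served = int(credit)
        depth = max(0, depth - served)
        credit -= served
        if depth >= queue_limit:      # queue full: the arrival is blocked
            dropped += 1
        else:
            depth += 1
    return dropped / n_offered
```

With no draining, everything beyond the queue limit is blocked; with a drain rate matching the arrival rate, blocking stays at zero, which is the qualitative contrast the figures show between the overloaded tree ToRs and the OCS-relieved hybrid ToRs.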

[Figure 9: Simulated hybrid DC architecture. Core switch, aggregate switch, ToR 1-6, optical circuit switch.]

[Figure 10: Mean waiting time-down of the aggregate layer. Axes: load (down) vs. mean waiting time (down); hybrid and tree curves.]

[Figure 11: Mean waiting time-down of the ToR layer. Axes: load (down) vs. mean waiting time (down); hybrid and tree curves.]


[Figure 12: Mean waiting time-up of the aggregate layer. Axes: load (up) vs. mean waiting time (up); hybrid and tree curves.]

[Figure 13: Mean waiting time-up of the ToR layer. Axes: load (up) vs. mean waiting time (up); hybrid and tree curves.]

[Figure 14: Blocking-down of the aggregate layer. Axes: load-down vs. blocking-down; hybrid and tree curves.]


8. Conclusion

For this comparison, we first developed a tool that specifically measures the DC architecture parameters. We explored each switch component in the layered architectures and its characteristics. Furthermore, we analyzed the load, blocking, and mean waiting time behavior of the aggregate and ToR switches in both architectures through their traffic behavior. In the future, we can model other network parameters with this tool and compare the traffic and load parameters of the c-Through architecture with other fixed and flexible DCN topologies. The development of this research tool will help in future comparisons with other DC architectures.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors are grateful to the Deanship of Scientific Research, King Saud University, for funding this research work. The authors are also grateful to Taif University Researchers Supporting Project number (TURSP-2020215), Taif University, Taif, Saudi Arabia. This work was also supported by the Faculty of Computer Science and Information Technology, University of Malaya, under Postgraduate Research Grant (PG035-2016A).

References

[1] S. Basu, C. McArdle, and L. P. Barry, "Scalable OCS-based intra/inter data center network with optical ToR switches," in Proceedings of the 2016 18th International Conference on Transparent Optical Networks (ICTON), pp. 1-4, Warsaw, Poland, July 2016.

[2] B. Wang, Z. Qi, R. Ma, H. Guan, and A. V. Vasilakos, "A survey on data center networking for cloud computing," Computer Networks, vol. 91, pp. 528-547, 2015.

[3] L. Paraschis, "Advancements in data-center networking and the importance of optical interconnections," in Proceedings of the 39th European Conference and Exhibition on Optical Communication (ECOC 2013), pp. 1-3, London, UK, September 2013.

[4] N. Farrington, G. Porter, S. Radhakrishnan et al., "Helios: a hybrid electrical/optical switch architecture for modular data centers," ACM SIGCOMM Computer Communication Review, vol. 40, pp. 339-350, 2010.

[5] S. Kumar, G. Papen, K. Schmidtke, and C. Xie, "Intra-data center interconnects, networking, and architectures," in Optical Fiber Telecommunications VII, pp. 627-672, Elsevier, Amsterdam, Netherlands, 2020.

[6] N. Panahi, D. Careglio, and J. S. Pareta, "A hybrid packet/circuit optical transport architecture for DCN," in Proceedings of the 2019 21st International Conference on Transparent Optical Networks (ICTON), pp. 1-4, Angers, France, July 2019.

[7] A. Headquarters, "Cisco data center infrastructure 2.5 design guide," Cisco Validated Design, vol. I, 2007.

[8] J. Zhang, F. R. Yu, S. Wang, T. Huang, Z. Liu, and Y. Liu, "Load balancing in data center networks: a survey," IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 2324-2352, 2018.

[9] M. Shafiee and J. Ghaderi, "A simple congestion-aware algorithm for load balancing in datacenter networks," IEEE/ACM Transactions on Networking, vol. 25, no. 6, pp. 3670-3682, 2017.

[10] D. Li, Y. Li, J. Wu, S. Su, and J. Yu, "ESM: efficient and scalable data center multicast routing," IEEE/ACM Transactions on Networking, vol. 20, pp. 944-955, 2011.

[11] C. F. Lam, "Optical network technologies for datacenter networks," in Proceedings of the National Fiber Optic Engineers Conference, Los Angeles, CA, USA, March 2010.

[12] C. Kachris, K. Kanonakis, and I. Tomkos, "Optical interconnection networks in data centers: recent trends and future challenges," IEEE Communications Magazine, vol. 51, no. 9, pp. 39-45, 2013.

[13] Cisco, Classic Network Design Using Cisco Nexus 9000 Series Switches, Cisco, San Francisco, CA, USA, 2015.

[14] K. Bilal, S. U. R. Malik, O. Khalid et al., "A taxonomy and survey on green data center networks," Future Generation Computer Systems, vol. 36, pp. 189-208, 2014.

[15] D. Belabed, S. Secci, G. Pujolle, and D. Medhi, "Striking a balance between traffic engineering and energy efficiency in virtual machine placement," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 202-216, 2015.

[16] H. Goudarzi and M. Pedram, "Energy-efficient virtual machine replication and placement in a cloud computing system," in Proceedings of the 2012 IEEE Fifth International Conference on Cloud Computing, pp. 750-757, Honolulu, HI, USA, June 2012.

[17] K. Chen, A. Singla, A. Singh et al., "OSA: an optical switching architecture for data center networks with unprecedented flexibility," IEEE/ACM Transactions on Networking, vol. 22, pp. 498-511, 2013.

[18] G. Wang, D. G. Andersen, M. Kaminsky et al., "c-Through: part-time optics in data centers," in Proceedings of the ACM SIGCOMM 2010 Conference, pp. 327-338, New Delhi, India, September 2010.

[Figure 15: Blocking-down of the ToR layer. Axes: load (down) vs. blocking (down); hybrid and tree curves.]


[19] C. Kachris and I. Tomkos, "A survey on optical interconnects for data centers," IEEE Communications Surveys & Tutorials, vol. 14, no. 4, pp. 1021-1036, 2012.

[20] D. Wu, Designing Hybrid Data Center Networks for High Performance and Fault Tolerance, Rice University, Houston, TX, USA, 2020.

[21] Y. Liu, J. K. Muppala, M. Veeraraghavan, D. Lin, and M. Hamdi, Data Center Networks: Topologies, Architectures and Fault-Tolerance Characteristics, Springer Science & Business Media, Berlin, Germany, 2013.

[22] Y. Qin, D. Guo, G. Tang, and B. Ren, "TIO: a VLC-enabled hybrid data center network architecture," Tsinghua Science and Technology, vol. 24, no. 4, pp. 484-496, 2019.

[23] M. Noormohammadpour and C. S. Raghavendra, "Datacenter traffic control: understanding techniques and tradeoffs," IEEE Communications Surveys & Tutorials, vol. 20, pp. 1492-1525, 2017.

[24] A. Saeed, N. Dukkipati, V. Valancius, V. The Lam, C. Contavalli, and A. Vahdat, "Carousel: scalable traffic shaping at end hosts," in Proceedings of the Conference of the ACM Special Interest Group on Data Communication, pp. 404-417, Los Angeles, CA, USA, August 2017.

[25] U. Sinha and M. Shekhar, "Comparison of various cloud simulation tools available in cloud computing," International Journal of Advanced Research in Computer and Communication Engineering, vol. 4, pp. 171-176, 2015.

[26] M. S. Mandeep and S. Bhatia, "A critical review & analysis of cloud computing simulators," IJLTET, pp. 29-36, 2016.

[27] R. Malhotra and P. Jain, "Study and comparison of various cloud simulators available in the cloud computing," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, pp. 347-350, 2013.

[28] L. Wallentin, M. Happenhofer, C. Egger, and J. Fabini, "XML meets simulation: concepts and architecture of the ibksim network simulator," SNE, vol. 41, 2010.


Page 9: Designing of a Simulation Tool for the Performance Analysis of … · 2021. 4. 10. · networks made from electrical packet-based networks (Ethernet)andopticalcircuit-basednetworks[18]

hybrid architecture because in tree architecture thepackets arrive at the aggregate switch from the core andother aggregate switches Packets are arranged in queues)e more the packets come the more the previous packetshave to wait in the queue which increases the meanwaiting time of the downward traffic Mean waiting time-down increases more rapidly in tree architecture thanhybrid architecture Hybrid architecture increases lessgradually and shows more consistent behavior

While on the other hand the graph in Figure 11 shows thebehavior of downward traffic in ToR layer )e dotted lines ofthe tree architecture show that the mean waiting for time-down and the load-down increases with a much higher rate

on ToR layer as compared with the c-)rough hybrid ar-chitecture and show inconsistent manners )is is becausetransmission of packets from the aggregate switch despitetreating the packets that are already in the ToR switchrsquos queueincreases the load on the ToR switch Moreover in tree ar-chitecture packet communication between ToRs is throughaggregate switches that also increase the load on ToR switchwhile on the other hand c-)rough has OCS as its alternativeroute among the ToRs communication)at is also a factor ofinconsistent behavior of tree architecture

)e graph in Figures 12 and 13 shows the behavior ofupward traffic on the aggregate and ToR layer In tree ar-chitecture the mean waiting time-up and the load-up in

Core switch

Aggregateswitch

ToR 1 ToR 2 ToR 3 ToR 4 ToR 5 ToR 6

Rack 1 Rack 2 Rack 3 Rack 4 Rack 5 Rack 6

Figure 7 Simulated basic tree DC architecture

Start

Source

Core switch

Aggregate switch

ToRswitch

Rack

Routing table

Optical manager

Packet generation

Sink

Dropsink

Yes

Yes

Figure 8 Flow chart of hybrid DCN simulation

Complexity 9

both aggregate and ToR layers increase in an inappropriatemanner At the start it first remains constant in bothswitches and then gradually increases )is gradual increase

in aggregate is due to the reason the traffic between core toaggregate is much less than aggregate to ToR while in thecase of hybrid treating packet rate is greater than tree ar-chitecture and traffic between core to aggregate is greaterthan tree architecture In Figure 13 the graph shows anincrease in a discrepant manner and shows variance be-havior In tree architecture packet communication betweenToRs is through aggregate switches So the communicationamong ToR through aggregate switches increases the meanwaiting time-up and load-up whereas c-)rough architec-ture contains OCS as its alternative route among the ToRscommunication so the mean waiting time-up of the hybridon the graph is a straight line )at is also a factor of in-consistent behavior of tree architecture

In Figures 14 and 15 the graph shows the blocking-downand the load-down in ToR layer and in ToR layer hybridincreases in a more consistent manner than tree architecturewhile in the aggregate layer both layers show the samebehavior and pattern

Core switch

Aggregateswitch 1

ToR 1

Optical circuit switch

ToR 2 ToR 3 ToR 4 ToR 5 ToR 6

Figure 9 Simulated hybrid DC architecturee

HybTree

000E + 00200E ndash 05400E ndash 05600E ndash 05800E ndash 05100E ndash 04120E ndash 04140E ndash 04

Mea

n w

aitin

g tim

e (do

wn)

005 01 015 02 0250Load (down)

Figure 10 Mean waiting time-down of aggregate layer

HybTree

02 04 06 08 1 12 140Load (down)

000E + 00

200E ndash 02

400E ndash 02

600E ndash 02

800E ndash 02

100E ndash 01

120E ndash 01

140E ndash 01

Mea

n w

aitin

g tim

e (do

wn)

Figure 11 Mean waiting time-down of ToR layer

10 Complexity

HybTree

000E + 00100E ndash 02200E ndash 02300E ndash 02400E ndash 02500E ndash 02600E ndash 02700E ndash 02800E ndash 02900E ndash 02

Mea

n w

aitin

g tim

e (up

)

05 1 15 20Load (up)

Figure 12 Mean waiting time-up of aggregate layer

HybTree

01 02 03 04 050Load (up)

000E + 00500E ndash 02100E ndash 01150E ndash 01200E ndash 01250E ndash 01300E ndash 01350E ndash 01400E ndash 01

Mea

n w

aitin

g tim

e (up

)

Figure 13 Mean waiting time-up of ToR layer

HybTree

005 01 015 02 0250Load-down

000E + 00

200E ndash 01

400E ndash 01

600E ndash 01

800E ndash 01

100E + 00

Bloc

king

-dow

n

Figure 14 Blocking-down of aggregate layer

Complexity 11

8 Conclusion

For this intuitive comparison we first develop the tool thatspecifically measures the DC architecture parameters Weexplore each switch component in the layered architectureand its characteristics Furthermore we analyze the loadand blocking and mean waiting for time behavior of ag-gregate and ToR switches in both architectures throughtheir traffic behavior in both switches In the future we canperform modeling on other network parameters throughthis tool We can compare traffic and load parametersc-)rough architecture with other fixed and flexible to-pologies of the DCN)e development of this research toolwill help in the future for further comparisons with otherDC architectures

Data Availability

)e data used to support the findings of this study areavailable from the corresponding author upon request

Conflicts of Interest

)e authors declare that they have no conflicts of interest

Acknowledgments

)e authors are grateful to the Deanship of Scientific Re-search and King Saud University for funding this researchwork )e authors are also grateful to the Taif UniversityResearchers Supporting Project number (TURSP-2020215)Taif University Taif Saudi Arabia )is work was alsosupported by the Faculty of Computer Science and Infor-mation Technology University of Malaya under Postgrad-uate Research Grant (PG035-2016A)

References

[1] S Basu C McArdle and L P Barry ldquoScalable OCS-basedintrainter data center network with optical ToR switchesrdquo inProceedings of the 2016 18th International Conference onTransparent Optical Networks (ICTON) pp 1ndash4 WarsawPoland July 2016

[2] B Wang Z Qi R Ma H Guan and A V Vasilakos ldquoAsurvey on data center networking for cloud computingrdquoComputer Networks vol 91 pp 528ndash547 2015

[3] L Paraschis ldquoAdvancements in data-center networking andthe importance of optical interconnectionsrdquo in Proceedings ofthe 39th European Conference and Exhibition on OpticalCommunication (ECOC 2013) pp 1ndash3 London UK Sep-tember 2013

[4] N Farrington G Porter S Radhakrishnan et al ldquoHelios ahybrid electricaloptical switch architecture for modular datacentersrdquoACM SIGCOMMComputer Communication Reviewvol 40 pp 339ndash350 2010

[5] S Kumar G Papen K Schmidtke and C Xie ldquoIntra-datacenter interconnects networking and architecturesrdquo inOptical Fiber Telecommunications VII pp 627ndash672 ElsevierAmsterdam Netherlands 2020

[6] N Panahi D Careglio and J S Pareta ldquoA hybrid packetcircuit optical transport architecture for DCNrdquo in Proceedingsof the 2019 21st International Conference on TransparentOptical Networks (ICTON) pp 1ndash4 Angerrs France July2019

[7] A Headquarters ldquoCisco data center infrastructure 25 designguiderdquo Cisco Validated Design vol I 2007

[8] J Zhang F R Yu S Wang T Huang Z Liu and Y LiuldquoLoad balancing in data center networks a surveyrdquo IEEECommunications Surveys amp Tutorials vol 20 no 3pp 2324ndash2352 2018

[9] M Shafiee and J Ghaderi ldquoA simple congestion-aware al-gorithm for load balancing in datacenter networksrdquo IEEEACM Transactions on Networking vol 25 no 6 pp 3670ndash3682 2017

[10] D Li Y Li J Wu S Su and J Yu ldquoESM efficient and scalabledata center multicast routingrdquo IEEEACM Transactions onNetworking vol 20 pp 944ndash955 2011

[11] C F Lam ldquoOptical network technologies for datacenternetworksrdquo in Proceedings of the National Fiber Optic Engi-neers Conference Los Angeles CA USA March 2010

[12] C Kachris K Kanonakis and I Tomkos ldquoOptical inter-connection networks in data centers recent trends and futurechallengesrdquo IEEE Communications Magazine vol 51 no 9pp 39ndash45 2013

[13] Cisco Classic Network Design Using Cisco Nexus 9000 SeriesSwitches Cisco San Francisco CA USA 2015

[14] K Bilal S U R Malik O Khalid et al ldquoA taxonomy andsurvey on green data center networksrdquo Future GenerationComputer Systems vol 36 pp 189ndash208 2014

[15] D Belabed S Secci G Pujolle and D Medhi ldquoStriking abalance between traffic engineering and energy efficiency invirtual machine placementrdquo IEEE Transactions on Networkand Service Management vol 12 no 2 pp 202ndash216 2015

[16] H Goudarzi and M Pedram ldquoEnergy-efficient virtual ma-chine replication and placement in a cloud computing sys-temrdquo in Proceedings of the 2012 IEEE Fifth InternationalConference on Cloud Computing pp 750ndash757 Honolulu HIUSA June 2012

[17] K Chen A Singla A Singh et al ldquoOSA an optical switchingarchitecture for data center networks with unprecedentedflexibilityrdquo IEEEACM Transactions on Networking vol 22pp 498ndash511 2013

[18] G Wang D G Andersen M Kaminsky et al ldquoc-throughpart-time optics in data centersrdquo in Proceedings of the ACMSIGCOMM 2010 Conference pp 327ndash338 New Delhi IndiaSeptember 2010

HybTree

000E + 00700E ndash 02140E ndash 01210E ndash 01280E ndash 01350E ndash 01420E ndash 01490E ndash 01

Bloc

king

(dow

n)

03 06 09 12 150Load (down)

Figure 15 Blocking-down of ToR layer

12 Complexity

[19] C Kachris and I Tomkos ldquoA survey on optical interconnectsfor data centersrdquo IEEE Communications Surveys amp Tutorialsvol 14 no 4 pp 1021ndash1036 2012

[20] D Wu Designing Hybrid Data Center Networks for HighPerformance and Fault Tolerance Rice University HoustonTX USA 2020

[21] Y Liu J K Muppala M Veeraraghavan D Lin andM Hamdi Data Center Networks Topologies Architecturesand Fault-Tolerance Characteristics Springer Science ampBusiness Media Berlin Germany 2013

[22] Y Qin D Guo G Tang and B Ren ldquoTIO a VLC-enabledhybrid data center network architecturerdquo Tsinghua Scienceand Technology vol 24 no 4 pp 484ndash496 2019

[23] M Noormohammadpour and C S Raghavendra ldquoDatacentertraffic control understanding techniques and tradeoffsrdquo IEEECommunications Surveys amp Tutorials vol 20 pp 1492ndash15252017

[24] A Saeed N Dukkipati V Valancius V )e LamC Contavalli and A Vahdat ldquoCarousel scalable trafficshaping at end hostsrdquo in Proceedings of the Conference of theACM Special Interest Group on Data Communicationpp 404ndash417 Los Angeles CA USA August 2017

[25] U Sinha and M Shekhar ldquoComparison of various cloudsimulation tools available in cloud computingrdquo InternationalJournal of Advanced Research in Computer and Communi-cation Engineering vol 4 pp 171ndash176 2015

[26] M S Mandeep and S Bhatia ldquoA critical review amp analysis ofcloud computing simulatorsrdquo IJLTET pp 29ndash36 2016

[27] R Malhotra and P Jain ldquoStudy and comparison of variouscloud simulators available in the cloud computingrdquo Inter-national Journal of Advanced Research in Computer Scienceand Software Engineering vol 3 pp 347ndash350 2013

[28] L Wallentin M Happenhofer C Egger and J Fabini ldquoXmlmeets simulation concepts and architecture of the ibksimnetwork simulatorrdquo SNE vol 41 2010

Complexity 13

Page 10: Designing of a Simulation Tool for the Performance Analysis of … · 2021. 4. 10. · networks made from electrical packet-based networks (Ethernet)andopticalcircuit-basednetworks[18]

both the aggregate and ToR layers increase unevenly. At the start, the mean waiting time remains constant in both switches and then rises gradually. This gradual increase at the aggregate layer occurs because the core-to-aggregate traffic is much lower than the aggregate-to-ToR traffic; in the hybrid architecture, by contrast, the packet rate is higher than in the tree architecture and the core-to-aggregate traffic is correspondingly greater. In Figure 13, the tree curve rises erratically and shows high variance. In the tree architecture, packets exchanged between ToRs must travel through the aggregate switches, which drives up both the mean waiting time-up and the load-up, whereas the c-Through architecture provides the OCS as an alternative route for ToR-to-ToR communication, so the mean waiting time-up of the hybrid appears as a nearly straight line. The absence of such a bypass is a further factor behind the tree architecture's inconsistent behavior.

Figures 14 and 15 plot the blocking-down against the load-down. At the ToR layer, the hybrid increases more consistently than the tree architecture, while at the aggregate layer both architectures show the same behavior and pattern.
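The qualitative trends in these curves (waiting time growing with load, and blocking rising toward saturation) can be illustrated with textbook queueing formulas. The simulator's internal model is not specified here, so the sketch below is only a generic yardstick, not the authors' method: it treats a switch port as an M/M/1 queue for mean waiting time and as an Erlang-B loss system for blocking, both of which are our assumptions.

```python
def mm1_mean_wait(load: float, service_time: float = 1.0) -> float:
    """Mean time a packet waits in queue at an M/M/1 switch port.

    `load` is the utilisation rho = lambda/mu and must be < 1;
    the wait is W_q = rho * service_time / (1 - rho), which explains
    the sharp upturn of the curves as load approaches saturation.
    """
    if not 0 <= load < 1:
        raise ValueError("M/M/1 requires load in [0, 1)")
    return load * service_time / (1.0 - load)


def erlang_b(load: float, servers: int) -> float:
    """Erlang-B blocking probability for `servers` parallel channels
    offered `load` erlangs, computed with the numerically stable
    recursion B(0) = 1, B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = load * b / (n + load * b)
    return b
```

For example, raising the load from 0.5 to 0.9 increases `mm1_mean_wait` from 1.0 to 9.0 service times, and `erlang_b` falls as channels are added at a fixed offered load, mirroring how the hybrid's extra OCS path keeps its blocking curve flatter than the tree's.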

Figure 9: Simulated hybrid DC architecture (a core switch above the aggregate switches, with ToR 1–ToR 6 also interconnected through an optical circuit switch).

Figure 10: Mean waiting time-down of the aggregate layer (mean waiting time (down) versus load (down), Hyb vs. Tree).

Figure 11: Mean waiting time-down of the ToR layer (mean waiting time (down) versus load (down), Hyb vs. Tree).


Figure 12: Mean waiting time-up of the aggregate layer (mean waiting time (up) versus load (up), Hyb vs. Tree).

Figure 13: Mean waiting time-up of the ToR layer (mean waiting time (up) versus load (up), Hyb vs. Tree).

Figure 14: Blocking-down of the aggregate layer (blocking-down versus load-down, Hyb vs. Tree).


8. Conclusion

For this comparison, we first developed a tool that specifically measures the DC architecture parameters. We explored each switch component in the layered architecture and its characteristics. Furthermore, we analyzed the load, blocking, and mean waiting time behavior of the aggregate and ToR switches in both architectures through their traffic behavior. In the future, we can model other network parameters with this tool and compare the traffic and load parameters of the c-Through architecture with other fixed and flexible DCN topologies. The development of this research tool will facilitate future comparisons with other DC architectures.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors are grateful to the Deanship of Scientific Research, King Saud University, for funding this research work. The authors are also grateful to Taif University Researchers Supporting Project number (TURSP-2020215), Taif University, Taif, Saudi Arabia. This work was also supported by the Faculty of Computer Science and Information Technology, University of Malaya, under Postgraduate Research Grant (PG035-2016A).

References

[1] S. Basu, C. McArdle, and L. P. Barry, "Scalable OCS-based intra/inter data center network with optical ToR switches," in Proceedings of the 2016 18th International Conference on Transparent Optical Networks (ICTON), pp. 1–4, Warsaw, Poland, July 2016.

[2] B. Wang, Z. Qi, R. Ma, H. Guan, and A. V. Vasilakos, "A survey on data center networking for cloud computing," Computer Networks, vol. 91, pp. 528–547, 2015.

[3] L. Paraschis, "Advancements in data-center networking and the importance of optical interconnections," in Proceedings of the 39th European Conference and Exhibition on Optical Communication (ECOC 2013), pp. 1–3, London, UK, September 2013.

[4] N. Farrington, G. Porter, S. Radhakrishnan et al., "Helios: a hybrid electrical/optical switch architecture for modular data centers," ACM SIGCOMM Computer Communication Review, vol. 40, pp. 339–350, 2010.

[5] S. Kumar, G. Papen, K. Schmidtke, and C. Xie, "Intra-data center interconnects, networking, and architectures," in Optical Fiber Telecommunications VII, pp. 627–672, Elsevier, Amsterdam, Netherlands, 2020.

[6] N. Panahi, D. Careglio, and J. S. Pareta, "A hybrid packet/circuit optical transport architecture for DCN," in Proceedings of the 2019 21st International Conference on Transparent Optical Networks (ICTON), pp. 1–4, Angers, France, July 2019.

[7] A. Headquarters, "Cisco data center infrastructure 2.5 design guide," Cisco Validated Design, vol. I, 2007.

[8] J. Zhang, F. R. Yu, S. Wang, T. Huang, Z. Liu, and Y. Liu, "Load balancing in data center networks: a survey," IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 2324–2352, 2018.

[9] M. Shafiee and J. Ghaderi, "A simple congestion-aware algorithm for load balancing in datacenter networks," IEEE/ACM Transactions on Networking, vol. 25, no. 6, pp. 3670–3682, 2017.

[10] D. Li, Y. Li, J. Wu, S. Su, and J. Yu, "ESM: efficient and scalable data center multicast routing," IEEE/ACM Transactions on Networking, vol. 20, pp. 944–955, 2011.

[11] C. F. Lam, "Optical network technologies for datacenter networks," in Proceedings of the National Fiber Optic Engineers Conference, Los Angeles, CA, USA, March 2010.

[12] C. Kachris, K. Kanonakis, and I. Tomkos, "Optical interconnection networks in data centers: recent trends and future challenges," IEEE Communications Magazine, vol. 51, no. 9, pp. 39–45, 2013.

[13] Cisco, Classic Network Design Using Cisco Nexus 9000 Series Switches, Cisco, San Francisco, CA, USA, 2015.

[14] K. Bilal, S. U. R. Malik, O. Khalid et al., "A taxonomy and survey on green data center networks," Future Generation Computer Systems, vol. 36, pp. 189–208, 2014.

[15] D. Belabed, S. Secci, G. Pujolle, and D. Medhi, "Striking a balance between traffic engineering and energy efficiency in virtual machine placement," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 202–216, 2015.

[16] H. Goudarzi and M. Pedram, "Energy-efficient virtual machine replication and placement in a cloud computing system," in Proceedings of the 2012 IEEE Fifth International Conference on Cloud Computing, pp. 750–757, Honolulu, HI, USA, June 2012.

[17] K. Chen, A. Singla, A. Singh et al., "OSA: an optical switching architecture for data center networks with unprecedented flexibility," IEEE/ACM Transactions on Networking, vol. 22, pp. 498–511, 2013.

[18] G. Wang, D. G. Andersen, M. Kaminsky et al., "c-Through: part-time optics in data centers," in Proceedings of the ACM SIGCOMM 2010 Conference, pp. 327–338, New Delhi, India, September 2010.

Figure 15: Blocking-down of the ToR layer (blocking (down) versus load (down), Hyb vs. Tree).


[19] C. Kachris and I. Tomkos, "A survey on optical interconnects for data centers," IEEE Communications Surveys & Tutorials, vol. 14, no. 4, pp. 1021–1036, 2012.

[20] D. Wu, Designing Hybrid Data Center Networks for High Performance and Fault Tolerance, Rice University, Houston, TX, USA, 2020.

[21] Y. Liu, J. K. Muppala, M. Veeraraghavan, D. Lin, and M. Hamdi, Data Center Networks: Topologies, Architectures and Fault-Tolerance Characteristics, Springer Science & Business Media, Berlin, Germany, 2013.

[22] Y. Qin, D. Guo, G. Tang, and B. Ren, "TIO: a VLC-enabled hybrid data center network architecture," Tsinghua Science and Technology, vol. 24, no. 4, pp. 484–496, 2019.

[23] M. Noormohammadpour and C. S. Raghavendra, "Datacenter traffic control: understanding techniques and tradeoffs," IEEE Communications Surveys & Tutorials, vol. 20, pp. 1492–1525, 2017.

[24] A. Saeed, N. Dukkipati, V. Valancius, V. The Lam, C. Contavalli, and A. Vahdat, "Carousel: scalable traffic shaping at end hosts," in Proceedings of the Conference of the ACM Special Interest Group on Data Communication, pp. 404–417, Los Angeles, CA, USA, August 2017.

[25] U. Sinha and M. Shekhar, "Comparison of various cloud simulation tools available in cloud computing," International Journal of Advanced Research in Computer and Communication Engineering, vol. 4, pp. 171–176, 2015.

[26] M. S. Mandeep and S. Bhatia, "A critical review & analysis of cloud computing simulators," IJLTET, pp. 29–36, 2016.

[27] R. Malhotra and P. Jain, "Study and comparison of various cloud simulators available in the cloud computing," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, pp. 347–350, 2013.

[28] L. Wallentin, M. Happenhofer, C. Egger, and J. Fabini, "XML meets simulation: concepts and architecture of the IBKSim network simulator," SNE, vol. 41, 2010.
