S.1 Technology Description
A Local Area Network (LAN) is a network of computers within the same local area. A flat LAN has a significant disadvantage: as it grows, its traffic slows the whole network down. To address this problem, an alternative was developed: the Virtual Local Area Network (VLAN), which offers a solution to broadcast traffic without relying solely on routers. Traditionally, operators used a hub or a repeater to connect the workstations on a LAN. These devices propagate all incoming data throughout the network. If two stations at different ends attempt to send information at the same time, a collision occurs, and the data under transmission is lost. Once a collision occurs, hubs and repeaters propagate it throughout the entire segment.

The collision eventually resolves, after which the stations resend their information, but this costs a great deal of time and resources. Researchers developed ways of containing the problem so that collisions do not travel across all workstations: a bridge or a switch divides the network into separate collision domains and does not forward collisions to the other segments. Stations still receive broadcasts, however, as well as multicasts, which serve a special purpose. A further remedy is the router, which prevents broadcasts and multicasts from crossing into other parts of the system.

S.2 Standards of VLANs

Establishing standards is an important way of securing a VLAN against attacks. Standards ensure that confidential information remains confidential and that unintended parties have no access to it. For this reason, a number of standards govern VLAN technology, and they have largely maintained the security of such systems.

One example of such a networking standard is IEEE 802.1Q, which supports virtual LANs (VLANs) on Ethernet networks. The standard defines a system of tagging for Ethernet frames, together with the procedures that bridges and switches follow when handling tagged frames. The standard also incorporates IEEE 802.1p, a quality-of-service prioritization scheme, and contains the necessary information about the Generic Attribute Registration Protocol.

Some parts of a network are VLAN-aware, meaning they handle VLAN tags; in other words, they are IEEE 802.1Q conformant. When a frame enters a VLAN-aware portion of the network, a tag is added to indicate its VLAN membership. A frame's VLAN classification may be determined by its port or by its protocol. Each frame must belong to exactly one VLAN so that it can be distinguished from the others. Frames that arrive without a VLAN tag are assumed to belong to the default, or native, VLAN.
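The 802.1Q tag itself is a 4-byte field: a 16-bit tag protocol identifier (TPID, 0x8100) followed by a 16-bit tag control field holding a 3-bit priority (the 802.1p value), a 1-bit drop-eligible flag, and a 12-bit VLAN identifier. The following minimal sketch builds and parses such a tag; the function names are illustrative, not part of any standard API.

```python
import struct

TPID = 0x8100  # EtherType value that marks an 802.1Q-tagged frame

def build_tag(vid, pcp=0, dei=0):
    """Build the 4-byte 802.1Q tag: 16-bit TPID + 16-bit tag control info.
    TCI = 3-bit priority (PCP) | 1-bit drop-eligible (DEI) | 12-bit VLAN ID."""
    assert 0 <= vid <= 4095 and 0 <= pcp <= 7 and dei in (0, 1)
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", TPID, tci)

def parse_tag(tag):
    """Return (pcp, dei, vid) from a 4-byte 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID, "not an 802.1Q tag"
    return (tci >> 13, (tci >> 12) & 1, tci & 0x0FFF)

tag = build_tag(vid=100, pcp=5)
print(tag.hex())       # 8100a064
print(parse_tag(tag))  # (5, 0, 100)
```

The 12-bit VLAN identifier is what limits a single 802.1Q network to 4094 usable VLANs (IDs 0 and 4095 are reserved).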

The IEEE 802.1 working group of IEEE 802 developed this standard and has revised it several times to meet its intended use. Among the revisions is 802.1Q-2014, which incorporated IEEE 802.1aq (Shortest Path Bridging) in accordance with the 802.1D standard. In their 2012 book 802.1aq Shortest Path Bridging Design and Evolution: The Architect's Perspective, David Allan and Nigel Bragg described Shortest Path Bridging as one of the most significant enhancements in Ethernet's history.

Another important standard is VMware's standard switch, which is designed to solve a number of problems within VLAN systems. Because operators must expect attacks of all kinds, these switches are specially designed to protect VLANs, enabling them to withstand several well-known attacks. However, the fact that VLANs enjoy this kind of protection does not make the virtual machines invulnerable to every other possible attack. The principal attacks, and how the standard switches respond to them, are described below.

The first is MAC flooding, in which an attacker floods the switch with bogus packets whose source MAC addresses appear to come from many different origins. Most physical switches use a content-addressable memory (CAM) table that maps each learned MAC address to a port. When this table fills up, the switch may enter a fully open state in which it broadcasts every packet on all ports, letting the attacker observe the switch's traffic; in the end, many packets leak across the VLANs. VMware standard switches do not learn MAC addresses from observed traffic, so they are not vulnerable to this attack.
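The failure mode described above can be illustrated with a toy model of a learning switch whose CAM table has bounded capacity; the class and its capacity figure are illustrative, not taken from any real switch.

```python
class Switch:
    """Toy model of a switch with a bounded CAM (MAC-learning) table.
    When the table overflows, the switch 'fails open' and floods frames
    out of every port, which is what a MAC-flooding attack exploits."""

    def __init__(self, cam_capacity):
        self.cam = {}                # learned MAC address -> port
        self.capacity = cam_capacity

    def learn(self, mac, port):
        if len(self.cam) < self.capacity:
            self.cam[mac] = port     # normal learning

    def fails_open(self):
        # Table full: unknown destinations are now flooded to all ports.
        return len(self.cam) >= self.capacity

sw = Switch(cam_capacity=1024)
# Attacker sends frames with thousands of bogus source MACs from one port.
for i in range(5000):
    sw.learn(mac=f"02:00:00:00:{i >> 8:02x}:{i & 0xff:02x}", port=7)
print(sw.fails_open())  # True: every new frame is now visible to the attacker
```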

Double-encapsulation (VLAN-hopping) attacks occur when an attacker creates a double-encapsulated packet in which the VLAN identifier in the inner tag is different from the VLAN identifier in the outer tag. For backward compatibility, native VLANs strip the outer tag from transmitted packets unless configured to do otherwise.

When a native VLAN switch strips the outer tag, only the inner tag is left, and that inner tag routes the packet to a different VLAN than the one identified in the now-missing outer tag.

VMware standard switches drop any double-encapsulated frames that a virtual machine attempts to send on a port configured for a specific VLAN. Therefore, they are not vulnerable to this type of attack.
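The drop rule described above can be sketched as a simple ingress check: on a port configured for a specific VLAN, any frame from a guest that already carries an 802.1Q tag is rejected. The frame layout (14-byte Ethernet header, EtherType at bytes 12-13) is standard; the function names are illustrative.

```python
TPID = 0x8100  # 802.1Q tag protocol identifier

def frame(dst, src, ethertype, payload=b""):
    """Assemble a minimal Ethernet frame: 6-byte dst, 6-byte src, 2-byte type."""
    return dst + src + ethertype.to_bytes(2, "big") + payload

def vlan_port_accepts(frame_bytes):
    """Ingress rule sketched from the text: on a port assigned to a specific
    VLAN, drop any frame that is already tagged, since it could be the outer
    shell of a double-encapsulated (VLAN-hopping) frame."""
    ethertype = int.from_bytes(frame_bytes[12:14], "big")
    return ethertype != TPID

dst = bytes(6)
src = bytes.fromhex("020000000001")
plain = frame(dst, src, 0x0800)                              # ordinary IPv4 frame
tagged = frame(dst, src, TPID, (5 << 13 | 200).to_bytes(2, "big"))  # pre-tagged
print(vlan_port_accepts(plain))   # True  -> forwarded
print(vlan_port_accepts(tagged))  # False -> dropped
```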

Another attack to consider is the multicast brute-force attack, in which an attacker floods a switch with multicast frames in the hope that some will leak outside their intended broadcast domain. Whether a switch resists this attack depends on whether it confines frames to the correct broadcast domain; switches that do not remain vulnerable. This section reveals the importance of creating standards, and of developing new ones, to help protect the system; it also shows that some attacks persist rather than yielding to the standards applied against them.

S.3 Implementation/Design

S.3.1 Performance

A VLAN can reduce the traffic sent to unnecessary destinations when there are large numbers of broadcasts and multicasts. For instance, if a broadcast domain contains ten users but only five of them need a given broadcast, placing the other five on a separate VLAN reduces the traffic [Passmore et al (3Com report)]. Routers require more processing than switches, which reduces performance; because a VLAN can use switches to contain broadcast domains, it reduces the number of routers required.

S.3.2 Formation of Virtual Workgroups

Cross-functional product development is a very common phenomenon in today's world, drawing on departments such as marketing, sales, accounting, and research. These workgroups form for a short period, during which the group records a high volume of communication. Setting up a VLAN for the group's broadcasts and multicasts ensures that the traffic remains within the group. With a VLAN in control, incorporating members into a workgroup is very easy; the alternative, in the absence of a VLAN, is to physically move the workgroup members closer together.

However, virtual workgroups are rarely problem-free. Consider a situation in which one member of the workgroup operates from the fourth floor while the other members work on the second floor of the same building. The printers and other shared resources will naturally be located on the second floor, so the fourth-floor user will suffer considerable inconvenience.

Researchers associate another problem of virtual workgroups with the implementation of centralized server farms, which concentrate servers and resources in one place. Centralization has a number of advantages: efficiency, security, a resilient power supply, and a proper operating environment. The potential problem is that a centralized server farm may need to belong to more than one VLAN group, whereas operators typically place it on a single VLAN [Netreference Inc. article].

S.3.3 Simplified Administration

Seventy percent of network costs are a result of adds, moves, and changes of users in the network [Buerger]. Reconfiguration of routers and hubs is necessary whenever such a move or change occurs; if the move occurs within a VLAN, however, reconfiguration may be unnecessary. How much administrative work a VLAN reduces or eliminates depends on the type of VLAN [Cisco white paper]. The power of a VLAN also depends on the creation of good management tools, and alongside the savings, VLANs introduce administrative complexity of their own [Passmore et al (3Com report)].

S.3.4 Security

Periodically, a network may broadcast sensitive data, in which case it is important to keep outsiders from accessing the information. VLANs offer a practical way of controlling broadcast domains and thus restricting who receives such broadcasts.

S.3.5 The Working Mechanism of a VLAN

The process begins when data from a workstation reaches the VLAN. With explicit tagging, a VLAN identifier is attached to the data to indicate its source VLAN. It is also possible to determine a frame's VLAN by implicit tagging, a method that does not tag the data at all but instead uses the port on which the data arrives to determine its VLAN. Tagging can be based on a number of criteria: the port the data arrives on, the Media Access Control (MAC) field, the network address, or a combination of related fields.

The classification method in question matters when classifying VLANs. For tagging by these different methods to be possible, a mapping between each VLAN and the corresponding field must be kept up to date in the bridge's database. If tagging is by port, for example, the database must record which ports belong to which VLAN. This database is the filtering database, and maintaining it is the responsibility of the bridges; in addition, all bridges in the VLAN should hold consistent database information. As in ordinary LAN operation, the bridge is responsible for deciding where the data heads next. The bridge then adds the VLAN identifier and sends the data onward: if the data is destined for a VLAN-aware device, the bridge adds the identifier; otherwise, it may send the data without one.
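The port-based case above can be sketched in a few lines: the bridge keeps an up-to-date port-to-VLAN mapping (the filtering database) and tags a frame only when the next hop is VLAN-aware. All names and VLAN numbers here are illustrative.

```python
# Port-based filtering database: ingress port -> VLAN ID.
filtering_db = {1: 10, 2: 10, 3: 20, 4: 20}

def classify(port):
    """Implicit tagging by port: look the ingress port up in the database."""
    return filtering_db[port]

def forward(port, next_hop_is_vlan_aware):
    """Decide how the bridge sends a frame onward."""
    vid = classify(port)
    if next_hop_is_vlan_aware:
        return f"frame tagged VLAN {vid}"
    return "frame sent untagged"

print(forward(3, next_hop_is_vlan_aware=True))   # frame tagged VLAN 20
print(forward(1, next_hop_is_vlan_aware=False))  # frame sent untagged
```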

To understand the working mechanism of a VLAN, it is important to study the types of VLANs, the connections between devices on VLANs, the filtering database, and tagging. Tagging, the process of identifying the VLAN from which data originates, is the most important basis for understanding the technology.

S.2 Standards

Standardization in cloud virtualization is a continuous practice for which there are no fixed standards; in other words, it is dynamic, corresponding to ever-advancing technology. The International Organization for Standardization (ISO) is the body responsible and has produced new standards, though in doing so it sometimes confuses the many people who fail to understand some of them.

The Payment Card Industry Data Security Standard (PCI DSS), maintained by the PCI Security Standards Council, is one standard released in the recent past whose application to cloud virtualization has caused confusion. PCI DSS is a worldwide standard that helps secure the processing of payment cards so that fraud is minimal or absent altogether. It does so by controlling the environment in which businesses hold cardholder data. The main intention of PCI DSS is to ensure that cardholder data is safe from disclosure to any unintended person; it applies to cardholder data environments and operates under strict conditions.

Other standards base their standardization on the frequent evolution of cloud virtualization and computing. The National Institute of Standards and Technology (NIST) defines cloud computing as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources…that can be rapidly provisioned and released with minimal management effort or service provider interaction," a definition NIST published in September 2011 [Mell2011]. The same body also calls cloud computing an "evolving paradigm." This standard views cloud virtualization as operating through diversification, which helps keep business transactions confidential, since an unintended person cannot follow the patterns and commit fraud.

Still on standardization, ISO differs from NIST in that it classifies cloud services into additional categories, including "network as a service" (NaaS) and "data storage as a service" (DSaaS), and it expands NIST's narrower definition by incorporating community clouds (ISO/IEC 17788).

S.3 Implementation/Design

The design of a cloud virtualization system enables it to create a "virtual machine" that acts like a real computer, with the aid of software referred to as a "virtual machine manager" (VMM) or "hypervisor." The general mode of operation is to employ a single physical server that in turn runs a large number of virtual servers; this is possible because each virtual server, the virtual machine, has its own installed operating system. Each virtual machine acts as a guest that the underlying system hosts. Typically a single real machine controls a number of virtual machines, and the VMMs can move virtual machines between hosts when conditions require more resources. An IaaS offering is implemented successfully in this manner.
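The host/guest relationship described above can be sketched as a toy placement routine: one physical host runs several guests, and a VMM starts (or moves) a guest on whichever host still has capacity. All class names, host names, and RAM figures here are illustrative.

```python
class Host:
    """A physical server that hosts guest virtual machines."""

    def __init__(self, name, ram_gb):
        self.name, self.ram_gb = name, ram_gb
        self.guests = []  # virtual machines this host currently runs

    def free_ram(self):
        return self.ram_gb - sum(vm["ram_gb"] for vm in self.guests)

def place(vm, hosts):
    """A VMM's job in miniature: run the VM on the first host with capacity."""
    for host in hosts:
        if host.free_ram() >= vm["ram_gb"]:
            host.guests.append(vm)
            return host.name
    raise RuntimeError("no host has enough free resources")

hosts = [Host("host-a", 32), Host("host-b", 64)]
print(place({"name": "guest-1", "ram_gb": 24}, hosts))  # host-a
print(place({"name": "guest-2", "ram_gb": 16}, hosts))  # host-b (host-a is full)
```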

The underlying system, the so-called host, can be implemented in a number of ways. The VMM can be implemented through the following methods:

Running directly on the hardware, so that the underlying operating system is a very small one; examples include VMware vSphere (also called "ESXi") and Xen.

Making the VMM part of an existing operating system, as with Linux's Kernel-based Virtual Machine (KVM).

Running the VMM on top of a host operating system; an example of this method is VirtualBox.

Scaling up these mechanisms is a potential way of managing the software. Hypervisor management builds on an old idea and depends on a number of tools, including CloudForms, Red Hat Enterprise Virtualization, and OpenStack. The term virtualization got its origin from work by IBM in the 1960s, and virtualization is now valuable thanks to advances in CPUs and networks. A Dilbert strip of 2008-02-12 even used virtualization in its storyline, which suggests how mainstream the idea has become.

To avoid security issues, operators should implement VMMs correctly, taking particular care with shared memory. The VMM should fill memory with worthless data (usually zeros) before handing it to a new session; this is the best way of preventing one guest from reading another's leftover data. Pages that must be treated as "shared pages" should be "read-only" or "copy-on-write." Read-only pages are easy to implement with little or no risk, although pages accessible to multiple guests can create covert channels in VMMs. (Gunnar Hellekson)
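The zero-before-reuse rule above is simple enough to sketch directly: a page of "physical" memory released by one guest is scrubbed before any other guest can see it. The page size and function name are illustrative.

```python
PAGE_SIZE = 4096  # bytes per page in this toy model

def release_page(page):
    """Scrub a page when a guest gives it back, so no data leaks
    to the next guest that receives the same physical page."""
    page[:] = bytes(PAGE_SIZE)  # overwrite the whole page with zeros
    return page

# Guest 1 leaves data behind in a page it owned...
page = bytearray(b"guest-1 secret".ljust(PAGE_SIZE, b"\x00"))
release_page(page)
# ...but after release, the next guest sees only zeros.
print(page.count(0) == PAGE_SIZE)  # True
```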

Most operators resort to hardware virtualization rather than containerization to secure the system, since the two approaches differ greatly: in a hardware-virtualized system, an attack from a guest can succeed only if the CPU malfunctions or the VMM itself is faulty [Perez-Botero2013].

Virtualization is not a strictly necessary condition for the cloud, despite what many perceive; indeed, its use is decreasing in some companies that base their computation on "platform-as-a-service." Almost everything about the cloud is likely to change in the near future, so developers should work hard to keep up with the changes expected over the next few years.

The system bases the operation of each virtual server on interfaces that software provides to it; examples include central processing units, hard drives, CDs, and network interfaces. The physical server translates I/O on behalf of the virtual servers. The resources a virtual machine depends on are not fixed, as they can change when conditions demand; an alternative to changing resources live is simply rebooting the machine to pick up the new resources. The main challenge emerges when a single virtualization system must supply a large amount of memory, wide network bandwidth, and many other resources. To avoid this problem, a single system should run at most about twenty machines; conversely, hosting eight machines or fewer may leave the system underloaded.
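The sizing rule of thumb above can be captured in a tiny helper. The 8 and 20 figures come from the text itself; they are guidelines, not hard limits.

```python
MIN_GUESTS, MAX_GUESTS = 8, 20  # rule-of-thumb bounds from the text

def host_sizing(n_guests):
    """Classify a host by how many guest VMs it runs."""
    if n_guests > MAX_GUESTS:
        return "overloaded: split guests across more hosts"
    if n_guests <= MIN_GUESTS:
        return "underloaded: consolidate onto fewer hosts"
    return "within the recommended range"

print(host_sizing(25))  # overloaded: split guests across more hosts
print(host_sizing(12))  # within the recommended range
print(host_sizing(5))   # underloaded: consolidate onto fewer hosts
```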

Virtualization represents a shift: new servers can operate independently of external physical constraints, which reduces the related barriers. Operators can also simply abandon virtual servers that appear faulty, with the possibility of reducing the costs incurred.

Researchers also associate cloud virtualization with a number of disadvantages. The major demerit is the expense an individual or organization must incur to build an effective cluster; it demands a great deal to initiate. Creating a standard cloud virtualization setup requires purchasing very powerful machines that can compute at scale, with large amounts of RAM. To ensure that machines recover quickly after any failure and migrate seamlessly, designers should back them with a SAN. Managing virtualization also requires more attention than a mere physical server, which makes it difficult; finally, only limited types of service can undergo cloud virtualization.

S.4 Users and developers

Cloud virtualization is an aspect of networking technology that many companies embrace to develop their respective firms. Companies applying cloud virtualization include Microsoft, Red Hat, Amazon, Virtual Bridges, Parallels, and Proxmox.

The table below summarizes how these companies operate based on this technology.

Company          | Contribution / technology
-----------------|----------------------------------------------------------
Microsoft        | Production of the Virtual PC product; a non-Linux hypervisor
Red Hat          | Release of the SPICE protocol; Qumranet
Amazon           | Provides Amazon's EC2 services; seamless integration
Virtual Bridges  | Virtual desktop infrastructure (VDI); VERDE whitepaper
Parallels        | OpenVZ project; Linux virtual private servers
Proxmox          | OpenVZ with the Kernel-based Virtual Machine (KVM)
Citrix           | Cloud vendor software; cloud offerings


S.5 Competition

The market for this networking technology develops through two distinct stages, ebbing and flowing, which turn on how operators implement the system and which approaches they use. A number of companies are in competition, which poses problems for many of them; the companies nevertheless embrace this change, since in the end they intend to assist the consumers.

Quartz reports that Amazon and Google are the two main rivals in cloud virtualization: worldwide, the two are emerging as the greatest cloud providers. In the recent past Google, as a competitive move, reduced the prices of its cloud products (a price war) and incorporated new features. This has enabled Google to win the competition with its rivals to a given extent.

S.6 Forecast of changes

Cloud virtualization is likely to undergo a number of changes in the near future. According to the Cisco Global Cloud Index (GCI), networking will record an increase in global data and IP traffic. The changes revolve around virtualization and cloud computing.

S.6.1 What will change

Global data center traffic: this will rise to 8.6 zettabytes (ZB) per year, which translates to 715 exabytes (EB) per month. In 2013 it was as low as 3.1 ZB per year (255 EB per month). Over the five years to 2018, this represents overall annual growth of about 23 percent.

Data center virtualization and cloud computing growth: by 2018, cloud data centers will process roughly 80 percent of total workloads; data center workloads will nearly double (1.9-fold) while cloud workloads nearly triple (2.9-fold); and workload density will rise from 5.2 to 7.5.

Public vs. private cloud: by 2018, the private cloud's share of cloud workloads will fall to 69 percent from 78 percent in 2013, making it one of the changes likely to record a decrease.
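The roughly 23 percent annual growth figure above can be checked directly: taking 2013 traffic as 255 EB per month (about 3.1 ZB per year) and 2018 traffic as 8.6 ZB per year, the compound annual growth rate comes out near 23 percent.

```python
def cagr(start, end, years):
    """Compound annual growth rate between a start and end value."""
    return (end / start) ** (1 / years) - 1

# 3.1 ZB/year in 2013 growing to 8.6 ZB/year by 2018 (5 years).
rate = cagr(3.1, 8.6, 5)
print(f"{rate:.1%}")  # 22.6%
```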

S.6.2 Emerging competition from other technologies

Among the technologies in stiff competition with cloud virtualization is Docker. Docker competes with Joyent's and Canonical's containers; it has performed successfully on Joyent's platform services and on Linux, and it generally exceeds them in presentation and performance.

S.6.3 Will other technologies merge it out?

Containers are among the stiff competitors of cloud virtualization and are likely to merge with it in the near future. Container technology operates online in a manner complementary to virtualization, and hence the two are likely to converge.

S.6.4 New developments expected

According to analysts at Gartner, cloud-based security will grow beyond a negligible share in the next few years. The technology is likely to evolve quickly to keep pace with the competing companies.


[Garfinkel2011] Garfinkel, Simson. "The Cloud Imperative". Technology Review. 2011-10-03.

[Mell2011] Mell, Peter and Timothy Grance. "The NIST Definition of Cloud Computing". National Institute of Standards and Technology (NIST). September 2011. NIST SP 800-145.

[Perez-Botero2013] Perez-Botero, Diego, Jakub Szefer, and Ruby B. Lee. "Characterizing Hypervisor Vulnerabilities in Cloud Computing Servers". CloudComputing '13, Hangzhou, China. 2013-05-08.

[ProjectAtomic] Project Atomic. "Docker and SELinux".

[RedHat] Red Hat. "Secure Containers with SELinux".

[VanVleck] Van Vleck, Tom. "History of Project MAC".

[Walsh2014a] Walsh, Daniel J. "Are Docker containers really secure?" 2014-07-22.

[Walsh2014b] Walsh, Daniel J. "Bringing new security features to Docker". 2014-09-03.

[Walsh2014c] Walsh, Daniel J. "Docker's New Security Advisories and Untrusted Images". 2014-11-25.

[Xin2015] Xin, Reynold. "World Record set for 100 TB sort…" 2015-01-15.

