Interested in computer networking and cloud computing? Whether you already work in the industry or are planning to dive in, it can be helpful to know a bit about what cloud computing is, its history, and which companies are big in the sector. It’s also worth learning about the differences between location-independent systems and physical networks. Learn more with this computer technician networking specialist’s guide to cloud computing.
What Is Cloud Computing?
From the earliest days of modern computing, one of the biggest questions that users faced was how to make resources available remotely to others and themselves. Processing power, storage, and bandwidth have always been and remain at a premium.
One way to address this demand is to provide highly scalable resources. The class of solutions that emerged to meet demand have become collectively known as cloud computing.
Cloud-based resources are generally broken up into layers. From top to bottom, they are:
- Software as a Service (SaaS)
- Platform as a Service (PaaS)
- Infrastructure as a Service (IaaS)
SaaS covers top-level software that’s employed by end users. For example, if you use Microsoft’s Office 365 or Adobe’s Creative Cloud, that’s SaaS in action. The customer is the licensed user who pays a fee, usually a monthly subscription price, to have access to the software.
The PaaS business model involves selling the underlying platform. A company that hosts a MySQL database on a managed service such as Amazon RDS (Relational Database Service) is using PaaS; renting a bare Amazon EC2 (Elastic Compute Cloud) instance and installing everything itself would fall under IaaS instead.
The service being sold to the customer is preconfigured software that might include tools like the Linux operating system and Apache web server. Compute nodes for machine learning fit this model, too.
Generally, something similar to a virtual server is configured to provide a set of resources that will include a certain number of computer cores, an amount of RAM, and appropriate storage.
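As a rough sketch of what such a request can look like in practice, here is a minimal example using AWS’s boto3 SDK to ask for a small virtual server. The AMI ID, instance type, and volume size below are placeholder values chosen for illustration, not recommendations.

```python
# Minimal sketch: requesting a small virtual server from a cloud provider.
# Uses AWS's boto3 SDK for illustration; the AMI ID, instance type, and
# volume size are placeholder values, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image (e.g., a Linux AMI)
    InstanceType="t3.small",          # 2 virtual cores and 2 GiB of RAM in this family
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sda1",
        "Ebs": {"VolumeSize": 20},    # 20 GiB of block storage
    }],
)

print(response["Instances"][0]["InstanceId"])
```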
An IaaS business model is built on providing relatively low-level access to cloud computing resources. As funny as it might seem for competitors in the streaming market to do so, Netflix pays Amazon for resources that provide scalability for its movie and TV services.
Guide to How Cloud Computing Works
IaaS represents the layer between physical hardware and virtualized resources. A service provider might have 2,000 blade servers operating in a bank. These resources are then collected into a pool that can be distributed to customers.
Suppose each of the blade servers has two 48-core CPUs installed and each CPU core supports two threads. One machine could then provide 192 virtual cores, and a bank of 2,000 such blades could provide 384,000 virtual cores in total.
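That back-of-the-envelope math is easy to check in a few lines of Python; the counts below are the hypothetical numbers from the example, not real hardware specs.

```python
# Back-of-the-envelope pool size for the hypothetical server bank described above.
cpus_per_blade = 2       # two CPUs per blade server
cores_per_cpu = 48       # 48 physical cores per CPU
threads_per_core = 2     # two hardware threads per core
blades = 2_000           # blade servers in the bank

virtual_cores_per_blade = cpus_per_blade * cores_per_cpu * threads_per_core
total_virtual_cores = virtual_cores_per_blade * blades

print(virtual_cores_per_blade)     # 192
print(f"{total_virtual_cores:,}")  # 384,000
```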
Those cores can then be sold as resources to customers in a variety of ways. A setup like Amazon’s Lightsail web hosting system can sell a single core as a resource, or customers can purchase more cores. Notably, this tends to be cheaper than purchasing a virtual server configured on a physical machine from a traditional hosting company like GoDaddy.
Amazon can also provide whatever resources the customer requires, so a low-end customer can buy a cloud server for $3.50 a month while an international corporation purchases millions of dollars of resources on the same system.
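As a toy sketch of that flexibility (the pool size and customer requests are made-up numbers, not actual Amazon pricing tiers), the same shared pool of virtual cores can be carved into a one-core slice for a hobbyist and a fifty-thousand-core slice for a corporation:

```python
# Toy illustration: carving differently sized slices out of one shared core pool.
# The pool size and customer requests are made-up numbers, not real pricing tiers.

class CorePool:
    def __init__(self, total_cores: int) -> None:
        self.free_cores = total_cores

    def allocate(self, cores: int) -> bool:
        """Reserve cores for a customer if enough capacity remains."""
        if cores <= self.free_cores:
            self.free_cores -= cores
            return True
        return False

pool = CorePool(total_cores=384_000)

pool.allocate(1)       # a $3.50-a-month hobbyist instance
pool.allocate(4)       # a small business web server
pool.allocate(50_000)  # a large corporate customer

print(f"{pool.free_cores:,} virtual cores still free")  # 333,995
```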
Getting the different machines and layers involved in this process to play nicely with each other calls for extensive networking. At Google, which operates some of the largest data centers on the planet, there is networked infrastructure whose physical job is to support other networked infrastructure. Robots are employed to speed up maintenance of the server banks in the company’s cloud computing infrastructure, and those robots are networked in their own right.
It’s safe to say that the cloud computing sector employs lots of people with backgrounds in networking. Also, the growing demand will require even more people to get everything networked.
History of Cloud Computing Guide
Sharing of computing resources can be traced back to the 1950s. At the time, computing power was incredibly limited in scale and availability worldwide. Only a handful of companies like IBM even knew how to provide the necessary services, and only another handful of clients, such as the U.S. military and large universities, could afford to pay for them.
Mainframe computing was the dominant model at the time, and real cloud computing would have to wait for major enabling technology to come along.
What fired up a revolution in resource sharing was the Advanced Research Projects Agency Network, better known as ARPANET. This system is widely recognized as the forerunner of the modern internet. Deployed in 1969, it provided a level of networking that allowed people on one side of a continent to access resources on the other side.
As more users became better networked during the 1970s and 1980s, companies with massive mainframe computing infrastructures started to explore ways to defray their costs.
One way to accomplish this was to sell computing time on their machines. If people wanted to run a simulation, for example, they could just pay for the remote computing time to do it rather than hoping to get access to a physical system. They saved the cost of getting a prohibitively expensive machine, and they also got access to better resources in exchange for a small fee.
Virtualization of resources became common in the 1970s, making it possible to offer a small slice of the resources rather than the whole enchilada. Offerings of virtual private networks to consumers appeared in the 1990s. By the end of the first decade of the twenty-first century, large-scale video streaming services appeared that required massive computational and networking infrastructures.
Eventually, the question of how to provide more and more power to more and more users emerged.
Cluster computing took off in the late 1990s. This entailed linking a series of machines together into a single computing resource. From there, it was a relatively simple step to virtualize that resource into chunks that could be scaled up or down to meet the needs of many consumers.
With booms in machine learning and media streaming, the 2010s saw the first markets large enough to justify a massive corporation, such as Google or Amazon, offering IaaS to other multinational companies.
Who Are the Major Players?
If you haven’t guessed by now, two of the biggest players in cloud computing are Google and Amazon. A major difference between these two companies is that much of the Google Cloud is still provided to internal customers, meaning Google’s sister companies within the Alphabet holding company are still among its primary users.
In Amazon’s case, its customers are pretty much everybody. From small businesses setting up web hosting to pro sports leagues providing live Statcast data during games, tons of customers rely on AWS services such as EC2 and Amazon’s other cloud computing systems.
If you look to China, you’ll see a logical competitor to Amazon in Alibaba. Just as Amazon grew its original business with an online store before branching out into cloud computing, Alibaba did the same.
Microsoft is also a major player. Its platform, Azure, gives customers access to operating systems, development tools, databases, and frameworks. Microsoft also sells a lot of SaaS products, with Office 365 and Power BI being the centerpieces.
Machine learning is highly dependent upon graphics cards, and that means entrants using GPUs instead of CPUs have appeared. Nvidia, the dominant supplier of GPUs for machine learning, sells GPU-based systems that offer the same sort of flexibility for GPU compute code that traditional solutions offer for CPU-based code. This means going a layer lower than IaaS by offering the compute cycle itself as a service that a remote user can employ to run massively parallel calculations.
VMware and IBM are holdovers from the early days of virtualization. Both have modernized rapidly to deliver cloud computing offerings for storage and computation.
IBM also provides business intelligence tools and computational modeling software. It has shown off its capabilities with the widely recognizable Watson AI system, which made a notable public appearance on the TV quiz show Jeopardy!
Other holdovers from the golden age of web hosting have sought to catch up with the times. Rackspace is an exemplar of companies reborn in the age of cloud computing. Red Hat, a business heavily grounded in the early days of enterprise Linux, has also found a second life in the cloud computing space.
Guide to Location-Independent vs. Physical Networks
Everyone in the computer and networking worlds has an opinion about the cloud. Most of the arguments boil down to the pros and cons of having a physical machine sitting right where you can find it on rack 10 in row 15 versus deploying a virtual machine in a cloud space.
From a networking perspective, the main difference is architectural. Cloud computing pools the resources of many machines into what behaves like one large machine, which can then be sliced up into instances. A network specialist must worry mostly about how many machines are pooled together into one cloud computing cluster.
Security is also a big deal, especially because large computing resources have become desirable targets for hackers in the age of cryptocurrency mining. Availability can be a concern, too, as anyone who has watched half the internet go down at once during an Amazon Web Services outage knows.
The pro argument for physical network assets is mostly about control. This is easier to understand when thinking about Software as a Service. While Adobe prefers that you use Photoshop as a monthly subscriber to Creative Cloud, some people will always prefer to have a single-machine license they pay for once and put on their own computer.
Even when the benefits of cloud computing are abundantly clear, such as the Platform as a Service model with virtual web hosting, it’s hard for people to give up on the old ways.
Cloud computing is well established, and the demand for network specialists to support it is robust. Corporations previously grounded in the old model are moving to the cloud, with Microsoft, for example, shifting much of its development focus from Windows toward Azure.
To succeed in the field, network technicians will increasingly need to learn how to wire up, deploy, and maintain large banks of machines for the cloud.
Did reading this guide to cloud computing interest you? Ready to start learning more about how to become a computer technician networking specialist? The Computer Technician Networking Specialist program at Hunter Business School is designed to prepare computer networking students for entry-level positions in the fields of electronics, computer technology, and networking. Students build their own computers and use them in the learning process.
Contact us today to find out more about how to become a computer technician networking specialist on Long Island.