
AWS Infrastructure

Last Updated : 22 Jul, 2023

Amazon Web Services provides the most extensive global footprint of any cloud provider in the market, and it opens new regions faster than others. AWS maintains numerous geographic regions around the globe, across North America, South America, Europe, Asia Pacific, and the Middle East, and serves over a million active customers in more than 190 countries.

AWS can support this massive workload thanks to its Global Cloud Infrastructure, which consists of Regions, Availability Zones, and edge networks. The AWS Global Cloud Infrastructure is the most secure, extensive, and reliable cloud platform in the industry today, offering a wide range of cloud services.

AWS is a top choice of small and medium enterprises for deploying their application workloads across the globe and for distributing content closer to their end users with low latency. It provides a highly available and fault-tolerant cloud infrastructure where and when you need it. AWS owns and operates thousands of servers and networking devices running in data centers scattered around the globe.

Components of AWS Infrastructure

Key Terminologies

1. Data Center

A data center is a physical facility that houses hundreds of computer systems, network devices, and storage appliances. We can run our applications in two or more data centers to achieve high availability, so if there is an outage in one data center, we still have servers running in another. A data center can also deliver cached content to global end users to improve response times. At its core, the AWS Global Infrastructure utilizes multiple data centers and groups them into Availability Zones, Regions, and Edge Locations.

2. Availability Zone (AZ)

Each AZ is a logical group of one or more physically separate data centers with redundant power, networking, and connectivity. The data centers in an AZ can be in separate buildings or locations, and each is built with redundancies: inside a data center there are thousands of physical servers, racks, storage devices, and firewalls, typically backed by redundant power and networking. The AZs within a region are connected by high-throughput, low-latency networking, and all traffic between them is encrypted.

The main reason for having multiple redundant data centers in the AZs of a region is high availability. Many AWS services have built-in high availability, with resources replicated across multiple AZs in a region. For example, Amazon S3 stores data in at least three AZs, so data is protected even if one AZ goes down. AWS also gives customers the option to deploy applications across multiple AZs to ensure business continuity in events like power outages, fires, or floods.
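The availability benefit of spreading a deployment across AZs can be sketched in a few lines. This is a hypothetical simulation, not an AWS API call: the AZ names follow AWS's "region plus letter" convention, but the instances and the outage are made up for illustration.

```python
# Hypothetical sketch: why spreading instances across AZs keeps an
# application available when a single AZ fails. The outage is simulated;
# no AWS services are involved.

deployment = {
    "us-east-1a": ["web-1", "web-2"],
    "us-east-1b": ["web-3", "web-4"],
    "us-east-1c": ["web-5", "web-6"],
}

def healthy_instances(deployment, failed_az):
    """Return the instances still serving traffic after one AZ goes down."""
    return [
        instance
        for az, instances in deployment.items()
        if az != failed_az
        for instance in instances
    ]

# Simulate a complete outage of us-east-1b: two thirds of capacity survives.
survivors = healthy_instances(deployment, failed_az="us-east-1b")
print(survivors)
```

Had all six instances lived in one AZ, the same outage would have taken the whole application down.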

3. Point-of-Presence or PoP

The other component of the AWS Global Cloud Infrastructure is the edge network of Points of Presence (PoPs). It consists of Edge Locations and Regional Edge Caches, which enable us to distribute content with low latency to our global users.

Basically, a PoP serves as an access point that allows two different networks to communicate with each other. By using these global edge networks, a user request doesn't need to travel all the way back to your origin just to fetch data. Cached content can quickly be retrieved from regional edge caches that are closer to your end users. This is also referred to as a Content Delivery Network (CDN). For example, suppose we have high-resolution images stored on a server in California. We can cache these media files at an edge location in the Philippines, India, or Singapore so that our customers in Asia can retrieve them faster. The images load quickly because they are fetched from an edge server near our users instead of from the origin server in California.
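The cache-then-serve pattern described above can be sketched with a plain dictionary standing in for an edge cache. This is an illustrative model, not a CloudFront API; the file name and contents are hypothetical.

```python
# Hypothetical sketch of the CDN pattern: a request is served from a
# nearby edge cache when possible, and falls back to the distant origin
# only on a cache miss. All names and data are illustrative.

ORIGIN = {"photo.jpg": b"high-resolution image bytes"}  # server in California

edge_cache = {}  # edge location in Singapore, initially empty

def fetch(path):
    if path in edge_cache:
        return edge_cache[path], "edge hit"
    content = ORIGIN[path]       # long round trip back to the origin
    edge_cache[path] = content   # cache for subsequent nearby users
    return content, "origin fetch (now cached)"

_, first = fetch("photo.jpg")   # first user pays the origin round trip
_, second = fetch("photo.jpg")  # later users are served from the edge
print(first, "->", second)
```

Only the first request for a given object travels to the origin; every later nearby request is an edge hit.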

4. Region

When we use the AWS console, CLI, or SDK to manage AWS resources, the first thing we need to do is choose a region. The resources we create in one region are only visible in that region. There are a few considerations when choosing a region. First, we may want to choose a region close to our users for the lowest latency. For example, if the majority of our users are in the US, we may want to choose a US region.

The second consideration is compliance and regulatory requirements: certain laws mandate that certain data must be stored in particular countries. For example, if our organization handles highly sensitive data for the US government, we should consider AWS GovCloud. Third, some resources or services are only available in certain regions. New services usually launch in the US East (N. Virginia) region first and can take a long time to roll out to other regions; for example, Alexa for Business is only available in that region at the moment. This doesn't mean end users elsewhere can't access an application built with the service; it just means we can only create and manage the service in that region. Finally, each region may have different prices for AWS services. For example, EC2 instances or data in S3 buckets may be charged a different price in Singapore than in the US, and keep in mind that AWS charges for data transfer between regions.
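The region-selection trade-offs above can be sketched as a small filter-then-rank step: first keep only regions that satisfy a data-residency constraint, then pick the lowest-latency candidate. The region names are real AWS identifiers, but the latency numbers and the selection function are hypothetical.

```python
# Hypothetical sketch of region selection: filter by a data-residency
# constraint, then prefer the region closest to the user base.
# Latency figures are made-up illustrative numbers.

regions = {
    "us-east-1":      {"country": "US", "latency_ms_from_us": 20},
    "eu-central-1":   {"country": "DE", "latency_ms_from_us": 110},
    "ap-southeast-1": {"country": "SG", "latency_ms_from_us": 210},
}

def choose_region(regions, required_country=None):
    candidates = {
        name: meta for name, meta in regions.items()
        if required_country is None or meta["country"] == required_country
    }
    # Among compliant regions, prefer the lowest latency for our users.
    return min(candidates, key=lambda name: candidates[name]["latency_ms_from_us"])

print(choose_region(regions))                         # lowest latency wins
print(choose_region(regions, required_country="DE"))  # compliance narrows the choice
```

With no constraint the nearest region wins; a residency requirement can override latency entirely.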


5. Edge Locations

Edge locations are part of the AWS content delivery network for low-latency, high-throughput content delivery. They are located all over the world, close to users, and leverage Amazon's ultra-fast global network backbone to deliver data and cache it near those users. Services that use edge locations include Amazon CloudFront and Lambda@Edge: CloudFront is the AWS global CDN for caching static or dynamic content, while Lambda@Edge runs code on low-latency compute resources at the edge. We pay as we go with no minimum upfront cost. Data transfer from AWS origins such as Amazon S3, EC2, and Elastic Load Balancing to the edge locations is free; we only pay for data transferred out of the edge location.

For example, suppose we run a photo-sharing website that stores images in an S3 bucket in the US, and most of our end users are in Singapore. The first time a user downloads an image, it is delivered from the S3 bucket in the US through the CloudFront network and cached at the edge location in Singapore. Later, other users in Singapore download it from the edge location instead. Without edge locations, this content would always have to travel from the origin to the end user. Since we don't pay for data transfer between S3 and the edge location, this is also much cheaper than sending data from S3 directly to our users.
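The pricing point can be made concrete with back-of-the-envelope arithmetic. The per-GB rates below are made-up placeholders (real CloudFront and S3 prices vary by region and usage tier); the only fact carried over from the text is that origin-to-edge transfer is free and we pay only for data leaving the edge.

```python
# Hypothetical cost comparison using made-up per-GB rates.
# Real AWS prices differ by region and tier; the structure of the
# calculation, not the numbers, is the point.

GB_SERVED = 1000                 # total data downloaded by users
S3_DIRECT_OUT_PER_GB = 0.09      # hypothetical S3 -> internet rate
EDGE_OUT_PER_GB = 0.085          # hypothetical edge -> internet rate
ORIGIN_TO_EDGE_PER_GB = 0.0      # S3 -> CloudFront edge transfer is free

direct_cost = GB_SERVED * S3_DIRECT_OUT_PER_GB
cdn_cost = GB_SERVED * (ORIGIN_TO_EDGE_PER_GB + EDGE_OUT_PER_GB)
print(f"direct from S3: ${direct_cost:.2f}, via edge: ${cdn_cost:.2f}")
```

Because the origin-to-edge leg costs nothing, serving through the edge is cheaper whenever the edge's outbound rate undercuts the origin's, on top of the latency win.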


