AWS Certified Solutions Architect Associate SAA-C03 | Glossary

Another certification I’m working on: the AWS Certified Solutions Architect – Associate. You can find more about it here: https://aws.amazon.com/certification/certified-solutions-architect-associate/

Here are some topics and terms (I’m calling this a Glossary for simplicity) I’d like to write down so we can build a foundational understanding of what an AWS Solutions Architect should know.

Glossary

What is Amazon EC2?

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides scalable computing capacity in the Amazon Web Services (AWS) cloud. It enables users to launch and manage virtual machines, called instances, with a variety of operating systems and configurations, allowing for flexible and scalable deployment of applications. EC2 is designed to make web-scale cloud computing easier for developers.

EC2 instances can be easily resized, duplicated, or terminated as needed, allowing you to quickly and cost-effectively manage your computing resources. With EC2, you can also take advantage of features such as storage volumes, network interfaces, and security groups to further customize and secure your instances.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html

What is a General Purpose Amazon EC2 instance?

A General Purpose Amazon EC2 instance is a type of virtual machine in the Amazon EC2 cloud computing service. It provides a balanced amount of CPU, memory, and network resources and is suitable for a wide range of applications. The “m5” and “t3” families are examples of General Purpose instances. These instances can be used for a variety of workloads, including web and application servers, small and medium databases, development and testing environments, and many other use cases. General Purpose instances can be easily scaled to meet changing demands, making them a popular choice for many organizations.

https://aws.amazon.com/ec2/instance-types/

What is an EC2 Auto Scaling Group?

An EC2 Auto Scaling Group is a component of the Amazon EC2 Auto Scaling service in Amazon Web Services (AWS). It is used to automatically manage and scale a group of Amazon EC2 instances. An Auto Scaling Group is responsible for ensuring that the desired number of instances is running and available to handle incoming traffic.

With EC2 Auto Scaling, you can set up scaling policies based on criteria such as changes in network traffic or CPU utilization. When demand for your application increases, the Auto Scaling Group will automatically launch new instances to handle the increased load. Similarly, when demand decreases, the Auto Scaling Group will terminate excess instances to save on costs.

This allows you to maintain a consistent and predictable level of performance for your application while maximizing resource utilization and minimizing costs. EC2 Auto Scaling provides a flexible and cost-effective way to manage and scale your Amazon EC2 instances.
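
A scaling policy of the kind described above can be sketched as a toy decision function (an illustration only, not the actual EC2 Auto Scaling algorithm; the thresholds and one-instance steps are assumed):

```python
def desired_capacity(current, cpu_percent,
                     scale_out_at=70.0, scale_in_at=30.0,
                     min_size=1, max_size=10):
    """Toy step-scaling decision: add an instance above the high
    threshold, remove one below the low threshold, and always stay
    within the group's min/max bounds."""
    if cpu_percent > scale_out_at:
        return min(current + 1, max_size)
    if cpu_percent < scale_in_at:
        return max(current - 1, min_size)
    return current
```

A real Auto Scaling Group evaluates CloudWatch metrics against your scaling policies in much the same spirit, but with richer policy types (target tracking, step scaling, and simple scaling).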

https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html

What is Amazon EC2 Auto Scaling service?

Amazon EC2 Auto Scaling is a service in Amazon Web Services (AWS) that enables automatic scaling for your Amazon Elastic Compute Cloud (EC2) resources. The service automatically adjusts the number of EC2 instances in a group (referred to as an “Auto Scaling Group”) based on user-defined policies and criteria, such as changes in demand for your application.

With EC2 Auto Scaling, you can ensure that your application has the right number of EC2 instances available to handle the incoming traffic. When demand increases, EC2 Auto Scaling launches additional EC2 instances to handle the load. When demand decreases, EC2 Auto Scaling terminates excess instances to save on costs.

This service provides a simple and effective way to maintain a predictable and optimal level of performance for your application while maximizing resource utilization and minimizing costs. EC2 Auto Scaling integrates with other AWS services, such as Elastic Load Balancing (ELB) and Amazon CloudWatch, to provide a complete and flexible solution for scaling your applications in the cloud.

https://aws.amazon.com/ec2/autoscaling/

What are Amazon EC2 Auto Scaling cooldown periods?

An Amazon EC2 Auto Scaling cooldown period is a setting in the Amazon EC2 Auto Scaling service that defines the time interval after a scaling activity completes before the next scaling activity can start. The cooldown period helps ensure that your application has sufficient time to stabilize after a scaling event before another scaling event occurs.

During the cooldown period, EC2 Auto Scaling will not launch or terminate any additional instances, even if the conditions specified in your scaling policies have been met. This helps to prevent rapid and frequent scaling events, which can have a negative impact on the performance and stability of your application.

You can set the cooldown period for an Auto Scaling Group when you create or update the group. The cooldown period is specified in seconds, and you can choose a value that is appropriate for your application’s requirements. In general, longer cooldown periods can help to reduce the frequency of scaling events, while shorter cooldown periods can allow for more frequent scaling to respond to rapidly changing demand.
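
The cooldown behavior can be sketched as a small gate that rejects scaling activities until enough time has passed since the last one (a simplified model; 300 seconds matches the default cooldown):

```python
class CooldownGate:
    """Allow a scaling activity only if the cooldown has elapsed
    since the previous successful activity."""

    def __init__(self, cooldown_seconds=300):
        self.cooldown = cooldown_seconds
        self.last_activity = None  # timestamp of the last scaling event

    def try_scale(self, now):
        """Return True (and record the event) if scaling is allowed now."""
        if self.last_activity is not None and now - self.last_activity < self.cooldown:
            return False  # still cooling down
        self.last_activity = now
        return True
```

In this model, a scaling trigger that fires during the cooldown window is simply ignored, which is exactly how rapid back-to-back scaling events are suppressed.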

https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scaling-cooldowns.html

What is a Storage Optimized Amazon EC2 instance?

A Storage Optimized Amazon EC2 instance is a type of virtual machine in the Amazon EC2 cloud computing service that provides high disk throughput and I/O performance. These instances are designed to handle intensive workloads that require frequent access to large amounts of data, such as NoSQL databases, distributed file systems, and data warehousing applications.

Storage Optimized instances provide a high ratio of storage to vCPUs and memory, and offer high-performance local storage options such as NVMe-based SSDs. They also offer low latency and high throughput, making them well-suited for demanding storage workloads.

Examples of Storage Optimized instances include the “i3” and “d2” families, as well as the newer Graviton2-based “im4gn” and “is4gen” families. These instances allow you to cost-effectively store and process large amounts of data in the cloud, providing a flexible and scalable solution for your storage needs.

https://aws.amazon.com/blogs/aws/new-storage-optimized-amazon-ec2-instances-im4gn-and-is4gen-powered-by-aws-graviton2-processors/

What is a Linux-based Amazon EC2 instance on AWS Cloud?

A Linux-based Amazon EC2 instance on AWS Cloud refers to a virtual machine running a Linux operating system in the Amazon Elastic Compute Cloud (EC2) service in Amazon Web Services (AWS). EC2 provides a flexible and scalable way to launch and manage virtual servers in the cloud, and Linux-based instances are a popular choice for many organizations due to the wide range of available open-source software and tools.

With a Linux-based EC2 instance, you can launch a virtual server with your preferred Linux distribution, such as Ubuntu, Amazon Linux, Red Hat Enterprise Linux, or CentOS. You can then install and run any applications and services you need, just as you would on a physical server.

What is Amazon EC2 instance metadata?

Amazon EC2 instance metadata is information about an Amazon Elastic Compute Cloud (EC2) instance that is available from within the instance itself. EC2 instance metadata provides information about the instance, such as its ID, its public hostname, and its Amazon Machine Image (AMI) ID.

Instance metadata can be retrieved by making an HTTP request to a special endpoint, http://169.254.169.254/latest/meta-data/. The information returned by this endpoint is specific to the instance and can be used to configure the instance or to retrieve information needed by scripts and applications running on the instance.

For example, an instance can use the instance metadata to retrieve its Amazon EC2 instance ID and use it as part of a unique identifier. An instance can also use the metadata to retrieve its public hostname, which can be useful for configuring the instance’s network settings.

Instance metadata is available to the instance during its entire lifecycle, so it can be used to configure the instance at launch time, or to retrieve information about the instance at runtime. EC2 instance metadata provides a convenient and flexible way for instances to access information about themselves, making it easier to automate instance configuration and management.
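
Instance metadata can be read over HTTP as described above; on current instances the metadata service enforces IMDSv2, which requires fetching a short-lived session token first. A minimal Python sketch (the injectable `fetch` parameter is my own seam for testing, not part of any AWS API; the HTTP calls only succeed from inside an EC2 instance):

```python
import urllib.request

IMDS_BASE = "http://169.254.169.254/latest"

def _http(url, headers, method="GET"):
    """Tiny urllib helper; only reachable from inside an EC2 instance."""
    req = urllib.request.Request(url, headers=headers, method=method)
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def get_metadata(path, fetch=_http):
    """Fetch one metadata value (e.g. 'instance-id') via the IMDSv2 flow."""
    # Step 1: obtain a short-lived session token with a PUT request.
    token = fetch(IMDS_BASE + "/api/token",
                  {"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
                  method="PUT")
    # Step 2: read the metadata item, presenting the token.
    return fetch(IMDS_BASE + "/meta-data/" + path,
                 {"X-aws-ec2-metadata-token": token})
```

For example, `get_metadata("instance-id")` returns the instance’s own ID when run on the instance itself.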

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

Which kind of solution is Amazon EC2? IaaS, PaaS, SaaS, FaaS?

Amazon EC2 is a solution in the Infrastructure as a Service (IaaS) category.

IaaS is a cloud computing service model that provides virtualized computing resources, such as virtual machines, storage, and network interfaces, over the internet. With IaaS, customers can rent computing resources on-demand, without having to invest in and maintain physical hardware.

In contrast, Platform as a Service (PaaS) provides a platform for customers to develop, run, and manage their applications, without having to worry about the underlying infrastructure. Software as a Service (SaaS) is a delivery model in which software applications are provided over the internet, on a subscription basis. Function as a Service (FaaS) is a cloud computing model in which functions are executed in response to specific events, without the need to manage the underlying infrastructure.

https://aws.amazon.com/types-of-cloud-computing/

What is Amazon Elastic Block Store (EBS)?

Amazon Elastic Block Store (Amazon EBS) is a block-level storage service provided by Amazon Web Services (AWS) for use with Amazon Elastic Compute Cloud (Amazon EC2) instances. It provides persistent storage for Amazon EC2 instances and can be used as the primary storage for a database, file system, or backup storage.

Amazon EBS provides different volume types to meet different storage performance and cost requirements. You can choose from magnetic volumes for infrequent access, general-purpose SSDs for a balance of cost and performance, and provisioned IOPS SSDs for demanding workloads that require high I/O performance.

Amazon EBS volumes are highly available and durable, and are automatically replicated within an availability zone to protect against data loss. Amazon EBS also provides snapshots, which are point-in-time backups of Amazon EBS volumes that can be used to create new Amazon EBS volumes or to protect data.

Here is a link to the Amazon EBS official documentation: https://aws.amazon.com/ebs/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html

What is Amazon S3 Bucket?

Amazon S3 (Simple Storage Service) is an object storage service provided by Amazon Web Services (AWS). An S3 bucket is a container for storing objects in S3.

An S3 bucket can be thought of as a folder that can hold any number of files and other folders. Objects stored in an S3 bucket can be up to 5 terabytes in size and can be any type of data, such as text files, images, videos, and more.

S3 provides a scalable and durable storage solution, and it offers a range of features for data management, such as versioning, lifecycle policies, and access controls. S3 is also designed for high availability and data durability, making it an ideal storage solution for critical data and applications.

S3 can be used for a variety of purposes, including storing backups, storing large unstructured data sets, and serving files to users over the internet. S3 is also commonly used as a back-end for other AWS services, such as Amazon EC2 and Amazon DynamoDB.

https://aws.amazon.com/s3/

What if I want to store more than 5 TB of data in an S3 bucket?

An S3 bucket itself has no size limit; the 5 TB limit applies to a single object. If you need to store a data set larger than that, you have a few options:

  1. Multipart upload: You can use the Multipart Upload API to upload the parts of a large object in parallel, which lets you upload objects up to the 5 TB per-object limit more efficiently; it does not raise that limit.
  2. Multiple objects: You can store the data across multiple S3 objects by dividing the data into smaller parts. You can use the S3 object naming convention and folder structure to organize these objects and make it easier to work with the data.
  3. Amazon S3 Glacier: If you have data that is infrequently accessed, you can store it in Amazon S3 Glacier, which is a low-cost, long-term archive storage service offered by AWS. You can use S3 Lifecycle policies to automatically transition objects to S3 Glacier as they age, or you can move them manually.
  4. Amazon S3 Intelligent-Tiering: If you have data with unknown or changing access patterns, you can store it in S3 Intelligent-Tiering, an S3 storage class that automatically moves data to the most cost-effective access tier, without performance impact or operational overhead.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html

What is Amazon Multipart Upload for Amazon S3?

Amazon Multipart Upload is a feature of Amazon Simple Storage Service (S3) that enables you to upload large objects, such as videos and images, in parts. Instead of uploading a large object in a single step, you can divide it into parts and upload each part in parallel, which can help you to upload very large objects more efficiently. The parts can be uploaded in any order and in parallel, and then S3 automatically reassembles them into a single object.

This approach has several benefits, such as allowing you to upload objects in parallel, which can increase the overall upload speed and reduce the impact of network latency. Additionally, if the upload is interrupted, you can resume the upload from where it left off, rather than having to start the upload from the beginning.

You can initiate a multipart upload using the AWS Management Console, the AWS CLI, or the S3 API. Once the upload is complete, S3 automatically reassembles the parts and creates a single object from them.
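
The limits behind multipart upload can be reasoned about numerically: each part must be between 5 MiB and 5 GiB (except the last, which may be smaller), an upload may have at most 10,000 parts, and that is what caps a single object at 5 TB. A small sketch that picks a workable part size for a given object (the 64 MiB starting size is an arbitrary assumption):

```python
MiB = 1024 ** 2
GiB = 1024 ** 3
TiB = 1024 ** 4

MAX_PARTS = 10_000      # S3 limit on parts per multipart upload
MIN_PART = 5 * MiB      # minimum part size (all parts except the last)
MAX_PART = 5 * GiB      # maximum part size
MAX_OBJECT = 5 * TiB    # maximum size of a single S3 object

def plan_parts(object_size, part_size=64 * MiB):
    """Return (part_count, part_size), doubling the part size until
    the object fits within the 10,000-part limit."""
    if object_size > MAX_OBJECT:
        raise ValueError("a single S3 object cannot exceed 5 TiB")
    while -(-object_size // part_size) > MAX_PARTS:  # ceiling division
        part_size *= 2
    if not MIN_PART <= part_size <= MAX_PART:
        raise ValueError("part size outside S3's allowed range")
    return -(-object_size // part_size), part_size
```

In practice the SDKs do this for you; boto3’s `upload_file`, for instance, switches to multipart automatically above a configurable threshold.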

Here is the official AWS documentation on Amazon S3 Multipart Upload: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html

What is Amazon S3 Glacier Instant Retrieval?

Amazon S3 Glacier Instant Retrieval is an archive storage class in Amazon S3, designed for long-lived data that is rarely accessed but still needs to be available in milliseconds when requested. Unlike S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive, which take minutes to hours to restore data, Instant Retrieval offers the same millisecond access as the S3 Standard storage class, at a lower storage price and a higher per-GB retrieval charge.

Instant Retrieval is useful for data that is accessed only once a quarter or so but must be served immediately when it is, such as medical images, news media assets, or user-generated content archives.

To use it, you store objects in the storage class directly or transition them there with an S3 Lifecycle rule; objects are then read with an ordinary GET request, with no separate restore step. Retrievals incur a per-GB charge, but for rarely accessed data the class can still be significantly cheaper than keeping the data in S3 Standard.

Here is the official AWS documentation on Amazon S3 Glacier Instant Retrieval: https://aws.amazon.com/glacier/instant-retrieval/

What is Amazon S3 Lifecycle policy?

Amazon S3 Lifecycle policy is a feature of Amazon Simple Storage Service (S3) that automatically transitions objects to different storage classes or archives them to Amazon S3 Glacier or Amazon S3 Glacier Deep Archive as they age, based on a set of rules that you define. The lifecycle policy can help you reduce storage costs by automatically moving objects to less expensive storage options as they age, or by automatically deleting them when they are no longer needed.

You define a lifecycle configuration at the bucket level. It can include one or more rules, and each rule applies either to the whole bucket or to a filtered subset of objects (for example, by key prefix or object tags), defining a transition or expiration action for objects that meet conditions such as their age or a specific date.

For example, you could define a rule to transition all objects in a bucket to the S3 Standard-Infrequent Access storage class after 30 days, and another rule to transition objects to the S3 One Zone-Infrequent Access storage class after 60 days. After 90 days, you could define a rule to archive objects to S3 Glacier, or after 180 days, you could define a rule to delete objects.
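
Rules like those can be expressed as a lifecycle configuration. Here is a sketch as a Python dict, in the shape boto3’s `put_bucket_lifecycle_configuration` expects for its `LifecycleConfiguration` argument (the rule ID, `logs/` prefix, and day counts are illustrative):

```python
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-down-then-expire",   # illustrative rule name
            "Filter": {"Prefix": "logs/"},   # apply only to keys under logs/
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 60, "StorageClass": "ONEZONE_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 180},     # delete objects after 180 days
        }
    ]
}
```

In boto3 you would pass this dict, together with a bucket name, to `s3.put_bucket_lifecycle_configuration`.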

Here is the official AWS documentation on Amazon S3 Lifecycle policy: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

What is the Elastic Load Balancing deregistration process in AWS Cloud?

The Elastic Load Balancing (ELB) deregistration process in the AWS Cloud refers to removing an Amazon Elastic Compute Cloud (EC2) instance from the backend server pool of a load balancer. Deregistration occurs when an instance is terminated, becomes unhealthy, or when you manually deregister it.

The deregistration process is an important aspect of ELB’s automatic instance management, as it helps ensure that traffic is only sent to healthy instances. When an instance is deregistered, ELB stops sending traffic to it and begins distributing incoming traffic to the remaining healthy instances.

In an Auto Scaling group, instances are automatically deregistered and terminated when they are terminated by Auto Scaling, which makes it easier to manage your instances and maintain the desired number of instances in your backend pool.

Here is the official AWS documentation on Elastic Load Balancer deregistration process: https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/deregister-register-instances.html

What is a Provisioned IOPS SSD?

Provisioned IOPS SSD is a storage volume type offered by Amazon Elastic Block Store (EBS) in Amazon Web Services (AWS) that provides high-performance, low-latency disk I/O (IOPS: input/output operations per second) for demanding I/O-intensive workloads, such as database applications, big data analytics, and enterprise applications.

With Provisioned IOPS SSD (io1 and io2) volumes, you specify the number of IOPS to provision for a volume, up to 64,000 IOPS per volume on Nitro-based instances (and higher still with io2 Block Express). This lets you achieve consistent, predictable performance for your I/O-intensive workloads, regardless of the actual I/O demand.

Provisioned IOPS SSD volumes are backed by solid-state drives (SSDs) and are designed to deliver fast, predictable, and consistent I/O performance, even under heavy load.

Here is the official AWS documentation on Provisioned IOPS SSD: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

What is a Virtual Private Gateway on AWS Cloud?

A Virtual Private Gateway (VPG) is the Amazon Web Services (AWS) component on the Amazon side of a Site-to-Site VPN connection: it attaches to your Amazon Virtual Private Cloud (VPC) and connects to your customer gateway, letting you establish a secure, private connection from your VPC to your on-premises network over an IPsec VPN tunnel.

A VPG provides secure communication between your VPC and your data center, enabling you to extend your network into the AWS Cloud and access your cloud-based resources as if they were part of your own data center. With a VPG, you can securely access your cloud resources, such as Amazon Elastic Compute Cloud (EC2) instances and Amazon Simple Storage Service (S3) buckets, from your on-premises network.

Here is the official AWS documentation on Virtual Private Gateway: https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html

What is Amazon Virtual Private Cloud (VPC)?

Amazon Virtual Private Cloud (Amazon VPC) is a secure and scalable virtual network in the AWS Cloud. It enables you to launch AWS resources into a virtual network that you’ve defined. With Amazon VPC, you can define a virtual network topology that closely resembles a traditional network that you’d operate in your own data center.

Amazon VPC provides advanced security features, such as security groups and network access control lists, to enable inbound and outbound filtering at the instance and subnet level. You can easily customize the network configuration for your Amazon VPC. For example, you can create a public-facing subnet for your web servers that has direct access to the Internet, and place your backend systems, such as databases or application servers, in a private-facing subnet with no Internet access.
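
The public and private subnet layout described above is ultimately CIDR arithmetic, which can be sketched with Python’s standard `ipaddress` module (the 10.0.0.0/16 VPC range and /24 subnet size are assumptions):

```python
import ipaddress

vpc_cidr = ipaddress.ip_network("10.0.0.0/16")   # the VPC's address range

# Carve the VPC into /24 subnets (256 addresses each) and take the
# first two as a public and a private subnet.
subnets = list(vpc_cidr.subnets(new_prefix=24))
public_subnet, private_subnet = subnets[0], subnets[1]
```

Note that AWS reserves the first four and the last IP address in every subnet, so a /24 subnet actually yields 251 usable addresses.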

Here is the official AWS documentation on Amazon Virtual Private Cloud (VPC): https://aws.amazon.com/vpc/

What is an AWS Account Trust Policy?

In AWS, a trust policy (sometimes called an account trust policy when it trusts another AWS account) is the JSON policy document attached to an IAM role that defines which principals (AWS accounts, AWS services, or users) are allowed to assume the role. It is a resource-based policy: instead of granting permissions to act on resources, it grants trusted principals the ability to perform the sts:AssumeRole action on the role.

For example, an EC2 instance role carries a trust policy that trusts the ec2.amazonaws.com service, and a cross-account role carries one that trusts another AWS account. The permissions a role grants once it has been assumed are defined separately, in the role’s permissions policies.

Here is the official AWS documentation on IAM roles and trust policies: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
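
A role’s trust policy is a small JSON document naming the principals allowed to assume the role. Here is a minimal sketch as a Python dict, trusting the EC2 service as an instance role would (serialized with `json.dumps`, this is the shape IAM expects for a role’s assume-role policy document):

```python
import json

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},  # who may assume the role
            "Action": "sts:AssumeRole",
        }
    ],
}

# Serialized form, as you would pass it to IAM when creating the role.
trust_policy_json = json.dumps(trust_policy)
```

In boto3, the serialized document is passed as the `AssumeRolePolicyDocument` argument of `iam.create_role`.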

What is Amazon Aurora Serverless?

Amazon Aurora Serverless is a variant of Amazon Aurora, a relational database service that is fully managed and highly available. With Amazon Aurora Serverless, you don’t need to manage any infrastructure, as the service automatically provisions and scales the underlying resources based on the application’s workloads.

Amazon Aurora Serverless provides a serverless relational database solution, which means that the service automatically starts, scales, and shuts down the database engine based on usage, so you only pay for what you use. You can use Aurora Serverless for applications that experience frequent spikes in traffic and require fast performance, as the service can automatically scale resources as needed.

Here is the official AWS documentation on Amazon Aurora Serverless: https://aws.amazon.com/rds/aurora/serverless/

What is Amazon CloudFront?

Amazon CloudFront is a content delivery network (CDN) service provided by Amazon Web Services (AWS). It is used to distribute static and dynamic web content, such as HTML pages, images, videos, and APIs, to end-users with low latency and high data transfer speeds.

CloudFront works by caching content at edge locations around the world, so that when a user requests content, it can be delivered from the nearest edge location, rather than from the origin server. This results in faster content delivery and improved user experience.

CloudFront integrates with other AWS services, such as Amazon S3 and Amazon EC2, as well as with custom origin servers, making it a flexible and scalable solution for distributing content.

Here is the official AWS documentation on Amazon CloudFront: https://aws.amazon.com/cloudfront/

What are edge locations on AWS?

Edge locations on AWS are data centers located at the edge of the network, closest to end users. They are used by Amazon CloudFront, a content delivery network (CDN) service, to cache and distribute static and dynamic web content, such as HTML pages, images, videos, and APIs.

When a user requests content that is being served by CloudFront, the service routes the request to the nearest edge location, rather than to the origin server. This results in faster content delivery and improved user experience.

Edge locations are strategically placed around the world, so that content can be delivered to users with low latency and high data transfer speeds, regardless of their location. The number of edge locations and their locations are subject to change as Amazon continues to expand its network.

Here is the official AWS documentation on Amazon CloudFront Edge Locations: https://aws.amazon.com/cloudfront/features/#Global_Content_Delivery

What is AWS WAF on CloudFront?

AWS WAF (Web Application Firewall) is a managed service that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. When integrated with Amazon CloudFront, AWS WAF can provide protection for your CloudFront distributions, which are used to deliver content from your origin servers to your end-users through a global network of edge locations.

AWS WAF enables you to create custom rules to block, allow, or count web requests based on conditions like IP addresses, HTTP headers, and content strings. This helps you to identify and block malicious requests and ensure that your applications are only serving legitimate traffic.

With AWS WAF and CloudFront, you can implement protection for your applications at the edge of the network, reducing the latency and network load on your origin servers.

Here is a link to the AWS WAF official documentation: https://aws.amazon.com/waf/

What is the overall concept of content delivery network (CDN)?

A content delivery network (CDN) is a distributed network of servers that are used to deliver content to end-users over the internet. The primary goal of a CDN is to improve the performance, reliability, and availability of content delivery by caching the content at various edge locations that are closer to the end-users.

CDNs are commonly used for delivering static web content, such as images, videos, and HTML pages, as well as for delivering dynamic content, such as APIs and real-time data. They work by copying the content from the origin server and caching it at multiple edge locations around the world. When a user requests content, the CDN routes the request to the nearest edge location, where the cached content is delivered. This reduces the latency and improves the data transfer speeds, resulting in faster content delivery and a better user experience.
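
The caching behavior described above can be sketched in a few lines: an edge cache that serves from its local store when it can and falls back to the origin on a miss (a deliberately simplified model with no TTLs or invalidation):

```python
class EdgeCache:
    """Toy CDN edge: serve cached content, fetch from origin on a miss."""

    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch  # callable: path -> content
        self.cache = {}
        self.origin_hits = 0

    def get(self, path):
        if path not in self.cache:            # cache miss: go to origin
            self.cache[path] = self.origin_fetch(path)
            self.origin_hits += 1
        return self.cache[path]               # cache hit: serve locally
```

Real CDNs layer TTLs, invalidations, and many geographically distributed edges on top of this basic idea, but the miss-then-cache pattern is the core of it.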

CDNs are widely used by websites, online services, and media companies to deliver content to their users. They are also used to offload traffic from the origin server, which can reduce the load and improve the reliability of the origin server.

Here is a link to the Wikipedia article on Content Delivery Network: https://en.wikipedia.org/wiki/Content_delivery_network

What is Amazon CloudWatch?

Amazon CloudWatch is a monitoring service provided by Amazon Web Services (AWS) that provides operational and performance data on AWS resources and applications. With CloudWatch, you can monitor and collect metrics, set alarms, and take automated actions in response to changes in your resources.

CloudWatch provides a variety of metrics and log data from various AWS services, including Amazon EC2, Amazon S3, Amazon RDS, and many others. You can use CloudWatch to monitor performance and troubleshoot issues, track resource utilization and usage trends, and optimize costs.

CloudWatch also provides a centralized log repository that you can use to store and access logs from multiple AWS resources. This can help you to quickly identify and troubleshoot issues with your applications and infrastructure.

Additionally, CloudWatch offers integrations with other AWS services and third-party tools, allowing you to extend its monitoring and alerting capabilities.

Here is a link to the Amazon CloudWatch official documentation: https://aws.amazon.com/cloudwatch/

What is Amazon Cognito on AWS Cloud?

Amazon Cognito is a service provided by Amazon Web Services (AWS) for managing user authentication and identity management for web and mobile applications. It enables you to create unique identities for your users, authenticate users with your own authentication systems or with AWS, and manage authorization for your users.

Cognito provides a simple and secure way to manage user sign-up, sign-in, and access control, allowing you to focus on building your application instead of managing user identities.

Cognito supports both standard user sign-up and sign-in and social identity providers, such as Amazon, Facebook, Google, and others. You can also add multi-factor authentication to further secure user access.

Cognito also integrates with other AWS services, such as Amazon S3, Amazon API Gateway, and AWS Lambda, to provide a complete solution for building and deploying secure and scalable web and mobile applications.

Here is a link to the Amazon Cognito official documentation: https://aws.amazon.com/cognito/

What is Amazon Elastic Container Registry (ECR)?

Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy to store, manage, and deploy Docker container images. With Amazon ECR, you can host your Docker images in a highly available and scalable infrastructure, and integrate with other AWS services like Amazon ECS, Amazon EKS, AWS Fargate, and others.

You can use Amazon ECR to store, manage, and deploy Docker images for your applications, and share images across teams within your organization. You can also control access to your images using AWS Identity and Access Management (IAM) policies, and monitor the security and compliance of your images with Amazon ECR Image Scanning.

Here is a link to the Amazon ECR official documentation: https://aws.amazon.com/ecr/

What is Amazon Elastic Container Service (ECS)?

Amazon Elastic Container Service (ECS) is a fully managed container orchestration service provided by AWS. It enables you to run, manage, and scale Docker containers on a cluster of Amazon EC2 instances.

With ECS, you can easily deploy, run, and manage containers and microservices applications, and take advantage of the scalability and availability of the AWS infrastructure. ECS provides a secure and reliable platform for running containers, and helps you manage the infrastructure and operations required to run your containers in production.

Some of the key features of Amazon ECS include:

  • Cluster management: ECS makes it easy to create, manage, and scale a cluster of EC2 instances that run your containers.
  • Task definition: ECS enables you to define and manage the containers, resources, and configuration of your applications in a task definition.
  • Load balancing: ECS integrates with Elastic Load Balancing, enabling you to distribute incoming traffic evenly across your containers.
  • Automated scaling: ECS provides the ability to automatically scale the number of containers running in your cluster based on demand.
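
A task definition of the kind described in the bullets can be sketched as a Python dict, in the shape boto3’s `register_task_definition` accepts (the family name, image, and resource sizes are illustrative):

```python
task_definition = {
    "family": "web-app",                 # illustrative task family name
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",     # container image to run
            "cpu": 256,                  # CPU units (1024 = 1 vCPU)
            "memory": 512,               # hard memory limit in MiB
            "essential": True,           # task stops if this container stops
            "portMappings": [
                {"containerPort": 80, "protocol": "tcp"},
            ],
        }
    ],
}
```

In boto3 you would register it with `ecs.register_task_definition(**task_definition)`, and then reference the family in a service or a run-task call.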

Here is a link to the official Amazon ECS documentation: https://aws.amazon.com/ecs/

What is Amazon IAM role?

An AWS IAM (Identity and Access Management) role is an identity in AWS with a set of permissions that determine what actions can be performed in your AWS environment. Unlike an IAM user, a role has no long-term credentials; instead, it is assumed by trusted entities, such as AWS services, applications running on EC2 instances, Lambda functions, or users in another trusted AWS account, which then receive temporary credentials. Cross-account role assumption provides a flexible and secure way of granting access to resources in AWS.

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html

What is Amazon Inspector?

Amazon Inspector is a security assessment service that helps improve the security and compliance of applications deployed on AWS. It enables you to automatically identify security vulnerabilities and deviations from best practices in your Amazon EC2 instances, and provides recommendations for remediation. With Amazon Inspector, you can assess the security and compliance of your applications and identify potential security issues before they are exploited. The service automatically runs a set of security checks and generates a report that highlights security findings and provides recommendations for improvement.

Here is the official documentation for Amazon Inspector: https://aws.amazon.com/inspector/

What is Amazon RDS?

Amazon Relational Database Service (Amazon RDS) is a managed relational database service provided by Amazon Web Services (AWS). It makes it easy to set up, operate, and scale a relational database in the cloud. With Amazon RDS, you can choose from popular database engines such as Amazon Aurora, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL, and have the database software and infrastructure managed by AWS.

Amazon RDS handles tasks such as database setup, patching, backup, recovery, and failure detection and repair, freeing you from manual management tasks so that you can focus on application development and business initiatives.

Here is the official documentation for Amazon RDS: https://aws.amazon.com/rds/
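
To make the "managed" part concrete, here is a sketch of the parameters you would pass when creating an RDS instance with boto3. The identifier, instance class, and sizes are hypothetical example values, not recommendations:

```python
# Sketch of parameters for creating an RDS instance. The identifier,
# class, and sizes here are hypothetical examples.
create_params = {
    "DBInstanceIdentifier": "app-db",     # hypothetical instance name
    "Engine": "postgres",
    "DBInstanceClass": "db.t3.micro",
    "AllocatedStorage": 20,               # GiB
    "MasterUsername": "admin_user",       # hypothetical
    "MultiAZ": True,                      # synchronous standby in another AZ
    "BackupRetentionPeriod": 7,           # days of automated backups
}

# With AWS credentials configured, this would be submitted as:
#   import boto3
#   boto3.client("rds").create_db_instance(**create_params)
print(sorted(create_params))
```

Everything below these parameters, such as patching, backups, and failover to the Multi-AZ standby, is handled by the service.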

What are Amazon RDS DB instances?

Amazon RDS DB instances are database instances managed by Amazon Relational Database Service (RDS). They allow customers to run relational databases in the AWS cloud, freeing up time and resources from database management tasks such as setup, patching, backups, and replication. Amazon RDS supports several database engines including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Microsoft SQL Server, and Oracle.

https://aws.amazon.com/rds/db-instances/

What is Application Load Balancer in AWS Cloud?

The Application Load Balancer is a type of load balancer in AWS Cloud. It operates at the application layer (layer 7) of the OSI model and routes incoming traffic to one or more targets, such as EC2 instances, containers, or IP addresses, based on the content of the request. It provides advanced request routing, content-based routing, SSL/TLS termination, and integrated container support, among other features.

Here is a link to the official documentation for Amazon Application Load Balancer: https://aws.amazon.com/elasticloadbalancing/applicationloadbalancer/
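
Content-based routing means the load balancer inspects the HTTP request itself. The toy function below is not the AWS API, just an illustration of how ALB listener rules map a request path to a target group (the rule paths and target group names are hypothetical):

```python
# Illustrative only: a miniature version of ALB path-based listener
# rules. Each rule maps a path prefix to a hypothetical target group;
# the "/" entry acts as the default rule.
RULES = [
    ("/api/", "api-target-group"),
    ("/images/", "images-target-group"),
    ("/", "web-target-group"),  # default rule
]

def route(path: str) -> str:
    """Return the target group whose path pattern matches first."""
    for prefix, target_group in RULES:
        if path.startswith(prefix):
            return target_group
    raise ValueError("no matching rule")

print(route("/api/users"))   # → api-target-group
print(route("/index.html"))  # → web-target-group
```

Real listener rules can also match on host headers, HTTP headers, methods, and query strings, but the first-match principle is the same.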

What is Auto Scaling group in AWS Cloud?

An Auto Scaling group is a component of the Amazon EC2 Auto Scaling service that allows you to automatically increase or decrease the number of EC2 instances in a group based on certain criteria, such as traffic demand or CPU utilization. Auto Scaling groups ensure that your application always has the right number of instances to handle incoming traffic and maintain performance. They also allow you to scale up and down quickly to respond to changes in traffic demand, reducing the risk of downtime or poor performance due to insufficient capacity.

https://docs.aws.amazon.com/en_us/autoscaling/ec2/userguide/AutoScalingGroup.html
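
The scaling decision itself is simple arithmetic. Under target tracking, the group sizes itself so the average metric lands near a target value; the sketch below approximates that math (the 50% CPU target and the group bounds are hypothetical configuration, and real target tracking adds cooldowns and smoothing):

```python
import math

def desired_capacity(current_instances: int, avg_cpu: float,
                     target_cpu: float = 50.0,
                     min_size: int = 1, max_size: int = 10) -> int:
    """Approximate target-tracking math: scale the fleet proportionally
    so average CPU moves toward the target, clamped to group bounds."""
    desired = math.ceil(current_instances * avg_cpu / target_cpu)
    return max(min_size, min(max_size, desired))

print(desired_capacity(4, avg_cpu=90.0))  # load high → 8 (scale out)
print(desired_capacity(4, avg_cpu=20.0))  # load low  → 2 (scale in)
```

The clamp to `min_size`/`max_size` is what stops a metric spike from growing the group without bound, or a quiet period from shrinking it to zero.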

What is AWS Fargate?

AWS Fargate is a technology for Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS) that allows users to run containers without having to manage the underlying instances. Fargate abstracts away the infrastructure management, so users can focus on building and operating their applications. With Fargate, users simply define the desired number of tasks and the required resources, and Fargate launches and manages the containers in the cloud.

Reference: https://aws.amazon.com/fargate/

What is AWS Firewall Manager?

AWS Firewall Manager is a security management service in Amazon Web Services (AWS) that provides centralized policy management for AWS WAF (Web Application Firewall) rules, AWS Shield Advanced protections, and Amazon Virtual Private Cloud (VPC) security groups. It allows you to define your security policies once and apply them across all of your accounts and applications, reducing the risk of security misconfigurations and increasing operational efficiency. With Firewall Manager, you can also automate security updates and simplify compliance by applying predefined security templates.

https://aws.amazon.com/firewall-manager/

What is AWS Lambda Function?

AWS Lambda is a serverless computing service offered by Amazon Web Services (AWS) that allows you to run code without having to provision or manage servers. It allows you to run your code in response to events such as changes to data in an Amazon S3 bucket, or updates to a DynamoDB table. Lambda automatically scales your application in response to incoming request traffic and it charges you only for the compute time that you consume.

Reference: https://aws.amazon.com/lambda/
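
A Lambda function is just a handler that receives an event. A minimal sketch in Python (the event shape here imitates a simple API-style payload; the function name and fields are hypothetical, and locally you can call the handler directly):

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda handler: return a greeting for the given name.
    In AWS, 'event' carries the trigger payload (S3 notification,
    API Gateway request, etc.) and 'context' carries runtime metadata."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for testing; in AWS the service invokes the handler.
print(lambda_handler({"name": "AWS"}, None))
```

Because the service invokes the handler per event and scales instances behind the scenes, there is no server to size or patch, and billing is per invocation duration.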

What is AWS Shield Advanced policy?

AWS Shield Advanced is a security service offered by Amazon Web Services (AWS) that provides protection against distributed denial of service (DDoS) attacks for applications running on the AWS cloud. It offers advanced features such as traffic filtering, real-time attack visibility, and automatic inline mitigations that can protect your applications even when a large-scale DDoS attack is in progress. AWS Shield Advanced also provides 24/7 access to AWS DDoS response experts who can help you mitigate attacks and restore normal traffic flow to your applications.

https://aws.amazon.com/shield/advanced/

What is AWS Snowball Edge Storage Optimized Device?

The AWS Snowball Edge Storage Optimized device is a data transfer appliance provided by Amazon Web Services (AWS) that helps transfer large amounts of data into and out of AWS. It is designed to be rugged and secure, and it can transfer data at high speeds, making it ideal for moving large datasets in environments with limited or unreliable network connectivity. The device is equipped with high-capacity storage and onboard compute, allowing you to run compute functions and store data locally. It is part of the AWS Snow family, which also includes AWS Snowball and AWS Snowmobile for data transfers at other scales.

https://aws.amazon.com/snowball-edge/

What is AWS Snowball?

AWS Snowball is a data migration service provided by Amazon Web Services (AWS). It helps to transfer large amounts of data into and out of the AWS Cloud, particularly when the amount of data being transferred is too large to be done over the internet in a reasonable time frame. AWS Snowball uses physical storage devices, called Snowballs, that are shipped to customers to transfer data to and from the AWS Cloud. The data transfer is performed in parallel to increase the speed of data migration and to minimize the impact on the customer’s network.

https://aws.amazon.com/snowball/
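
The case for shipping data physically is plain arithmetic: at network speeds, large datasets take weeks or months to upload. A quick back-of-the-envelope comparison (the link speed, utilization, and dataset sizes are example figures, and decimal units are used for simplicity):

```python
def transfer_days(dataset_tb: float, link_gbps: float,
                  utilization: float = 0.8) -> float:
    """Days to move a dataset over a network link at a given sustained
    utilization. Uses decimal units: 1 TB = 8e12 bits, 1 Gbps = 1e9 b/s."""
    bits = dataset_tb * 8e12
    seconds = bits / (link_gbps * 1e9 * utilization)
    return seconds / 86400

# 80 TB over a dedicated 1 Gbps link at 80% utilization:
print(round(transfer_days(80, 1.0), 1))   # ≈ 9.3 days
# 1 PB (1000 TB) over the same link:
print(round(transfer_days(1000, 1.0)))    # ≈ 116 days
```

At petabyte scale, a courier shipment of a storage appliance beats most network links, which is the entire premise of the Snow family.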

What is AWS Snowmobile?

AWS Snowmobile is a data transfer service provided by AWS that allows you to move extremely large amounts of data, up to 100 PB per Snowmobile, into the AWS Cloud. This service is intended for use cases such as migrating entire data centers, large-scale backup and archive, and moving massive datasets for disaster recovery. The Snowmobile itself is a secure, climate-controlled shipping container, hauled by a truck, that transfers data from your on-premises infrastructure to AWS. Once the data is uploaded, it can be stored in Amazon S3 or Amazon S3 Glacier for long-term retention or further processing.

Reference: https://aws.amazon.com/snowmobile/

What is Cross-site scripting for AWS Cloud?

Cross-site scripting (XSS) is a type of security vulnerability that allows attackers to inject malicious code into a website that is viewed by other users. In an XSS attack, the attacker creates a payload (usually in the form of a script) that is executed by the browser of a victim who visits the targeted website. This can result in sensitive information being stolen or malicious actions being performed on behalf of the victim. In the context of AWS Cloud, protecting against XSS attacks is an important aspect of securing web-based applications and resources. This can be achieved through a combination of secure coding practices, input validation, and using appropriate security controls such as web application firewalls (WAFs) to block XSS attacks.

https://aws.amazon.com/security/security-bulletins/cross-site-scripting-xss/
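
The first line of defense against XSS is output encoding: user-supplied text must be escaped before being embedded in HTML, so a script payload renders as inert text instead of executing. Python's standard library shows the idea:

```python
import html

# Untrusted input containing a classic XSS payload.
user_input = '<script>alert("stolen cookies")</script>'

# Escaping turns markup characters into HTML entities, so the browser
# displays the text rather than executing it as a script.
safe = html.escape(user_input)
print(safe)
# &lt;script&gt;alert(&quot;stolen cookies&quot;)&lt;/script&gt;
```

In an AWS deployment this application-level escaping is complemented, not replaced, by controls like AWS WAF rules that block common XSS patterns at the edge.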

What is Customer Gateway on AWS Cloud?

A Customer Gateway in AWS Cloud refers to a logical representation of a customer’s on-premises VPN (Virtual Private Network) device. It is used in Amazon Virtual Private Cloud (VPC) to allow communication between an Amazon VPC and a customer’s network. The customer gateway is the device in the customer’s network that routes the data to Amazon VPC over the Internet or a VPN connection.

Reference: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_VPN.html

What is Disaster Recovery in AWS Cloud?

Disaster Recovery (DR) in AWS Cloud is the practice of preparing for and recovering from events that disrupt your workloads, such as hardware failures, natural disasters, or human error. AWS supports a range of DR strategies with different cost and recovery-time trade-offs, from simple backup and restore, through Pilot Light and Warm Standby, up to fully active multi-site deployments across Regions.

What is DynamoDB with DynamoDB Accelerator (DAX)?

DynamoDB Accelerator (DAX) is a fully managed, in-memory cache for Amazon DynamoDB that can reduce read latency from single-digit milliseconds to microseconds. DAX is API-compatible with DynamoDB, so read-heavy applications can adopt it without rewriting their data access logic.
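
DAX is essentially a read-through cache in front of DynamoDB: a cache hit is served from memory, a miss falls through to the table and populates the cache. The toy sketch below illustrates the pattern only; the table is simulated with a dict, and real DAX is used transparently through its DynamoDB-compatible client:

```python
# Illustration of the read-through caching pattern that DAX implements.
# The "table" dict stands in for a DynamoDB table; keys are hypothetical.
table = {"user#1": {"name": "Ana"}, "user#2": {"name": "Bruno"}}
cache: dict = {}
misses = 0

def get_item(key):
    """Serve from cache when possible; otherwise read the table
    and populate the cache (a read-through miss)."""
    global misses
    if key in cache:
        return cache[key]   # cache hit: microsecond-class in DAX
    misses += 1
    item = table[key]       # cache miss: read the underlying table
    cache[key] = item
    return item

get_item("user#1")
get_item("user#1")
print(misses)  # 1 — the second read was a cache hit
```

The value of the pattern is that repeated reads of hot items never touch the table, cutting both latency and consumed read capacity.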

What is DynamoDB?

Amazon DynamoDB is a fully managed NoSQL key-value and document database that delivers single-digit-millisecond performance at any scale. It handles provisioning, replication across multiple Availability Zones, and scaling automatically, and offers features such as on-demand capacity, global tables, and point-in-time recovery.

What is General Purpose SSD Storage in AWS Cloud?

General Purpose SSD (gp2 and gp3) is an Amazon EBS volume type that balances price and performance for a wide variety of workloads, such as boot volumes, development environments, and small to medium databases. gp2 volumes scale baseline performance with size (3 IOPS per GiB), while gp3 volumes let you provision IOPS and throughput independently of volume size.
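
For gp2 volumes, baseline performance is a simple function of size: 3 IOPS per GiB, with a floor of 100 IOPS and a ceiling of 16,000 IOPS. A small sketch of that rule:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """Baseline IOPS for a gp2 EBS volume: 3 IOPS per GiB,
    with a floor of 100 and a ceiling of 16,000 IOPS."""
    return max(100, min(16_000, size_gib * 3))

print(gp2_baseline_iops(20))      # 100   (floor applies to small volumes)
print(gp2_baseline_iops(500))     # 1500
print(gp2_baseline_iops(10_000))  # 16000 (capped at the gp2 maximum)
```

This coupling of IOPS to size is why gp2 volumes are sometimes over-provisioned in capacity purely to gain IOPS, a pattern gp3 removes by pricing the two dimensions separately.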

What is IOPS in Amazon Cloud?

IOPS (Input/Output Operations Per Second) is the unit AWS uses to measure the performance of a storage volume, that is, how many read and write operations it can complete each second. Amazon EBS volume types are rated, and in some cases provisioned, in terms of IOPS, making it a key metric when sizing storage for databases and other I/O-intensive workloads.

What is Load Balancer for AWS Cloud?

A Load Balancer in AWS Cloud is a component of the Elastic Load Balancing (ELB) service that automatically distributes incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. ELB offers several load balancer types, including the Application Load Balancer, Network Load Balancer, and Gateway Load Balancer, each suited to different protocols and routing needs.

What is multiple Availability Zones in AWS Cloud?

Multiple Availability Zones in AWS Cloud refers to deploying resources across two or more physically separate, isolated groups of data centers within an AWS Region. Spreading instances, databases, and load balancers across multiple Availability Zones protects applications from the failure of any single zone and is a core building block of highly available architectures.

What is Network Load Balancer in AWS Cloud?

A Network Load Balancer in AWS Cloud operates at the transport layer (layer 4) of the OSI model and routes TCP, UDP, and TLS traffic to targets such as EC2 instances, containers, and IP addresses. It is designed for extreme performance, handling millions of requests per second with very low latency, and supports a static IP address per Availability Zone.

What is Pilot Light for Disaster Recovery in AWS Cloud?

Pilot Light is a disaster recovery strategy in which a minimal version of your environment, typically the core data stores and their replication, is always running in a recovery Region, while the rest of the infrastructure (such as application servers) is provisioned only when a disaster occurs. This keeps standby costs low while allowing faster recovery than restoring everything from backups.

What is Provisioned IOPS storage in AWS Cloud?

Provisioned IOPS SSD (io1 and io2) is an Amazon EBS volume type designed for I/O-intensive workloads, such as large relational or NoSQL databases, that require consistent, low-latency performance. Unlike General Purpose SSD, you specify the exact number of IOPS the volume must deliver, and AWS provisions the volume to sustain that rate.

What is single Availability Zone in AWS Cloud?

A single Availability Zone in AWS Cloud is one isolated location within an AWS Region, consisting of one or more discrete data centers with redundant power, networking, and connectivity. Running a workload in a single Availability Zone is simpler and can be cheaper, but it leaves the workload exposed to a zone-level failure, which is why production architectures typically span multiple zones.

What is VPC private subnet on AWS Cloud?

A VPC private subnet on AWS Cloud is a subnet within an Amazon Virtual Private Cloud whose route table has no direct route to an internet gateway. Resources in a private subnet, such as databases or application servers, cannot be reached directly from the internet; outbound internet access, if needed, is typically provided through a NAT gateway located in a public subnet.

What is Warm Standby for Disaster Recovery in AWS Cloud?

Warm Standby is a disaster recovery strategy in which a scaled-down but fully functional copy of your production environment runs continuously in a recovery Region. When a disaster occurs, the standby environment is scaled up to handle production traffic, providing faster recovery than Pilot Light at a higher ongoing cost.

What does RTO mean for Disaster Recovery in AWS Cloud?

RTO (Recovery Time Objective) is the maximum acceptable time between the interruption of a service and its restoration. Together with RPO (Recovery Point Objective), which measures the maximum acceptable data loss expressed in time, RTO drives the choice of disaster recovery strategy: the lower the RTO, the more expensive the strategy, from backup and restore up to multi-site active-active.
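
RTO is just a time budget, which makes it easy to check candidate strategies against. A small sketch (the per-strategy recovery times are illustrative order-of-magnitude figures, not AWS guarantees):

```python
# Illustrative typical recovery times (hours) for common DR strategies,
# listed roughly cheapest-first. These are example figures only.
STRATEGIES = {
    "backup_and_restore": 24.0,
    "pilot_light": 4.0,
    "warm_standby": 0.5,
    "multi_site_active_active": 0.05,
}

def strategies_meeting_rto(rto_hours: float) -> list:
    """Return the DR strategies whose typical recovery time fits
    within the given RTO budget."""
    return [name for name, hours in STRATEGIES.items() if hours <= rto_hours]

print(strategies_meeting_rto(1.0))
# ['warm_standby', 'multi_site_active_active']
```

Reading the output cheapest-first gives the usual architecture answer: pick the least expensive strategy that still fits inside the RTO (and RPO) budget.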

Published by Pedro Carvalho

Passionate about data analysis and Power BI
