[ { "question": ": A software development company is using serverless computing with AWS Lambda to build and run applications without having to set up or manage ser vers. They have a Lambda function that connects to a MongoDB Atlas, which is a popular Database as a Ser vice (DBaaS) platform and also uses a third party API to fetch certain data for their application. On e of the developers was instructed to create the environment variables for the MongoDB database host name, username, and password as well as the API credentials that will be used by the Lambda functio n for DEV, SIT, UAT, and PROD environments. Considering that the Lambda function is storing sen sitive database and API credentials, how can this information be secured to prevent other developers in the team, or anyone, from seeing these credentia ls in plain text? Select the best option that provides ma ximum security.", "options": [ "A. Enable SSL encryption that leverages on AWS Cl oudHSM to store and encrypt the sensitive", "B. AWS Lambda does not provide encryption for the environment variables. Deploy your code", "C. There is no need to do anything because, by de fault, AWS Lambda already encrypts the", "D. Create a new KMS key and use it to enable encr yption helpers that leverage on AWS Key" ], "correct": "D. Create a new KMS key and use it to enable encr yption helpers that leverage on AWS Key", "explanation": "Explanation When you create or update Lambda functions that use environment variables, AWS Lambda encrypts them using the AWS Key Management Service. When your Lam bda function is invoked, those values are decrypted and made available to the Lambda code. The first time you create or update Lambda function s that use environment variables in a region, a def ault service key is created for you automatically within AWS KMS. This key is used to encrypt environment variables. However, if you wish to use encryption h elpers and use KMS to encrypt environment variables after your Lambda function is created, you must cre ate your own AWS KMS key and choose it instead of the default key. The default key will give errors w hen chosen. Creating your own key gives you more flexibility, including the ability to create, rotat e, disable, and define access controls, and to audi t the encryption keys used to protect your data. The option that says: There is no need to do anythi ng because, by default, AWS Lambda already encrypts the environment variables using the AWS Ke y Management Service is incorrect. Although Lambda encrypts the environment variables in your f unction by default, the sensitive information would still be visible to other users who have access to the Lambda console. This is because Lambda uses a default KMS key to encrypt the variables, which is usually accessible by other users. The best option in this scenario is to use encryption helpers to secure you r environment variables. The option that says: Enable SSL encryption that le verages on AWS CloudHSM to store and encrypt the sensitive information is also incorrect since e nabling SSL would encrypt data only when in-transit . Your other teams would still be able to view the pl aintext at-rest. Use AWS KMS instead. The option that says: AWS Lambda does not provide e ncryption for the environment variables. Deploy your code to an EC2 instance instead is inco rrect since, as mentioned, Lambda does provide encryption functionality of environment variables. 
References: https://docs.aws.amazon.com/lambda/latest/dg/env_variables.html#env_encrypt https://docs.aws.amazon.com/lambda/latest/dg/tutorial-env_console.html Check out this AWS Lambda Cheat Sheet: https://tutorialsdojo.com/aws-lambda/ AWS Lambda Overview - Serverless Computing in AWS: https://www.youtube.com/watch?v=bPVX1zHwAnY", "references": "" }, { "question": ": A company hosted an e-commerce website on an Auto Scaling group of EC2 instances behind an Application Load Balancer. The Solutions Architect noticed that the website is receiving a large number of illegitimate external requests from multiple systems with IP addresses that constantly change. To resolve the performance issues, the Solutions Architect must implement a solution that would block the illegitimate requests with minimal impact on legitimate traffic. Which of the following options fulfills this requirement?", "options": [ "A. Create a regular rule in AWS WAF and associate the web ACL to an Application Load", "B. Create a rate-based rule in AWS WAF and associate the web ACL to an Application Load", "C. Create a custom rule in the security group of the Application Load Balancer to block the", "D. Create a custom network ACL and associate it with the subnet of the Application Load" ], "correct": "B. Create a rate-based rule in AWS WAF and associate the web ACL to an Application Load", "explanation": "Explanation AWS WAF is tightly integrated with Amazon CloudFront, the Application Load Balancer (ALB), Amazon API Gateway, and AWS AppSync, services that AWS customers commonly use to deliver content for their websites and applications. When you use AWS WAF on Amazon CloudFront, your rules run in all AWS Edge Locations, located around the world close to your end-users. This means security doesn't come at the expense of performance. Blocked requests are stopped before they reach your web servers. When you use AWS WAF on regional services, such as Application Load Balancer, Amazon API Gateway, and AWS AppSync, your rules run in the region and can be used to protect Internet-facing resources as well as internal resources. A rate-based rule tracks the rate of requests for each originating IP address and triggers the rule action on IPs with rates that go over a limit. You set the limit as the number of requests per 5-minute time span. You can use this type of rule to put a temporary block on requests from an IP address that's sending excessive requests. Based on the given scenario, the requirement is to limit the number of requests from the illegitimate sources without affecting the genuine requests. To accomplish this requirement, you can use an AWS WAF web ACL. There are two types of rules in creating your own web ACL rule: regular and rate-based rules. You need to select the latter to add a rate limit to your web ACL. After creating the web ACL, you can associate it with the ALB. When the rule action triggers, AWS WAF applies the action to additional requests from the IP address until the request rate falls below the limit. Hence, the correct answer is: Create a rate-based rule in AWS WAF and associate the web ACL to an Application Load Balancer. The option that says: Create a regular rule in AWS WAF and associate the web ACL to an Application Load Balancer is incorrect because a regular rule only matches the statement defined in the rule. If you need to add a rate limit to your rule, you should create a rate-based rule.
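A rate-based rule of this kind could be sketched with boto3 and the wafv2 API roughly as follows; the names, the 2,000-request limit, and the ALB ARN are placeholders chosen for illustration:

import boto3

waf = boto3.client("wafv2", region_name="us-west-2")

acl = waf.create_web_acl(
    Name="ecommerce-rate-limit",
    Scope="REGIONAL",   # REGIONAL scope is what an ALB association requires
    DefaultAction={"Allow": {}},
    VisibilityConfig={"SampledRequestsEnabled": True, "CloudWatchMetricsEnabled": True, "MetricName": "ecommerceAcl"},
    Rules=[{
        "Name": "block-excessive-ips",
        "Priority": 0,
        "Action": {"Block": {}},
        # Block any single IP that exceeds 2,000 requests per 5-minute window
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "VisibilityConfig": {"SampledRequestsEnabled": True, "CloudWatchMetricsEnabled": True, "MetricName": "blockExcessiveIps"},
    }],
)

# Associate the web ACL with the Application Load Balancer
waf.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/app/ecommerce-alb/abc123",
)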
The option that says: Create a custom network ACL and associate it with the subnet of the Application Load Balancer to block the offending requests is incorrect. Although NACLs can help you block incoming traffic, this option wouldn't be able to limit the number of requests from a single IP address that is dynamically changing. The option that says: Create a custom rule in the security group of the Application Load Balancer to block the offending requests is incorrect because the security group can only allow incoming traffic. Remember that you can't deny traffic using security groups. In addition, it is not capable of limiting the rate of traffic to your application, unlike AWS WAF. References: https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html https://aws.amazon.com/waf/faqs/ Check out this AWS WAF Cheat Sheet: https://tutorialsdojo.com/aws-waf/ AWS Security Services Overview - WAF, Shield, CloudHSM, KMS: https://www.youtube.com/watch?v=-1S-RdeAmMo", "references": "" }, { "question": ": There was an incident in your production environment where the user data stored in the S3 bucket has been accidentally deleted by one of the Junior DevOps Engineers. The issue was escalated to your manager and after a few days, you were instructed to improve the security and protection of your AWS resources. What combination of the following options will protect the S3 objects in your bucket from both accidental deletion and overwriting? (Select TWO.)", "options": [ "A. Enable Versioning", "B. Enable Amazon S3 Intelligent-Tiering", "C. Provide access to S3 data strictly through pre-signed URL only", "D. Enable Multi-Factor Authentication Delete" ], "correct": "", "explanation": "Explanation By using Versioning and enabling MFA (Multi-Factor Authentication) Delete, you can secure and recover your S3 objects from accidental deletion or overwrite. Versioning is a means of keeping multiple variants of an object in the same bucket. Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures. You can also optionally add another layer of security by configuring a bucket to enable MFA (Multi-Factor Authentication) Delete, which requires additional authentication for either of the following operations: - Change the versioning state of your bucket - Permanently delete an object version MFA Delete requires two forms of authentication together: - Your security credentials - The concatenation of a valid serial number, a space, and the six-digit code displayed on an approved authentication device Providing access to S3 data strictly through pre-signed URL only is incorrect since a pre-signed URL gives access to the object identified in the URL. Pre-signed URLs are useful when customers perform an object upload to your S3 bucket, but does not help in preventing accidental deletes. Disallowing S3 Delete using an IAM bucket policy is incorrect since you still want users to be able to delete objects in the bucket, and you just want to prevent accidental deletions. Disallowing S3 Delete using an IAM bucket policy will restrict all delete operations to your bucket.
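A minimal boto3 sketch of enabling both protections follows; the bucket name and the MFA device serial/code are placeholders, and MFA Delete can only be turned on with the bucket owner's root credentials:

import boto3

s3 = boto3.client("s3")

# Enable versioning and MFA Delete in a single call.
# The MFA argument is "<mfa-device-serial> <current-6-digit-code>".
s3.put_bucket_versioning(
    Bucket="prod-user-data-bucket",   # placeholder bucket name
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
)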
Enabling Amazon S3 Intelligent-Tiering is incorrect since S3 Intelligent-Tiering does not help in this situation.", "references": "https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/" }, { "question": ": A telecommunications company is planning to give AWS Console access to developers. Company policy mandates the use of identity federation and role-based access control. Currently, the roles are already assigned using groups in the corporate Active Directory. In this scenario, what combination of the following services can provide developers access to the AWS console? (Select TWO.)", "options": [ "A. AWS Directory Service Simple AD", "B. IAM Roles", "C. IAM Groups", "D. AWS Directory Service AD Connector" ], "correct": "", "explanation": "Explanation Considering that the company is using a corporate Active Directory, it is best to use AWS Directory Service AD Connector for easier integration. In addition, since the roles are already assigned using groups in the corporate Active Directory, it would be better to also use IAM Roles. Take note that you can assign an IAM Role to the users or groups from your Active Directory once it is integrated with your VPC via the AWS Directory Service AD Connector. AWS Directory Service provides multiple ways to use Amazon Cloud Directory and Microsoft Active Directory (AD) with other AWS services. Directories store information about users, groups, and devices, and administrators use them to manage access to information and resources. AWS Directory Service provides multiple directory choices for customers who want to use existing Microsoft AD or Lightweight Directory Access Protocol (LDAP)-aware applications in the cloud. It also offers those same choices to developers who need a directory to manage users, groups, devices, and access. AWS Directory Service Simple AD is incorrect because this just provides a subset of the features offered by AWS Managed Microsoft AD, including the ability to manage user accounts and group memberships, create and apply group policies, securely connect to Amazon EC2 instances, and provide Kerberos-based single sign-on (SSO). In this scenario, the more suitable component to use is the AD Connector since it is a directory gateway with which you can redirect directory requests to your on-premises Microsoft Active Directory. IAM Groups is incorrect because this is just a collection of IAM users. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users. In this scenario, the more suitable one to use is IAM Roles in order for permissions to create AWS Directory Service resources. Lambda is incorrect because this is primarily used for serverless computing.", "references": "https://aws.amazon.com/blogs/security/how-to-connect-your-on-premises-active-directory-to-aws-using-ad-connector/ Check out these AWS IAM and Directory Service Cheat Sheets: https://tutorialsdojo.com/aws-identity-and-access-management-iam/ https://tutorialsdojo.com/aws-directory-service/ Here is a video tutorial on AWS Directory Service: https://youtu.be/4XeqotTYBtY" }, { "question": ": An AI-powered Forex trading application consumes thousands of data sets to train its machine learning model. The application's workload requires a high-performance, parallel hot storage to process the training datasets concurrently.
It also needs cost-effective cold storage to archive those datasets that yield low profit. Which of the following Amazon storage services should the developer use?", "options": [ "A. Use Amazon FSx For Windows File Server and Amazon S3 for hot and cold storage", "B. Use Amazon Elastic File System and Amazon S3 for hot and cold storage respectively.", "C. Use Amazon FSx For Lustre and Amazon EBS Provisioned IOPS SSD (io1) volumes for hot", "D. Use Amazon FSx For Lustre and Amazon S3 for hot and cold storage respectively." ], "correct": "D. Use Amazon FSx For Lustre and Amazon S3 for hot and cold storage respectively.", "explanation": "Explanation Hot storage refers to the storage that keeps frequently accessed data (hot data). Warm storage refers to the storage that keeps less frequently accessed data (warm data). Cold storage refers to the storage that keeps rarely accessed data (cold data). In terms of pricing, the colder the data, the cheaper it is to store, and the costlier it is to access when needed. Amazon FSx For Lustre is a high-performance file system for fast processing of workloads. Lustre is a popular open-source parallel file system which stores data across multiple network file servers to maximize performance and reduce bottlenecks. Amazon FSx for Windows File Server is a fully managed Microsoft Windows file system with full support for the SMB protocol, Windows NTFS, and Microsoft Active Directory (AD) integration. Amazon Elastic File System is a fully-managed file storage service that makes it easy to set up and scale file storage in the Amazon Cloud. Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. S3 offers different storage tiers for different use cases (frequently accessed data, infrequently accessed data, and rarely accessed data). The question has two requirements: High-performance, parallel hot storage to process the training datasets concurrently. Cost-effective cold storage to keep the archived datasets that are accessed infrequently. In this case, we can use Amazon FSx For Lustre for the first requirement, as it provides a high-performance, parallel file system for hot data. On the second requirement, we can use Amazon S3 for storing cold data. Amazon S3 supports a cold storage system via Amazon S3 Glacier / Glacier Deep Archive. Hence, the correct answer is: Use Amazon FSx For Lustre and Amazon S3 for hot and cold storage respectively. Using Amazon FSx For Lustre and Amazon EBS Provisioned IOPS SSD (io1) volumes for hot and cold storage respectively is incorrect because the Provisioned IOPS SSD (io1) volumes are designed for storing hot data (data that are frequently accessed) used in I/O-intensive workloads. EBS has a storage option called \"Cold HDD,\" but due to its price, it is not ideal for data archiving. EBS Cold HDD is much more expensive than Amazon S3 Glacier / Glacier Deep Archive and is often utilized in applications where sequential cold data is read less frequently. Using Amazon Elastic File System and Amazon S3 for hot and cold storage respectively is incorrect. Although EFS supports concurrent access to data, it does not have the high-performance ability that is required for machine learning workloads. Using Amazon FSx For Windows File Server and Amazon S3 for hot and cold storage respectively is incorrect because Amazon FSx For Windows File Server does not have a parallel file system, unlike Lustre.
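As an illustrative sketch, an FSx for Lustre file system linked to the S3 bucket that holds the training data could be created with boto3 roughly as follows; the subnet ID and bucket names are placeholders:

import boto3

fsx = boto3.client("fsx")

# Lustre file system whose contents are lazily loaded from, and exported back to, S3
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                       # GiB, the minimum Lustre capacity
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder subnet
    LustreConfiguration={
        "ImportPath": "s3://training-datasets-bucket",
        "ExportPath": "s3://training-datasets-bucket/processed",
    },
)

Low-profit datasets can then stay in S3 and be transitioned to Glacier or Glacier Deep Archive with a lifecycle rule, while only the hot working set lives on the Lustre file system.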
References: https://aws.amazon.com/fsx/ https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-storage-optimization/aws-storage-services.html https://aws.amazon.com/blogs/startups/picking-the-right-data-store-for-your-workload/ Check out this Amazon FSx Cheat Sheet: https://tutorialsdojo.com/amazon-fsx/", "references": "" }, { "question": ": A newly hired Solutions Architect is assigned to manage a set of CloudFormation templates that are used in the company's cloud architecture in AWS. The Architect accessed the templates and tried to analyze the configured IAM policy for an S3 bucket. { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:Get*\", \"s3:List*\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": \"s3:PutObject\", \"Resource\": \"arn:aws:s3:::boracay/*\" } ] }", "options": [ "A. An IAM user with this IAM policy is allowed to read objects in the boracay S3 bucket", "B. An IAM user with this IAM policy is allowed to change access rights for the boracay", "C. An IAM user with this IAM policy is allowed to write objects into the boracay S3", "D. An IAM user with this IAM policy is allowed to read objects from the boracay S3" ], "correct": "", "explanation": "Explanation You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, AWS Organizations SCPs, ACLs, and session policies. IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, if a policy allows the GetUser action, then a user with that policy can get user information from the AWS Management Console, the AWS CLI, or the AWS API. When you create an IAM user, you can choose to allow console or programmatic access. If console access is allowed, the IAM user can sign in to the console using a user name and password. Or if programmatic access is allowed, the user can use access keys to work with the CLI or API. Based on the provided IAM policy, the user is only allowed to get, write, and list all of the objects for the boracay S3 bucket. The s3:PutObject basically means that you can submit a PUT object request to the S3 bucket to store data. Hence, the correct answers are: - An IAM user with this IAM policy is allowed to read objects from all S3 buckets owned by the account. - An IAM user with this IAM policy is allowed to write objects into the boracay S3 bucket. - An IAM user with this IAM policy is allowed to read objects from the boracay S3 bucket. The option that says: An IAM user with this IAM policy is allowed to change access rights for the boracay S3 bucket is incorrect because the template does not have any statements which allow the user to change access rights in the bucket.
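One way to verify what the policy above actually grants is to run it through the IAM policy simulator; a rough boto3 sketch, with illustrative action names and a placeholder object ARN, could be:

import boto3, json

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:Get*", "s3:List*"], "Resource": "*"},
        {"Effect": "Allow", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::boracay/*"},
    ],
}

# Evaluate what the policy allows or denies for a few S3 actions
result = iam.simulate_custom_policy(
    PolicyInputList=[json.dumps(policy)],
    ActionNames=["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:PutBucketAcl"],
    ResourceArns=["arn:aws:s3:::boracay/photo.jpg"],
)
for r in result["EvaluationResults"]:
    print(r["EvalActionName"], r["EvalDecision"])   # e.g. s3:DeleteObject -> implicitDeny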
The option that says: An IAM user with this IAM policy is allowed to read objects in the boracay S3 bucket but not allowed to list the objects in the bucket is incorrect because it can clearly be seen in the template that there is a s3:List* which permits the user to list objects. The option that says: An IAM user with this IAM policy is allowed to read and delete objects from the boracay S3 bucket is incorrect. Although you can read objects from the bucket, you cannot delete any objects. References: https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectOps.html https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A retail website has intermittent, sporadic, and unpredictable transactional workloads throughout the day that are hard to predict. The website is currently hosted on-premises and is slated to be migrated to AWS. A new relational database is needed that autoscales capacity to meet the needs of the application's peak load and scales back down when the surge of activity is over. Which of the following options is the MOST cost-effective and suitable database setup in this scenario?", "options": [ "A. Launch a DynamoDB Global table with Auto Scaling enabled.", "B. Launch an Amazon Aurora Serverless DB cluster then set the minimum and maximum", "C. Launch an Amazon Redshift data warehouse cluster with Concurrency Scaling.", "D. Launch an Amazon Aurora Provisioned DB cluster with burstable performance DB" ], "correct": "B. Launch an Amazon Aurora Serverless DB cluster then set the minimum and maximum", "explanation": "Explanation Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora. An Aurora Serverless DB cluster is a DB cluster that automatically starts up, shuts down, and scales up or down its compute capacity based on your application's needs. Aurora Serverless provides a relatively simple, cost-effective option for infrequent, intermittent, sporadic or unpredictable workloads. It can provide this because it automatically starts up, scales compute capacity to match your application's usage and shuts down when it's not in use. Take note that a non-Serverless DB cluster for Aurora is called a provisioned DB cluster. Aurora Serverless clusters and provisioned clusters both have the same kind of high-capacity, distributed, and highly available storage volume. When you work with Amazon Aurora without Aurora Serverless (provisioned DB clusters), you can choose your DB instance class size and create Aurora Replicas to increase read throughput. If your workload changes, you can modify the DB instance class size and change the number of Aurora Replicas. This model works well when the database workload is predictable, because you can adjust capacity manually based on the expected workload. However, in some environments, workloads can be intermittent and unpredictable. There can be periods of heavy workloads that might last only a few minutes or hours, and also long periods of light activity, or even no activity. Some examples are retail websites with intermittent sales events, reporting databases that produce reports when needed, development and testing environments, and new applications with uncertain requirements. In these cases and many others, it can be difficult to configure the correct capacity at the right times.
It can also result in higher costs when you pay for capacity that isn't used. With Aurora Serverless, you can create a database endpoint without specifying the DB instance class size. You set the minimum and maximum capacity. With Aurora Serverless, the database endpoint connects to a proxy fleet that routes the workload to a fleet of resources that are automatically scaled. Because of the proxy fleet, connections are continuous as Aurora Serverless scales the resources automatically based on the minimum and maximum capacity specifications. Database client applications don't need to change to use the proxy fleet. Aurora Serverless manages the connections automatically. Scaling is rapid because it uses a pool of \"warm\" resources that are always ready to service requests. Storage and processing are separate, so you can scale down to zero processing and pay only for storage. Aurora Serverless introduces a new serverless DB engine mode for Aurora DB clusters. Non-Serverless DB clusters use the provisioned DB engine mode. Hence, the correct answer is: Launch an Amazon Aurora Serverless DB cluster then set the minimum and maximum capacity for the cluster. The option that says: Launch an Amazon Aurora Provisioned DB cluster with burstable performance DB instance class types is incorrect because an Aurora Provisioned DB cluster is not suitable for intermittent, sporadic, and unpredictable transactional workloads. This model works well when the database workload is predictable because you can adjust capacity manually based on the expected workload. A better database setup here is to use an Amazon Aurora Serverless cluster. The option that says: Launch a DynamoDB Global table with Auto Scaling enabled is incorrect because although it is using Auto Scaling, the scenario explicitly indicated that you need a relational database to handle your transactional workloads. DynamoDB is a NoSQL database and is not suitable for this use case. Moreover, the use of a DynamoDB Global table is not warranted since this is primarily used if you need a fully managed, multi-region, and multi-master database that provides fast, local, read and write performance for massively scaled, global applications. The option that says: Launch an Amazon Redshift data warehouse cluster with Concurrency Scaling is incorrect because this type of database is primarily used for online analytical processing (OLAP) and not for online transactional processing (OLTP). Concurrency Scaling is simply an Amazon Redshift feature that automatically and elastically scales the query processing power of your Redshift cluster to provide consistently fast performance for hundreds of concurrent queries.
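A minimal boto3 sketch of creating such a cluster and setting the minimum and maximum capacity follows; the identifier, credentials, and capacity values are placeholders, and the exact engine versions that support the serverless engine mode vary by region:

import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="retail-orders-cluster",   # placeholder identifier
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_A_SECRET",
    ScalingConfiguration={
        "MinCapacity": 1,                # Aurora capacity units at the low end
        "MaxCapacity": 16,               # ceiling for peak-load scaling
        "AutoPause": True,               # pause compute entirely when idle
        "SecondsUntilAutoPause": 300,
    },
)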
References: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.how-it-works.html https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html", "references": "" }, { "question": ": A popular social media website uses a CloudFront web distribution to serve their static contents to their millions of users around the globe. They are receiving a number of complaints recently that their users take a lot of time to log into their website. There are also occasions when their users are getting HTTP 504 errors. You are instructed by your manager to significantly reduce the user's login time to further optimize the system. Which of the following options should you use together to set up a cost-effective solution that can improve your application's performance? (Select TWO.)", "options": [ "A. Customize the content that the CloudFront web distribution delivers to your users using", "B. Deploy your application to multiple AWS regions to accommodate your users around the", "C. Configure your origin to add a Cache-Control max-age directive to your objects, and", "D. Set up an origin failover by creating an origin group with two origins. Specify one as the" ], "correct": "", "explanation": "Explanation Lambda@Edge lets you run Lambda functions to customize the content that CloudFront delivers, executing the functions in AWS locations closer to the viewer. The functions run in response to CloudFront events, without provisioning or managing servers. You can use Lambda functions to change CloudFront requests and responses at the following points: - After CloudFront receives a request from a viewer (viewer request) - Before CloudFront forwards the request to the origin (origin request) - After CloudFront receives the response from the origin (origin response) - Before CloudFront forwards the response to the viewer (viewer response) In the given scenario, you can use Lambda@Edge to allow your Lambda functions to customize the content that CloudFront delivers and to execute the authentication process in AWS locations closer to the users. In addition, you can set up an origin failover by creating an origin group with two origins, with one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin fails. This will alleviate the occasional HTTP 504 errors that users are experiencing. Therefore, the correct answers are: - Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, which allows your Lambda functions to execute the authentication process in AWS locations closer to the users. - Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses. The option that says: Use multiple and geographically disperse VPCs to various AWS regions then create a transit VPC to connect all of your resources.
In order to handle the requests faster, set up Lambda functions in each region using the AWS Serverless Application Model (SAM) service is incorrect because of the same reason provided above. Although setting up multiple VPCs across various regions which are connected with a transit VPC is valid, this solution still entails higher setup and maintenance costs. A more cost-effective option would be to use Lambda@Edge instead. The option that says: Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase the cache hit ratio of your CloudFront distribution is incorrect because improving the cache hit ratio for the CloudFront distribution is irrelevant in this scenario. You can improve your cache performance by increasing the proportion of your viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content. However, take note that the problem in the scenario is the sluggish authentication process of your global users and not just the caching of the static objects. The option that says: Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user is incorrect because although this may resolve the performance issue, this solution entails a significant implementation cost since you have to deploy your application to multiple AWS regions. Remember that the scenario asks for a solution that will improve the performance of the application with minimal cost. References: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html Check out these Amazon CloudFront and AWS Lambda Cheat Sheets: https://tutorialsdojo.com/amazon-cloudfront/ https://tutorialsdojo.com/aws-lambda/", "references": "" }, { "question": ": A popular mobile game uses CloudFront, Lambda, and DynamoDB for its backend services. The player data is persisted on a DynamoDB table and the static assets are distributed by CloudFront. However, there are a lot of complaints that saving and retrieving player information is taking a lot of time. To improve the game's performance, which AWS service can you use to reduce DynamoDB response times from milliseconds to microseconds?", "options": [ "A. DynamoDB Auto Scaling", "B. Amazon ElastiCache", "C. AWS Device Farm", "D. Amazon DynamoDB Accelerator (DAX)" ], "correct": "D. Amazon DynamoDB Accelerator (DAX)", "explanation": "Explanation Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second. Amazon ElastiCache is incorrect because although you may use ElastiCache as your database cache, it will not reduce the DynamoDB response time from milliseconds to microseconds as compared with DynamoDB DAX. AWS Device Farm is incorrect because this is an app testing service that lets you test and interact with your Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real time. DynamoDB Auto Scaling is incorrect because this is primarily used to automate capacity management for your tables and global secondary indexes.
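For illustration, provisioning a DAX cluster in front of the player-data table could be sketched with boto3 as follows; the cluster name, node type, role ARN, and subnet group are placeholders:

import boto3

dax = boto3.client("dax")

dax.create_cluster(
    ClusterName="game-player-cache",     # placeholder name
    NodeType="dax.r4.large",
    ReplicationFactor=3,                 # one primary plus two read replicas
    IamRoleArn="arn:aws:iam::111122223333:role/DAXServiceRole",
    SubnetGroupName="dax-subnet-group",
)

The game backend would then read and write through the DAX cluster endpoint (using the DAX SDK client) instead of calling DynamoDB directly, so that repeated reads of hot player items are served from the in-memory cache.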
References: https://aws.amazon.com/dynamodb/dax https://aws.amazon.com/device-farm Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/", "references": "" }, { "question": ": A popular social network is hosted in AWS and is us ing a DynamoDB table as its database. There is a requirement to implement a 'follow' feature where u sers can subscribe to certain updates made by a particular user and be notified via email. Which of the following is the most suitable solution that y ou should implement to meet the requirement? A. Using the Kinesis Client Library (KCL), write an application that leverages on DynamoDB Streams Kinesis Adapter that will fetch data from t he DynamoDB Streams endpoint. When there are updates made by a particular user, notify the subscribers via email using SNS.", "options": [ "B. Enable DynamoDB Stream and create an AWS Lambd a trigger, as well as the IAM role", "C. Set up a DAX cluster to access the source Dyna moDB table. Create a new DynamoDB", "D. Create a Lambda function that uses DynamoDB St reams Kinesis Adapter which will fetch" ], "correct": "B. Enable DynamoDB Stream and create an AWS Lambd a trigger, as well as the IAM role", "explanation": "Explanation A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoD B captures information about every modification to data items in the table. Whenever an application creates, updates, or delete s items in the table, DynamoDB Streams writes a str eam record with the primary key attribute(s) of the ite ms that were modified. A stream record contains information about a data modification to a single i tem in a DynamoDB table. You can configure the stre am so that the stream records capture additional informat ion, such as the \"before\" and \"after\" images of modified items. Amazon DynamoDB is integrated with AWS Lambda so th at you can create triggers--pieces of code that automatically respond to events in DynamoDB Streams . With triggers, you can build applications that re act to data modifications in DynamoDB tables. If you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function that you write. Immediately after an item in the ta ble is modified, a new record appears in the table' s stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. The Lambda function can perform any actions you specify, such as sending a notificatio n or initiating a workflow. Hence, the correct answer in this scenario is the o ption that says: Enable DynamoDB Stream and create an AWS Lambda trigger, as well as the IAM role whic h contains all of the permissions that the Lambda function will need at runtime. The data from the stream record will be processed by the Lambda function which will then publish a message t o SNS Topic that will notify the subscribers via email. The option that says: Using the Kinesis Client Libr ary (KCL), write an application that leverages on DynamoDB Streams Kinesis Adapter that will fetch da ta from the DynamoDB Streams endpoint. When there are updates made by a particular user, notify the subscribers via email using SNS is incorrect because although this is a valid solution, it is mi ssing a vital step which is to enable DynamoDB Stre ams. 
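For illustration, that missing step (enabling the stream) together with wiring the Lambda trigger could be sketched with boto3 as follows; the table and function names are placeholders:

import boto3

ddb = boto3.client("dynamodb")
lambda_client = boto3.client("lambda")

# Enable the stream on the existing table and capture the stream ARN
resp = ddb.update_table(
    TableName="UserUpdates",   # placeholder table name
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
stream_arn = resp["TableDescription"]["LatestStreamArn"]

# Create the trigger: Lambda polls the stream and is invoked with batches of records
lambda_client.create_event_source_mapping(
    EventSourceArn=stream_arn,
    FunctionName="notify-followers",   # placeholder function that publishes to an SNS topic
    StartingPosition="LATEST",
    BatchSize=100,
)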
With the DynamoDB Streams Kinesis Adapter in place, you can begin developing applications via the KCL interface, with the API calls seamlessly directed at the DynamoDB Streams endpoint. Remember that the DynamoDB Stream feature is not enabled by default. The option that says: Create a Lambda function that uses DynamoDB Streams Kinesis Adapter which will fetch data from the DynamoDB Streams endpoint. Set up an SNS Topic that will notify the subscribers via email when there is an update made by a particular user is incorrect because, just like in the above, you have to manually enable DynamoDB Streams first before you can use its endpoint. The option that says: Set up a DAX cluster to access the source DynamoDB table. Create a new DynamoDB trigger and a Lambda function. For every update made in the user data, the trigger will send data to the Lambda function which will then notify the subscribers via email using SNS is incorrect because the DynamoDB Accelerator (DAX) feature is primarily used to significantly improve the in-memory read performance of your database, and not to capture the time-ordered sequence of item-level modifications. You should use DynamoDB Streams in this scenario instead. References: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.Tutorial.html Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/", "references": "" }, { "question": ": A suite of web applications is hosted in an Auto Scaling group of EC2 instances across three Availability Zones and is configured with default settings. There is an Application Load Balancer that forwards the request to the respective target group based on the URL path. The scale-in policy has been triggered due to the low number of incoming traffic to the application. Which EC2 instance will be the first one to be terminated by your Auto Scaling group?", "options": [ "A. The EC2 instance launched from the oldest launch configuration", "B. The instance will be randomly selected by the Auto Scaling group", "C. The EC2 instance which has the least number of user sessions", "D. The EC2 instance which has been running for the longest time" ], "correct": "A. The EC2 instance launched from the oldest launch configuration", "explanation": "Explanation The default termination policy is designed to help ensure that your network architecture spans Availability Zones evenly. With the default termination policy, the behavior of the Auto Scaling group is as follows: 1. If there are instances in multiple Availability Zones, choose the Availability Zone with the most instances and at least one instance that is not protected from scale in. If there is more than one Availability Zone with this number of instances, choose the Availability Zone with the instances that use the oldest launch configuration. 2. Determine which unprotected instances in the selected Availability Zone use the oldest launch configuration. If there is one such instance, terminate it. 3. If there are multiple instances to terminate based on the above criteria, determine which unprotected instances are closest to the next billing hour. (This helps you maximize the use of your EC2 instances and manage your Amazon EC2 usage costs.) If there is one such instance, terminate it. 4. If there is more than one unprotected instance closest to the next billing hour, choose one of these instances at random.
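If a different termination order is preferred, the default policy can be overridden on the group; a boto3 sketch with a placeholder group name and an illustrative policy list:

import boto3

asg = boto3.client("autoscaling")

# Override the default termination policy for an existing Auto Scaling group
asg.update_auto_scaling_group(
    AutoScalingGroupName="web-suite-asg",   # placeholder group name
    TerminationPolicies=["OldestLaunchConfiguration", "ClosestToNextInstanceHour"],
)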
The following flow diagram illustrates how the default termination policy works: References: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html#default-termination-policy https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", "references": "" }, { "question": ": A financial application is composed of an Auto Scaling group of EC2 instances, an Application Load Balancer, and a MySQL RDS instance in a Multi-AZ Deployments configuration. To protect the confidential data of your customers, you have to ensure that your RDS database can only be accessed using the profile credentials specific to your EC2 instances via an authentication token. As the Solutions Architect of the company, which of the following should you do to meet the above requirement?", "options": [ "A. Create an IAM Role and assign it to your EC2 instances which will grant exclusive access to", "B. Enable the IAM DB Authentication.", "C. Configure SSL in your application to encrypt the database connection to RDS.", "D. Use a combination of IAM and STS to restrict access to your RDS instance via a temporary" ], "correct": "B. Enable the IAM DB Authentication.", "explanation": "Explanation You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token. An authentication token is a unique string of characters that Amazon RDS generates on request. Authentication tokens are generated using AWS Signature Version 4. Each token has a lifetime of 15 minutes. You don't need to store user credentials in the database, because authentication is managed externally using IAM. You can also still use standard database authentication. IAM database authentication provides the following benefits: Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL). You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance. For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security. Hence, enabling IAM DB Authentication is the correct answer based on the above reference. Configuring SSL in your application to encrypt the database connection to RDS is incorrect because an SSL connection is not using an authentication token from IAM. Although configuring SSL to your application can improve the security of your data in flight, it is still not a suitable option to use in this scenario. Creating an IAM Role and assigning it to your EC2 instances which will grant exclusive access to your RDS instance is incorrect because although you can create and assign an IAM Role to your EC2 instances, you still need to configure your RDS to use IAM DB Authentication. Using a combination of IAM and STS to restrict access to your RDS instance via a temporary token is incorrect because you have to use IAM DB Authentication for this scenario, and not a combination of IAM and STS.
Although STS is used to send temporary tokens for authentication, this is not a compatible use case for RDS.", "references": "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/" }, { "question": ": A pharmaceutical company has resources hosted on both their on-premises network and in the AWS cloud. They want all of their Software Architects to access resources in both environments using their on-premises credentials, which are stored in Active Directory. In this scenario, which of the following can be used to fulfill this requirement?", "options": [ "A. Set up SAML 2.0-Based Federation by using a Web Identity Federation.", "B. Set up SAML 2.0-Based Federation by using a Microsoft Active Directory Federation", "C. Use Amazon VPC", "D. Use IAM users" ], "correct": "B. Set up SAML 2.0-Based Federation by using a Microsoft Active Directory Federation", "explanation": "Explanation Since the company is using Microsoft Active Directory, which implements Security Assertion Markup Language (SAML), you can set up a SAML-Based Federation for API access to your AWS cloud. In this way, you can easily connect to AWS using the login credentials of your on-premises network. AWS supports identity federation with SAML 2.0, an open standard that many identity providers (IdPs) use. This feature enables federated single sign-on (SSO), so users can log into the AWS Management Console or call the AWS APIs without you having to create an IAM user for everyone in your organization. By using SAML, you can simplify the process of configuring federation with AWS, because you can use the IdP's service instead of writing custom identity proxy code. Before you can use SAML 2.0-based federation as described in the preceding scenario, you must configure your organization's IdP and your AWS account to trust each other. The general process for configuring this trust is described in the following steps. Inside your organization, you must have an IdP that supports SAML 2.0, like Microsoft Active Directory Federation Service (AD FS, part of Windows Server), Shibboleth, or another compatible SAML 2.0 provider. Hence, the correct answer is: Set up SAML 2.0-Based Federation by using a Microsoft Active Directory Federation Service (AD FS). Setting up SAML 2.0-Based Federation by using a Web Identity Federation is incorrect because this is primarily used to let users sign in via a well-known external identity provider (IdP), such as Login with Amazon, Facebook, or Google. It does not utilize Active Directory. Using IAM users is incorrect because the situation requires you to use the existing credentials stored in their Active Directory, and not user accounts that will be generated by IAM. Using Amazon VPC is incorrect because this only lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. This has nothing to do with user authentication or Active Directory. References: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-management-iam/", "references": "" }, { "question": ": A company has 3 DevOps engineers that are handling its software development and infrastructure management processes.
One of the engineers accidentally deleted a file hosted in Amazon S3, which has caused disruption of service. What can the DevOps engineers do to prevent this from happening again?", "options": [ "A. Set up a signed URL for all users.", "B. Use S3 Infrequently Accessed storage to store the data.", "C. Create an IAM bucket policy that disables delete operation.", "D. Enable S3 Versioning and Multi-Factor Authentication Delete on the bucket." ], "correct": "D. Enable S3 Versioning and Multi-Factor Authentication Delete on the bucket.", "explanation": "Explanation To avoid accidental deletion in an Amazon S3 bucket, you can: - Enable Versioning - Enable MFA (Multi-Factor Authentication) Delete Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures. If MFA (Multi-Factor Authentication) Delete is enabled, it requires additional authentication for either of the following operations: - Change the versioning state of your bucket - Permanently delete an object version Using S3 Infrequently Accessed storage to store the data is incorrect. Switching your storage class to S3 Infrequent Access won't help mitigate accidental deletions. Setting up a signed URL for all users is incorrect. Signed URLs give you more control over access to your content, so this feature deals more with access rather than deletion. Creating an IAM bucket policy that disables delete operation is incorrect. If you create a bucket policy preventing deletion, other users won't be able to delete objects that should be deleted. You only want to prevent accidental deletion, not disable the action itself.", "references": "http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/" }, { "question": ": An application that records weather data every minute is deployed in a fleet of Spot EC2 instances and uses a MySQL RDS database instance. Currently, there is only one RDS instance running in one Availability Zone. You plan to improve the database to ensure high availability by synchronous data replication to another RDS instance. Which of the following performs synchronous data replication in RDS?", "options": [ "A. CloudFront running as a Multi-AZ deployment", "B. DynamoDB Read Replica", "C. RDS DB instance running as a Multi-AZ deployment", "D. RDS Read Replica" ], "correct": "C. RDS DB instance running as a Multi-AZ deployment", "explanation": "Explanation When you create or modify your DB instance to run as a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. Updates to your DB instance are synchronously replicated across Availability Zones to the standby in order to keep both in sync and protect your latest database updates against DB instance failure. RDS Read Replica is incorrect as a Read Replica provides asynchronous replication instead of synchronous.
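For reference, converting a single-AZ instance like the one described into a synchronous Multi-AZ deployment is a one-parameter change; a boto3 sketch with a placeholder instance identifier:

import boto3

rds = boto3.client("rds")

# Promote the existing single-AZ instance to a Multi-AZ deployment;
# RDS provisions and maintains a synchronous standby in another AZ.
rds.modify_db_instance(
    DBInstanceIdentifier="weather-data-db",   # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=True,
)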
DynamoDB Read Replica and CloudFront running as a Multi-AZ deployment are incorrect as both DynamoDB and CloudFront do not have a Read Replica feature.", "references": "https://aws.amazon.com/rds/details/multi-az/ Amazon RDS Overview: https://youtu.be/aZmpLl8K1UU Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/" }, { "question": ": A Solutions Architect identified a series of DDoS attacks while monitoring the VPC. The Architect needs to fortify the current cloud infrastructure to protect the data of the clients. Which of the following is the most suitable solution to mitigate these kinds of attacks?", "options": [ "A. Use AWS Shield Advanced to detect and mitigate DDoS attacks.", "B. A combination of Security Groups and Network Access Control Lists to only allow", "C. Set up a web application firewall using AWS WAF to filter, monitor, and block HTTP", "D. Using the AWS Firewall Manager, set up a security layer that will prevent SYN floods, UDP" ], "correct": "A. Use AWS Shield Advanced to detect and mitigate DDoS attacks.", "explanation": "Explanation For higher levels of protection against attacks targeting your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, and Amazon Route 53 resources, you can subscribe to AWS Shield Advanced. In addition to the network and transport layer protections that come with Standard, AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall. AWS Shield Advanced also gives you 24x7 access to the AWS DDoS Response Team (DRT) and protection against DDoS-related spikes in your Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, and Amazon Route 53 charges. Hence, the correct answer is: Use AWS Shield Advanced to detect and mitigate DDoS attacks. The option that says: Using the AWS Firewall Manager, set up a security layer that will prevent SYN floods, UDP reflection attacks and other DDoS attacks is incorrect because AWS Firewall Manager is mainly used to simplify your AWS WAF administration and maintenance tasks across multiple accounts and resources. It does not protect your VPC against DDoS attacks. The option that says: Set up a web application firewall using AWS WAF to filter, monitor, and block HTTP traffic is incorrect. Even though AWS WAF can help you block common attack patterns to your VPC such as SQL injection or cross-site scripting, this is still not enough to withstand DDoS attacks. It is better to use AWS Shield in this scenario. The option that says: A combination of Security Groups and Network Access Control Lists to only allow authorized traffic to access your VPC is incorrect. Although using a combination of Security Groups and NACLs is valid to provide security to your VPC, this is not enough to mitigate a DDoS attack. You should use AWS Shield for better security protection. References: https://d1.awsstatic.com/whitepapers/Security/DDoS_White_Paper.pdf https://aws.amazon.com/shield/ Check out this AWS Shield Cheat Sheet: https://tutorialsdojo.com/aws-shield/ AWS Security Services Overview - WAF, Shield, CloudHSM, KMS: https://www.youtube.com/watch?v=-1S-RdeAmMo", "references": "" }, { "question": ": A travel photo sharing website is using Amazon S3 to serve high-quality photos to visitors of your website.
After a few days, you found out that there are other travel websites linking to and using your photos. This resulted in financial losses for your business. What is the MOST effective method to mitigate this issue?", "options": [ "A. Use CloudFront distributions for your photos.", "B. Block the IP addresses of the offending websites using NACL.", "C. Configure your S3 bucket to remove public read access and use pre-signed URLs with expiry dates.", "D. Store and privately serve the high-quality photos on Amazon WorkDocs instead." ], "correct": "C. Configure your S3 bucket to remove public read access and use pre-signed URLs with expiry dates.", "explanation": "Explanation/Reference:", "references": "" }, { "question": ": The company that you are working for has a highly available architecture consisting of an elastic load balancer and several EC2 instances configured with auto-scaling in three Availability Zones. You want to monitor your EC2 instances based on a particular metric, which is not readily available in CloudWatch. Which of the following is a custom metric in CloudWatch which you have to manually set up?", "options": [ "A. Network packets out of an EC2 instance", "B. CPU Utilization of an EC2 instance", "C. Disk Reads activity of an EC2 instance", "D. Memory Utilization of an EC2 instance" ], "correct": "D. Memory Utilization of an EC2 instance", "explanation": "Explanation CloudWatch has available Amazon EC2 metrics for you to use for monitoring. CPU Utilization identifies the processing power required to run an application upon a selected instance. Network Utilization identifies the volume of incoming and outgoing network traffic to a single instance. The Disk Reads metric is used to determine the volume of the data the application reads from the hard disk of the instance. This can be used to determine the speed of the application. However, there are certain metrics that are not readily available in CloudWatch, such as memory utilization, disk space utilization, and many others, which can be collected by setting up a custom metric. You need to prepare a custom metric using CloudWatch Monitoring Scripts, which are written in Perl. You can also install the CloudWatch Agent to collect more system-level metrics from Amazon EC2 instances. Here's the list of custom metrics that you can set up: - Memory utilization - Disk swap utilization - Disk space utilization - Page file utilization - Log collection CPU Utilization of an EC2 instance, Disk Reads activity of an EC2 instance, and Network packets out of an EC2 instance are all incorrect because these metrics are readily available in CloudWatch by default. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_ec2.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html#using_put_script Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/ Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/", "references": "" }, { "question": ": A Solutions Architect needs to make sure that the On-Demand EC2 instance can only be accessed from this IP address (110.238.98.71) via an SSH connection. Which configuration below will satisfy this requirement?", "options": [ "A. Security Group Inbound Rule: Protocol UDP, Port Range 22, Source 110.238.98.71/32", "B. Security Group Inbound Rule: Protocol TCP, Port Range 22, Source 110.238.98.71/0", "C. Security Group Inbound Rule: Protocol TCP, Port Range 22, Source 110.238.98.71/32", "D.
Security Group Inbound Rule: Protocol UDP, Port Range 22, Source 110.238.98.71/0" ], "correct": "C. Security Group Inbound Rule: Protocol TCP, Port Range 22, Source 110.238.98.71/32", "explanation": "Explanation A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC can be assigned to a different set of security groups. The requirement is to only allow the individual IP of the client and not the entire network. Therefore, the proper CIDR notation should be used. The /32 denotes one IP address and the /0 refers to the entire network. Take note that the SSH protocol uses TCP and port 22. Hence, the correct answer is: Protocol TCP, Port Range 22, Source 110.238.98.71/32. Protocol UDP, Port Range 22, Source 110.238.98.71/32 and Protocol UDP, Port Range 22, Source 110.238.98.71/0 are incorrect as they are using UDP. Protocol TCP, Port Range 22, Source 110.238.98.71/0 is incorrect because its /0 CIDR notation allows the entire network instead of a single IP. Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#security-group-rules Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate/", "references": "" }, { "question": ": An online cryptocurrency exchange platform is hosted in AWS which uses an ECS Cluster and RDS in Multi-AZ Deployments configuration. The application is heavily using the RDS instance to process complex read and write database operations. To maintain the reliability, availability, and performance of your systems, you have to closely monitor how the different processes or threads on a DB instance use the CPU, including the percentage of the CPU bandwidth and total memory consumed by each process. Which of the following is the most suitable solution to properly monitor your database?", "options": [ "A. Use Amazon CloudWatch to monitor the CPU Utilization of your database.", "B. Create a script that collects and publishes custom metrics to CloudWatch, which tracks the", "C. Enable Enhanced Monitoring in RDS.", "D. Check the CPU% and MEM% metrics which are readily available in the Amazon RDS" ], "correct": "C. Enable Enhanced Monitoring in RDS.", "explanation": "Explanation Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console, or consume the Enhanced Monitoring JSON output from CloudWatch Logs in a monitoring system of your choice. By default, Enhanced Monitoring metrics are stored in CloudWatch Logs for 30 days. To modify the amount of time the metrics are stored in CloudWatch Logs, change the retention for the RDSOSMetrics log group in the CloudWatch console.
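For reference, Enhanced Monitoring can be enabled on an existing DB instance with a call similar to the boto3 sketch below; the DB identifier and the monitoring role ARN are placeholder values, not from the scenario:

# Hypothetical sketch: enable Enhanced Monitoring on an existing RDS instance.
# The monitoring role must allow RDS to publish OS metrics to CloudWatch Logs.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_instance(
    DBInstanceIdentifier="exchange-platform-db",                              # placeholder identifier
    MonitoringInterval=60,                                                    # seconds between OS metric samples (1, 5, 10, 15, 30, or 60)
    MonitoringRoleArn="arn:aws:iam::111122223333:role/rds-monitoring-role",   # placeholder role ARN
    ApplyImmediately=True,
)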
Take note that there are certain differences between CloudWatch and Enhanced Monitoring metrics. CloudWatch gathers metrics about CPU utilization from the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance. As a result, you might find differences between the measurements, because the hypervisor layer performs a small amount of work. Hence, enabling Enhanced Monitoring in RDS is the correct answer in this specific scenario. The differences can be greater if your DB instances use smaller instance classes, because then there are likely more virtual machines (VMs) that are managed by the hypervisor layer on a single physical instance. Enhanced Monitoring metrics are useful when you want to see how different processes or threads on a DB instance use the CPU. Using Amazon CloudWatch to monitor the CPU Utilization of your database is incorrect because although you can use this to monitor the CPU Utilization of your database instance, it does not provide the percentage of the CPU bandwidth and total memory consumed by each database process in your RDS instance. Take note that CloudWatch gathers metrics about CPU utilization from the hypervisor for a DB instance while RDS Enhanced Monitoring gathers its metrics from an agent on the instance. The option that says: Create a script that collects and publishes custom metrics to CloudWatch, which tracks the real-time CPU Utilization of the RDS instance and then set up a custom CloudWatch dashboard to view the metrics is incorrect because although you can use Amazon CloudWatch Logs and a CloudWatch dashboard to monitor the CPU Utilization of the database instance, using CloudWatch alone is still not enough to get the specific percentage of the CPU bandwidth and total memory consumed by each database process. The data provided by CloudWatch is not as detailed as compared with the Enhanced Monitoring feature in RDS. Take note as well that you do not have direct access to the instances/servers of your RDS database instance, unlike with your EC2 instances where you can install a CloudWatch agent or a custom script to get CPU and memory utilization of your instance. The option that says: Check the CPU% and MEM% metrics which are readily available in the Amazon RDS console that shows the percentage of the CPU bandwidth and total memory consumed by each database process of your RDS instance is incorrect because the CPU% and MEM% metrics are not readily available in the Amazon RDS console, which is contrary to what is being stated in this option. References: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.html#USER_Monitoring.OS.CloudWatchLogs https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MonitoringOverview.html#monitoring-cloudwatch Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/ Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/", "references": "" }, { "question": ": A government entity is conducting a population and housing census in the city. Each household information uploaded on their online portal is stored in encrypted files in Amazon S3. The government assigned its Solutions Architect to set compliance policies that verify sensitive data in a manner that meets their compliance standards.
They should also be alerted i f there are compromised files detected containing personally identifiable information (PII), protecte d health information (PHI) or intellectual properti es (IP). Which of the following should the Architect impleme nt to satisfy this requirement?", "options": [ "A. Set up and configure Amazon Macie to monitor a nd detect usage patterns on their Amazon", "B. Set up and configure Amazon Inspector to send out alert notifications whenever a security", "C. Set up and configure Amazon Rekognition to mon itor and recognize patterns on their", "D. Set up and configure Amazon GuardDuty to monitor malicious activity on their Amazon S3" ], "correct": "A. Set up and configure Amazon Macie to monitor a nd detect usage patterns on their Amazon", "explanation": "Explanation Amazon Macie is an ML-powered security service that helps you prevent data loss by automatically discovering, classifying, and protecting sensitive data stored in Amazon S3. Amazon Macie uses machine learning to recognize sensitive data such as person ally identifiable information (PII) or intellectual property, assigns a business value, and provides visibility i nto where this data is stored and how it is being u sed in your organization. Amazon Macie continuously monitors data access acti vity for anomalies, and delivers alerts when it det ects risk of unauthorized access or inadvertent data leaks. A mazon Macie has ability to detect global access permissions inadvertently being set on sensitive da ta, detect uploading of API keys inside source code , and verify sensitive customer data is being stored and accessed in a manner that meets their compliance standards. Hence, the correct answer is: Set up and configure Amazon Macie to monitor and detect usage patterns on their Amazon S3 data. The option that says: Set up and configure Amazon R ekognition to monitor and recognize patterns on their Amazon S3 data is incorrect because Rekogniti on is simply a service that can identify the object s, people, text, scenes, and activities, as well as detect any inappropriate content on your images or videos. The option that says: Set up and configure Amazon G uardDuty to monitor malicious activity on their Amazon S3 data is incorrect because GuardDuty is ju st a threat detection service that continuously monitors for malicious activity and unauthorized be havior to protect your AWS accounts and workloads. The option that says: Set up and configure Amazon I nspector to send out alert notifications whenever a security violation is detected on their Amazon S3 data is incorrect because Inspector is basically a n automated security assessment service that helps im prove the security and compliance of applications deployed on AWS. References: https://docs.aws.amazon.com/macie/latest/userguide/ what-is-macie.html https://aws.amazon.com/macie/faq/ https://docs.aws.amazon.com/macie/index.html Check out this Amazon Macie Cheat Sheet: https://tutorialsdojo.com/amazon-macie/ AWS Security Services Overview - Secrets Manager, A CM, Macie: https://www.youtube.com/watch?v=ogVamzF2Dzk", "references": "" }, { "question": ": An IT consultant is working for a large financial c ompany. The role of the consultant is to help the development team build a highly available web appli cation using stateless web servers. In this scenario, which AWS services are suitable f or storing session state data? (Select TWO.)", "options": [ "A. RDS", "B. Redshift Spectrum", "C. DynamoDB", "D. 
Glacier" ], "correct": "", "explanation": "Explanation DynamoDB and ElastiCache are the correct answers. Y ou can store session state data on both DynamoDB and ElastiCache. These AWS services provide high-pe rformance storage of key-value pairs which can be used to build a highly available web application. Redshift Spectrum is incorrect since this is a data warehousing solution where you can directly query data from your data warehouse. Redshift is not suitable for s toring session state, but more on analytics and OLA P processes. RDS is incorrect as well since this is a relational database solution of AWS. This relational storage type might not be the best fit for session states, and it migh t not provide the performance you need compared to DynamoDB for the same cost. S3 Glacier is incorrect since this is a low-cost cl oud storage service for data archiving and long-ter m backup. The archival and retrieval speeds of Glacier is too slow for handling session states. References: https://aws.amazon.com/caching/database-caching/ https://aws.amazon.com/caching/session-management/ Check out this Amazon Elasticache Cheat Sheet: https://tutorialsdojo.com/amazon-elasticache/", "references": "" }, { "question": ": A company has a web application that uses Internet Information Services (IIS) for Windows Server. A fi le share is used to store the application data on the networ k-attached storage of the company's on-premises dat a center. To achieve a highly available system, they plan to migrate the application and file share to A WS. Which of the following can be used to fulfill this requirement?", "options": [ "A. Migrate the existing file share configuration to AWS Storage Gateway.", "B. Migrate the existing file share configuration to Amazon FSx for Windows File Server.", "C. Migrate the existing file share configuration to Amazon EFS.", "D. Migrate the existing file share configuration to Amazon EBS." ], "correct": "B. Migrate the existing file share configuration to Amazon FSx for Windows File Server.", "explanation": "Explanation Amazon FSx for Windows File Server provides fully m anaged Microsoft Windows file servers, backed by a fully native Windows file system. Amazon FSx f or Windows File Server has the features, performanc e, and compatibility to easily lift and shift enterprise a pplications to the AWS Cloud. It is accessible from Windows, Linux, and macOS compute instances and devices. Tho usands of compute instances and devices can access a file system concurrently. In this scenario, you need to migrate your existing file share configuration to the cloud. Among the o ptions given, the best possible answer is Amazon FSx. A fi le share is a specific folder in your file system, including the folder's subfolders, which you make a ccessible to your compute instances via the SMB protocol. To migrate file share configurations from your on-premises file system, you must migrate you r files first to Amazon FSx before migrating your file shar e configuration. Hence, the correct answer is: Migrate the existing file share configuration to Amazon FSx for Windows File Server. The option that says: Migrate the existing file sha re configuration to AWS Storage Gateway is incorrect because AWS Storage Gateway is primarily used to integrate your on-premises network to AWS but not for migrating your applications. Using a fi le share in Storage Gateway implies that you will s till keep your on-premises systems, and not entirely migrate it. 
The option that says: Migrate the existing file sha re configuration to Amazon EFS is incorrect because it is stated in the scenario that the company is using a file share that runs on a Windows server. Remember that Amazon EFS only supports Linux workloads. The option that says: Migrate the existing file sha re configuration to Amazon EBS is incorrect because EBS is primarily used as block storage for EC2 instances a nd not as a shared file system. A file share is a s pecific folder in a file system that you can access using a server message block (SMB) protocol. Amazon EBS do es not support SMB protocol. References: https://aws.amazon.com/fsx/windows/faqs/ https://docs.aws.amazon.com/fsx/latest/WindowsGuide /migrate-file-share-config-to-fsx.html Check out this Amazon FSx Cheat Sheet: https://tutorialsdojo.com/amazon-fsx/", "references": "" }, { "question": ": A company plans to migrate its on-premises workload to AWS. The current architecture is composed of a Microsoft SharePoint server that uses a Windows sha red file storage. The Solutions Architect needs to use a cloud storage solution that is highly available and can be integrated with Active Directory for access control and authentication. Which of the following options can satisfy the give n requirement?", "options": [ "A. Create a Network File System (NFS) file share using AWS Storage Gateway.", "B. Create a file system using Amazon FSx for Wind ows File Server and join it to an Active", "C. Launch an Amazon EC2 Windows Server to mount a new S3 bucket as a file volume.", "D. Create a file system using Amazon EFS and join it to an Active Directory domain." ], "correct": "B. Create a file system using Amazon FSx for Wind ows File Server and join it to an Active", "explanation": "Explanation Amazon FSx for Windows File Server provides fully m anaged, highly reliable, and scalable file storage that is accessible over the industry-standard Servi ce Message Block (SMB) protocol. It is built on Win dows Server, delivering a wide range of administrative f eatures such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. Amazon FSx is accessible from Windows, Linux, and MacOS compute instances and devices. Thousands of c ompute instances and devices can access a file system concurrently. Amazon FSx works with Microsoft Active Directory to integrate with your existing Microsoft Windows environments. You have two options to provide user authentication and access control for your file sys tem: AWS Managed Microsoft Active Directory and Self-man aged Microsoft Active Directory. Take note that after you create an Active Directory configuration for a file system, you can't change that configuration. However, you can create a new file s ystem from a backup and change the Active Directory integration configuration for that file system. The se configurations allow the users in your domain to use their existing identity to access the Amazon FSx file sys tem and to control access to individual files and f olders. Hence, the correct answer is: Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory domain in AWS. The option that says: Create a file system using Am azon EFS and join it to an Active Directory domain is incorrect because Amazon EFS does not sup port Windows systems, only Linux OS. You should use Amazon FSx for Windows File Server instead to s atisfy the requirement in the scenario. 
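As a rough sketch of the correct approach, an Amazon FSx for Windows File Server file system joined to an AWS Managed Microsoft AD could be provisioned with boto3 as shown below; every resource ID is a placeholder assumption:

# Hypothetical sketch: create a Multi-AZ FSx for Windows file system joined to an
# AWS Managed Microsoft AD directory. Subnet, security group, and directory IDs are placeholders.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                                   # GiB
    StorageType="SSD",
    SubnetIds=["subnet-0aaa1111bbbb22222", "subnet-0ccc3333dddd44444"],   # preferred + standby subnets
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",               # AWS Managed Microsoft AD directory ID (placeholder)
        "DeploymentType": "MULTI_AZ_1",                    # highly available across two Availability Zones
        "PreferredSubnetId": "subnet-0aaa1111bbbb22222",
        "ThroughputCapacity": 32,                          # MB/s
    },
)
print(response["FileSystem"]["FileSystemId"])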
The option that says: Launch an Amazon EC2 Windows Server to mount a new S3 bucket as a file volume is incorrect because you can't integrate Ama zon S3 with your existing Active Directory to provi de authentication and access control. The option that says: Create a Network File System (NFS) file share using AWS Storage Gateway is incorrect because NFS file share is mainly used for Linux systems. Remember that the requirement in th e scenario is to use a Windows shared file storage. T herefore, you must use an SMB file share instead, w hich supports Windows OS and Active Directory configurat ion. Alternatively, you can also use the Amazon FSx for Windows File Server file system. References: https://docs.aws.amazon.com/fsx/latest/WindowsGuide /aws-ad-integration-fsxW.html https://aws.amazon.com/fsx/windows/faqs/ https://docs.aws.amazon.com/storagegateway/latest/u serguide/CreatingAnSMBFileShare.html Check out this Amazon FSx Cheat Sheet: https://tutorialsdojo.com/amazon-fsx/", "references": "" }, { "question": "A company conducted a surprise IT audit on all of t he AWS resources being used in the production environment. During the audit activities, it was no ted that you are using a combination of Standard an d Convertible Reserved EC2 instances in your applicat ions. Which of the following are the characteristics and benefits of using these two types of Reserved EC2 instances? (Select TWO.)", "options": [ "A. Convertible Reserved Instances allow you to ex change for another convertible reserved", "B. Unused Convertible Reserved Instances can late r be sold at the Reserved Instance", "C. It can enable you to reserve capacity for your Amazon EC2 instances in multiple", "D. It runs in a VPC on hardware that's dedicated to a single customer." ], "correct": "", "explanation": "Explanation Reserved Instances (RIs) provide you with a signifi cant discount (up to 75%) compared to On-Demand instance pricing. You have the flexibility to chang e families, OS types, and tenancies while benefitin g from RI pricing when you use Convertible RIs. One import ant thing to remember here is that Reserved Instanc es are not physical instances, but rather a billing di scount applied to the use of On-Demand Instances in your account. The offering class of a Reserved Instance is either Standard or Convertible. A Standard Reserved Instance provides a more significant discount than a Convertible Reserved Instance, but you can't exchange a Standard Reserved Instance unlike Conver tible Reserved Instances. You can modify Standard and Convertible Reserved Instances. Take note that in Convertible Reserved Instances, you are allowed to exchange another Convertible Reserved instance with a different instance type and tenancy. The configuration of a Reserved Instance comprises a single instance type, platform, scope, and tenanc y over a term. If your computing needs change, you mi ght be able to modify or exchange your Reserved Instance. When your computing needs change, you can modify yo ur Standard or Convertible Reserved Instances and continue to take advantage of the billing benefit. You can modify the Availability Zone, scope, networ k platform, or instance size (within the same instance type) of your Reserved Instance. You can also sell your unu sed instance for Standard RIs but not Convertible RIs o n the Reserved Instance Marketplace. Hence, the correct options are: - Unused Standard Reserved Instances can later be s old at the Reserved Instance Marketplace. 
- Convertible Reserved Instances allow you to exchang e for another convertible reserved instance of a different instance family. The option that says: Unused Convertible Reserved I nstances can later be sold at the Reserved Instance Marketplace is incorrect. This is not poss ible. Only Standard RIs can be sold at the Reserved Instance Marketplace. The option that says: It can enable you to reserve capacity for your Amazon EC2 instances in multiple Availability Zones and multiple AWS Regions for any duration is incorrect because you can reserve capacity to a specific AWS Region (regional Reserve d Instance) or specific Availability Zone (zonal Reserved Instance) only. You cannot reserve capacit y to multiple AWS Regions in a single RI purchase. The option that says: It runs in a VPC on hardware that's dedicated to a single customer is incorrect because that is the description of a Dedicated inst ance and not a Reserved Instance. A Dedicated insta nce runs in a VPC on hardware that's dedicated to a sin gle customer. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ri-modifying.html https://aws.amazon.com/ec2/pricing/reserved-instanc es/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ec2-reserved-instances.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /reserved-instances-types.html Amazon EC2 Overview: https://youtu.be/7VsGIHT_jQE Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", "references": "" }, { "question": ": A media company has an Amazon ECS Cluster, which us es the Fargate launch type, to host its news website. The database credentials should be supplie d using environment variables, to comply with stric t security compliance. As the Solutions Architect, yo u have to ensure that the credentials are secure an d that they cannot be viewed in plaintext on the cluster i tself. Which of the following is the most suitable solutio n in this scenario that you can implement with mini mal effort?", "options": [ "A. In the ECS task definition file of the ECS Clu ster, store the database credentials using", "B. Use the AWS Systems Manager Parameter Store to keep the database credentials and then", "C. environment variable to set in the container and the full ARN of the Systems Manager", "D. C. Store the database credentials in the ECS task definition file of the ECS Cluster and encrypt" ], "correct": "B. Use the AWS Systems Manager Parameter Store to keep the database credentials and then", "explanation": "Explanation Amazon ECS enables you to inject sensitive data int o your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. This feature is supported by tasks using both the EC2 a nd Fargate launch types. Secrets can be exposed to a container in the follow ing ways: - To inject sensitive data into your containers as environment variables, use the secrets container de finition parameter. - To reference sensitive information in the log con figuration of a container, use the secretOptions co ntainer definition parameter. Within your container definition, specify secrets w ith the name of the environment variable to set in the container and the full ARN of either the Secrets Ma nager secret or Systems Manager Parameter Store parameter containing the sensitive data to present to the container. 
The parameter that you reference can be from a different Region than the container using it , but must be from within the same account. Hence, the correct answer is the option that says: Use the AWS Systems Manager Parameter Store to keep the database credentials and then encrypt them using AWS KMS. Create an IAM Role for your Amazon ECS task execution role (taskRoleArn) and re ference it with your task definition, which allows access to both KMS and the Parameter Store. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Systems Manager Parameter Store parameter containing the se nsitive data to present to the container. The option that says: In the ECS task definition fi le of the ECS Cluster, store the database credentia ls using Docker Secrets to centrally manage these sens itive data and securely transmit it to only those containers that need access to it. Secrets are encr ypted during transit and at rest. A given secret is only accessible to those services which have been g ranted explicit access to it via IAM Role, and only while those service tasks are running is incorrect. Although you can use Docker Secrets to secure the sensitive database credentials, this feature is onl y applicable in Docker Swarm. In AWS, the recommend ed way to secure sensitive data is either through the use of Secrets Manager or Systems Manager Parameter Store. The option that says: Store the database credential s in the ECS task definition file of the ECS Cluste r and encrypt it with KMS. Store the task definition JSON file in a private S3 bucket and ensure that HTTPS is enabled on the bucket to encrypt the data in-flight. Create an IAM role to the ECS task definition script that allows access to the specifi c S3 bucket and then pass the --cli-input-json parameter when calling the ECS register-task-defini tion. Reference the task definition JSON file in the S3 bucket which contains the database credentia ls is incorrect. Although the solution may work, it is not recommended to store sensitive credentials in S3. T his entails a lot of overhead and manual configuration steps which can be simplified by simp ly using the Secrets Manager or Systems Manager Parameter Store. The option that says: Use the AWS Secrets Manager t o store the database credentials and then encrypt them using AWS KMS. Create a resource-based policy for your Amazon ECS task execution role (taskRoleArn) and reference it with your task definition which allows access to both KMS and AWS Secrets Manager. Within your container definiti on, specify secrets with the name of the environment variable to set in the container and th e full ARN of the Secrets Manager secret which contains the sensitive data, to present to the cont ainer is incorrect. Although the use of Secrets Man ager in securing sensitive data in ECS is valid, Amazon ECS doesn't support resource-based policies. An example of a resource-based policy is the S3 bucket policy. An ECS task assumes an execution role (IAM role) to be able to call other AWS services like AWS Secr ets Manager on your behalf. 
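For illustration, a minimal boto3 sketch of this pattern is shown below; the parameter name, KMS key alias, execution role, and container image are placeholder values, not details from the scenario:

# Hypothetical sketch: store the database password as a SecureString in the SSM Parameter Store
# (encrypted with a customer managed KMS key) and reference it as a secret in a Fargate task
# definition. Every name and ARN below is a placeholder.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")
ecs = boto3.client("ecs", region_name="us-east-1")

ssm.put_parameter(
    Name="/newswebsite/prod/DB_PASSWORD",
    Value="example-password",
    Type="SecureString",
    KeyId="alias/newswebsite-kms-key",        # customer managed CMK (placeholder alias)
    Overwrite=True,
)

ecs.register_task_definition(
    family="newswebsite-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::111122223333:role/newswebsiteTaskExecutionRole",  # needs ssm:GetParameters and kms:Decrypt
    containerDefinitions=[
        {
            "name": "app",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/newswebsite:latest",
            "essential": True,
            "secrets": [
                {
                    "name": "DB_PASSWORD",    # exposed to the container as an environment variable
                    "valueFrom": "arn:aws:ssm:us-east-1:111122223333:parameter/newswebsite/prod/DB_PASSWORD",
                }
            ],
        }
    ],
)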
References: https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/specifying-sensitive-data.html https://aws.amazon.com/blogs/mt/the-right-way-to-st ore-secrets-using-parameter-store/ Check out these Amazon ECS and AWS Systems Manager Cheat Sheets: https://tutorialsdojo.com/amazon-elastic-container- service-amazon-ecs/ https://tutorialsdojo.com/aws-systems-manager/", "references": "" }, { "question": ": A company needs to deploy at least 2 EC2 instances to support the normal workloads of its application and automatically scale up to 6 EC2 instances to handle the peak load. The architecture must be highly available and fault-tolerant as it is processing mi ssion-critical workloads. As the Solutions Architect of the company, what sho uld you do to meet the above requirement?", "options": [ "A. Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the", "B. Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the", "C. Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the", "D. Create an Auto Scaling group of EC2 instances and set the minimum capacity to 4 and the" ], "correct": "D. Create an Auto Scaling group of EC2 instances and set the minimum capacity to 4 and the", "explanation": "Explanation Amazon EC2 Auto Scaling helps ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Aut o Scaling groups. You can specify the minimum number of insta nces in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group nev er goes below this size. You can also specify the maximum number of instances in each Auto Scaling gr oup, and Amazon EC2 Auto Scaling ensures that your group never goes above this size. To achieve highly available and fault-tolerant arch itecture for your applications, you must deploy all your instances in different Availability Zones. This wil l help you isolate your resources if an outage occu rs. Take note that to achieve fault tolerance, you need to have redundant resources in place to avoid any system degradation in the event of a server fault o r an Availability Zone outage. Having a fault-toler ant architecture entails an extra cost in running addit ional resources than what is usually needed. This i s to ensure that the mission-critical workloads are processed. Since the scenario requires at least 2 instances to handle regular traffic, you should have 2 instance s running all the time even if an AZ outage occurred. You can use an Auto Scaling Group to automatically scale y our compute resources across two or more Availability Z ones. You have to specify the minimum capacity to 4 instances and the maximum capacity to 6 instances. If each AZ has 2 i nstances running, even if an AZ fails, your system will still run a minimum of 2 instances. Hence, the correct answer in this scenario is: Crea te an Auto Scaling group of EC2 instances and set t he minimum capacity to 4 and the maximum capacity to 6 . Deploy 2 instances in Availability Zone A and another 2 instances in Availability Zone B. The option that says: Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 6. Deploy 4 instances in Availability Zone A is incorrect because the instances are only deployed in a single Availabilit y Zone. It cannot protect your applications and dat a from datacenter or AZ failures. 
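For illustration, the correct setup could be expressed with boto3 roughly as follows; the group name, launch template, and subnet IDs are placeholder assumptions:

# Hypothetical sketch: an Auto Scaling group with a minimum of 4 and a maximum of 6 instances
# spread across two Availability Zones. The launch template name and subnet IDs are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="mission-critical-asg",
    MinSize=4,                                   # 2 instances per AZ still run after a single-AZ outage
    MaxSize=6,                                   # room to scale out for the peak load
    DesiredCapacity=4,
    LaunchTemplate={
        "LaunchTemplateName": "mission-critical-template",   # placeholder
        "Version": "$Latest",
    },
    VPCZoneIdentifier="subnet-0aaa1111bbbb22222,subnet-0ccc3333dddd44444",  # subnets in AZ-A and AZ-B
    HealthCheckType="EC2",
)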
The option that says: Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 6. Use 2 Availability Zones and deploy 1 instance for each AZ is incorrect. It is required to have 2 instances runni ng all the time. If an AZ outage happened, ASG will launch a new instance on the unaffected AZ. This provisionin g does not happen instantly, which means that for a certain period of time, there will only be 1 running instan ce left. The option that says: Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 4. Deploy 2 instances in Availability Zone A and 2 instances in Availability Zone B is incorrect. Although this ful fills the requirement of at least 2 EC2 instances a nd high availability, the maximum capacity setting is wrong . It should be set to 6 to properly handle the peak load. If an AZ outage occurs and the system is at its peak load , the number of running instances in this setup wil l only be 4 instead of 6 and this will affect the performance o f your application. References: https://docs.aws.amazon.com/autoscaling/ec2/usergui de/what-is-amazon-ec2-auto-scaling.html https://docs.aws.amazon.com/documentdb/latest/devel operguide/regions-and-azs.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", "references": "" }, { "question": ": A Docker application, which is running on an Amazon ECS cluster behind a load balancer, is heavily usi ng DynamoDB. You are instructed to improve the databas e performance by distributing the workload evenly and using the provisioned throughput efficiently. Which of the following would you consider to implem ent for your DynamoDB table?", "options": [ "A. Use partition keys with low-cardinality attrib utes, which have a few number of distinct", "B. Reduce the number of partition keys in the Dyn amoDB table.", "C. Use partition keys with high-cardinality attri butes, which have a large number of distinct", "D. Avoid using a composite primary key, which is composed of a partition key and a sort key." ], "correct": "C. Use partition keys with high-cardinality attri butes, which have a large number of distinct", "explanation": "Explanation The partition key portion of a table's primary key determines the logical partitions in which a table' s data is stored. This in turn affects the underlying physica l partitions. Provisioned I/O capacity for the tabl e is divided evenly among these physical partitions. Therefore a partition key design that doesn't distribute I/O r equests evenly can create \"hot\" partitions that result in t hrottling and use your provisioned I/O capacity ine fficiently. The optimal usage of a table's provisioned throughp ut depends not only on the workload patterns of individual items, but also on the partition-key des ign. This doesn't mean that you must access all par tition key values to achieve an efficient throughput level, or even that the percentage of accessed partition key values must be high. It does mean that the more distinct p artition key values that your workload accesses, th e more those requests will be spread across the partitione d space. In general, you will use your provisioned throughput more efficiently as the rati o of partition key values accessed to the total num ber of partition key values increases. One example for this is the use of partition keys w ith high-cardinality attributes, which have a large number of distinct values for each item. 
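For example, a hypothetical orders table could use a per-customer identifier as its partition key and a timestamp as its sort key, as in the boto3 sketch below (table and attribute names are assumptions, not from the scenario):

# Hypothetical sketch: write items using a high-cardinality partition key (customer_id) combined
# with a sort key (order_ts), so requests spread across many partitions.
import time
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("orders")          # placeholder table name

def record_order(customer_id: str, item_sku: str, quantity: int) -> None:
    table.put_item(
        Item={
            "customer_id": customer_id,               # partition key: many distinct values
            "order_ts": int(time.time() * 1000),      # sort key: makes each item unique per customer
            "item_sku": item_sku,
            "quantity": quantity,
        }
    )

record_order("cust-48151623", "sku-42", 2)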
Reducing the number of partition keys in the Dynamo DB table is incorrect. Instead of doing this, you should actually add more to improve its performance to distribute the I/O requests evenly and not avoi d \"hot\" partitions. Using partition keys with low-cardinality attribute s, which have a few number of distinct values for each item is incorrect because this is the exact op posite of the correct answer. Remember that the mor e distinct partition key values your workload accesse s, the more those requests will be spread across th e partitioned space. Conversely, the less distinct pa rtition key values, the less evenly spread it would be across the partitioned space, which effectively slows the performance. The option that says: Avoid using a composite prima ry key, which is composed of a partition key and a sort key is incorrect because as mentioned, a compo site primary key will provide more partition for th e table and in turn, improves the performance. Hence, it sh ould be used and not avoided. References: https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/bp-partition-key-uniform-load.html https://aws.amazon.com/blogs/database/choosing-the- right-dynamodb-partition-key/ Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/ Amazon DynamoDB Overview: https://www.youtube.com/watch?v=3ZOyUNIeorU", "references": "" }, { "question": ": An organization needs to provision a new Amazon EC2 instance with a persistent block storage volume to migrate data from its on-premises network to AWS. T he required maximum performance for the storage volume is 64,000 IOPS. In this scenario, which of the following can be use d to fulfill this requirement?", "options": [ "A. Launch an Amazon EFS file system and mount it to a Nitro-based Amazon EC2 instance", "B. Directly attach multiple Instance Store volume s in an EC2 instance to deliver maximum", "C. Launch a Nitro-based EC2 instance and attach a Provisioned IOPS SSD EBS volume (io1)", "D. Launch any type of Amazon EC2 instance and att ach a Provisioned IOPS SSD EBS volume (io1) with 64,000 IOPS." ], "correct": "C. Launch a Nitro-based EC2 instance and attach a Provisioned IOPS SSD EBS volume (io1)", "explanation": "Explanation/Reference: Explanation An Amazon EBS volume is a durable, block-level stor age device that you can attach to your instances. After you attach a volume to an instance, you can u se it as you would use a physical hard drive. EBS volumes are flexible. The AWS Nitro System is the underlying platform for the latest generation of EC2 instances that enable s AWS to innovate faster, further reduce the cost of the customers, and deliver added benefits like increase d security and new instance types. Amazon EBS is a persistent block storage volume. It can persist independently from the life of an inst ance. Since the scenario requires you to have an EBS volu me with up to 64,000 IOPS, you have to launch a Nitro-based EC2 instance. Hence, the correct answer in this scenario is: Laun ch a Nitro-based EC2 instance and attach a Provisioned IOPS SSD EBS volume (io1) with 64,000 I OPS. The option that says: Directly attach multiple Inst ance Store volumes in an EC2 instance to deliver maximum IOPS performance is incorrect. Although an Instance Store is a block storage volume, it is not persistent and the data will be gone if the instanc e is restarted from the stopped state (note that th is is different from the OS-level reboot. In OS-level reboot, data still persists in the instance store). 
An instance store only provides temporary block-level storage for your ins tance. It means that the data in the instance store can be lost if the underlying disk drive fails, if the ins tance stops, and if the instance terminates. The option that says: Launch an Amazon EFS file sys tem and mount it to a Nitro-based Amazon EC2 instance and set the performance mode to Max I/O is incorrect. Although Amazon EFS can provide over 64,000 IOPS, this solution uses a file system and not a block storage volume which is what is ask ed in the scenario. The option that says: Launch an EC2 instance and at tach an io1 EBS volume with 64,000 IOPS is incorrect. In order to achieve the 64,000 IOPS for a provisioned IOPS SSD, you must provision a Nitro- based EC2 instance. The maximum IOPS and throughput are g uaranteed only on Instances built on the Nitro System provisioned with more than 32,000 IOPS . Other instances guarantee up to 32,000 IOPS only. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ebs-volume-types.html#EBSVolumeTypes_piops https://aws.amazon.com/s3/storage-classes/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /instance-types.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/ Amazon S3 vs EFS vs EBS Cheat Sheet: https://tutorialsdojo.com/amazon-s3-vs-ebs-vs-efs/", "references": "" }, { "question": ": A Solutions Architect designed a serverless archite cture that allows AWS Lambda to access an Amazon DynamoDB table named tutorialsdojo in the US East ( N. Virginia) region. The IAM policy attached to a Lambda function allows it to put and delete items i n the table. The policy must be updated to only all ow two operations in the tutorialsdojo table and prevent o ther DynamoDB tables from being modified. Which of the following IAM policies fulfill this re quirement and follows the principle of granting the least privilege? A.", "options": [ "B.", "C.", "D." ], "correct": "B.", "explanation": "Explanation Every AWS resource is owned by an AWS account, and permissions to create or access a resource are governed by permissions policies. An account admini strator can attach permissions policies to IAM identities (that is, users, groups, and roles), and some services (such as AWS Lambda) also support attaching permissions policies to resources. In DynamoDB, the primary resources are tables. Dyna moDB also supports additional resource types, indexes, and streams. However, you can create index es and streams only in the context of an existing DynamoDB table. These are referred to as subresourc es. These resources and subresources have unique Amazon Resource Names (ARNs) associated with them. For example, an AWS Account (123456789012) has a Dy namoDB table named Books in the US East (N. Virginia) (us-east-1) region. The ARN of the Books table would be: arn:aws:dynamodb:us-east-1:123456789012:table/Books A policy is an entity that, when attached to an ide ntity or resource, defines their permissions. By us ing an IAM policy and role to control access, it will gran t a Lambda function access to a DynamoDB table. It is stated in the scenario that a Lambda function will be used to modify the DynamoDB table named tutorialsdojo. Since you only need to access one ta ble, you will need to indicate that table in the re source element of the IAM policy. Also, you must specify t he effect and action elements that will be generate d in the policy. 
Hence, the correct answer in this scenario is:
{
  \"Version\": \"2012-10-17\",
  \"Statement\": [
    {
      \"Sid\": \"TutorialsdojoTablePolicy\",
      \"Effect\": \"Allow\",
      \"Action\": [ \"dynamodb:PutItem\", \"dynamodb:DeleteItem\" ],
      \"Resource\": \"arn:aws:dynamodb:us-east-1:120618981206:table/tutorialsdojo\"
    }
  ]
}
The IAM policy below is incorrect because the scenario only requires you to allow the permissions in the tutorialsdojo table. Having a wildcard: table/* in this policy would allow the Lambda function to modify all the DynamoDB tables in your account.
{
  \"Version\": \"2012-10-17\",
  \"Statement\": [
    {
      \"Sid\": \"TutorialsdojoTablePolicy\",
      \"Effect\": \"Allow\",
      \"Action\": [ \"dynamodb:PutItem\", \"dynamodb:DeleteItem\" ],
      \"Resource\": \"arn:aws:dynamodb:us-east-1:120618981206:table/*\"
    }
  ]
}
The IAM policy below is incorrect. The first statement is correctly allowing PUT and DELETE actions to the tutorialsdojo DynamoDB table. However, the second statement counteracts the first one as it allows all DynamoDB actions in the tutorialsdojo table.
{
  \"Version\": \"2012-10-17\",
  \"Statement\": [
    {
      \"Sid\": \"TutorialsdojoTablePolicy1\",
      \"Effect\": \"Allow\",
      \"Action\": [ \"dynamodb:PutItem\", \"dynamodb:DeleteItem\" ],
      \"Resource\": \"arn:aws:dynamodb:us-east-1:1206189812061898:table/tutorialsdojo\"
    },
    {
      \"Sid\": \"TutorialsdojoTablePolicy2\",
      \"Effect\": \"Allow\",
      \"Action\": \"dynamodb:*\",
      \"Resource\": \"arn:aws:dynamodb:us-east-1:1206189812061898:table/tutorialsdojo\"
    }
  ]
}
The IAM policy below is incorrect. Just like the previous option, the first statement of this policy is correctly allowing PUT and DELETE actions to the tutorialsdojo DynamoDB table. However, the second statement counteracts the first one as it denies all DynamoDB actions. Therefore, this policy will not allow any actions on all DynamoDB tables of the AWS account.
{
  \"Version\": \"2012-10-17\",
  \"Statement\": [
    {
      \"Sid\": \"TutorialsdojoTablePolicy1\",
      \"Effect\": \"Allow\",
      \"Action\": [ \"dynamodb:PutItem\", \"dynamodb:DeleteItem\" ],
      \"Resource\": \"arn:aws:dynamodb:us-east-1:1206189812061898:table/tutorialsdojo\"
    },
    {
      \"Sid\": \"TutorialsdojoTablePolicy2\",
      \"Effect\": \"Deny\",
      \"Action\": \"dynamodb:*\",
      \"Resource\": \"arn:aws:dynamodb:us-east-1:1206189812061898:table/*\"
    }
  ]
}
References: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/using-identity-based-policies.html https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_lambda-access-dynamodb.html https://aws.amazon.com/blogs/security/how-to-create-an-aws-iam-policy-to-grant-aws-lambda-access-to-an-amazon-dynamodb-table/ Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-management-iam/", "references": "" }, { "question": ": A company requires all the data stored in the cloud to be encrypted at rest. To easily integrate this with other AWS services, they must have full control over the encryption of the created keys and also the ability to immediately remove the key material from AWS KMS. The solution should also be able to audit the key usage independently of AWS CloudTrail. Which of the following options will meet this requirement?", "options": [ "A. Use AWS Key Management Service to create a CMK in a custom key store and store the", "B. Use AWS Key Management Service to create AWS-owned CMKs and store the non-", "C. Use AWS Key Management Service to create AWS-managed CMKs and store the non-", "D.
Use AWS Key Management Service to create a CMK in a custom key store and store the" ], "correct": "", "explanation": "Explanation The AWS Key Management Service (KMS) custom key sto re feature combines the controls provided by AWS CloudHSM with the integration and ease of use o f AWS KMS. You can configure your own CloudHSM cluster and authorize AWS KMS to use it as a dedicated key store for your keys rather than th e default AWS KMS key store. When you create keys in AWS KMS you can choose to generate the key material in your CloudHSM cluster. CMKs that are ge nerated in your custom key store never leave the HSMs in the CloudHSM cluster in plaintext and all A WS KMS operations that use those keys are only performed in your HSMs. AWS KMS can help you integrate with other AWS servi ces to encrypt the data that you store in these services and control access to the keys that decryp t it. To immediately remove the key material from A WS KMS, you can use a custom key store. Take note that each custom key store is associated with an AWS CloudHSM cluster in your AWS account. Therefore, wh en you create an AWS KMS CMK in a custom key store, AWS KMS generates and stores the non-extract able key material for the CMK in an AWS CloudHSM cluster that you own and manage. This is a lso suitable if you want to be able to audit the us age of all your keys independently of AWS KMS or AWS Cl oudTrail. Since you control your AWS CloudHSM cluster, you ha ve the option to manage the lifecycle of your CMKs independently of AWS KMS. There are four reaso ns why you might find a custom key store useful: You might have keys that are explicitly required to be protected in a single-tenant HSM or in an HSM o ver which you have direct control. You might have keys that are required to be stored in an HSM that has been validated to FIPS 140-2 lev el 3 overall (the HSMs used in the standard AWS KMS key store are either validated or in the process of bei ng validated to level 2 with level 3 in multiple categ ories). You might need the ability to immediately remove ke y material from AWS KMS and to prove you have done so by independent means. You might have a requirement to be able to audit al l use of your keys independently of AWS KMS or AWS CloudTrail. Hence, the correct answer in this scenario is: Use AWS Key Management Service to create a CMK in a custom key store and store the non-extractable key material in AWS CloudHSM. The option that says: Use AWS Key Management Servic e to create a CMK in a custom key store and store the non-extractable key material in Amazon S3 is incorrect because Amazon S3 is not a suitable storage service to use in storing encryption keys. You have to use AWS CloudHSM instead. The options that say: Use AWS Key Management Servic e to create AWS-owned CMKs and store the non-extractable key material in AWS CloudHSM and Us e AWS Key Management Service to create AWS-managed CMKs and store the non-extractable key material in AWS CloudHSM are both incorrect because the scenario requires you to have full control over the encryption of the created ke y. AWS-owned CMKs and AWS-managed CMKs are managed by AWS. Moreover, these options do not allow you to audit the key usage independently of A WS CloudTrail. 
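As a hedged sketch, assuming an existing CloudHSM cluster and its trust anchor certificate, the flow could look like this in boto3 (all identifiers, passwords, and file paths are placeholders):

# Hypothetical sketch: connect a KMS custom key store to an existing CloudHSM cluster and then
# create a CMK whose key material lives in that cluster.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

with open("customerCA.crt") as f:                     # trust anchor certificate of the CloudHSM cluster
    trust_anchor = f.read()

store = kms.create_custom_key_store(
    CustomKeyStoreName="tutorialsdojo-key-store",
    CloudHsmClusterId="cluster-1a23b4cdefg",          # placeholder cluster ID
    TrustAnchorCertificate=trust_anchor,
    KeyStorePassword="kmsuser-password-placeholder",  # password of the cluster's kmsuser
)

kms.connect_custom_key_store(CustomKeyStoreId=store["CustomKeyStoreId"])

# Once the key store reports CONNECTED, create a CMK backed by the CloudHSM cluster.
key = kms.create_key(
    Description="CMK with key material stored in CloudHSM",
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId=store["CustomKeyStoreId"],
)
print(key["KeyMetadata"]["Arn"])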
References: https://docs.aws.amazon.com/kms/latest/developergui de/custom-key-store-overview.html https://aws.amazon.com/kms/faqs/ https://aws.amazon.com/blogs/security/are-kms-custo m-key-stores-right-for-you/ Check out this AWS KMS Cheat Sheet: https://tutorialsdojo.com/aws-key-management-servic e-aws-kms/", "references": "" }, { "question": ": An application hosted in EC2 consumes messages from an SQS queue and is integrated with SNS to send out an email to you once the process is complete. T he Operations team received 5 orders but after a fe w hours, they saw 20 email notifications in their inbox. Which of the following could be the possible culpri t for this issue?", "options": [ "A. The web application is not deleting the messag es in the SQS queue after it has processed", "B. The web application is set for long polling so the messages are being sent twice.", "C. The web application does not have permission t o consume messages in the SQS queue.", "D. The web application is set to short polling so some messages are not being picked up" ], "correct": "A. The web application is not deleting the messag es in the SQS queue after it has processed", "explanation": "Explanation Always remember that the messages in the SQS queue will continue to exist even after the EC2 instance has processed it, until you delete that message. Yo u have to ensure that you delete the message after processing to prevent the message from being receiv ed and processed again once the visibility timeout expires. There are three main parts in a distributed messagi ng system: 1. The components of your distributed system (EC2 i nstances) 2. Your queue (distributed on Amazon SQS servers) 3. Messages in the queue. You can set up a system which has several component s that send messages to the queue and receive messages from the queue. The queue redundantly stor es the messages across multiple Amazon SQS servers.Refer to the third step of the SQS Message Lifecycl e: Component 1 sends Message A to a queue, and the mes sage is distributed across the Amazon SQS servers redundantly. When Component 2 is ready to process a message, it consumes messages from the queue, and Message A is returned. While Message A is being processed, it remains in the queue and isn't returned to subsequ ent receive requests for the duration of the visibility timeout. Component 2 deletes Message A from the queue to pre vent the message from being received and processed again once the visibility timeout expires. The option that says: The web application is set fo r long polling so the messages are being sent twice is incorrect because long polling helps reduce the cos t of using SQS by eliminating the number of empty responses (when there are no messages available for a ReceiveMessage request) and false empty response s (when messages are available but aren't included in a response). Messages being sent twice in an SQS queue configured with long polling is quite unlikel y. The option that says: The web application is set to short polling so some messages are not being picke d up is incorrect since you are receiving emails from SNS w here messages are certainly being processed. Following the scenario, messages not being picked u p won't result into 20 messages being sent to your inbox. The option that says: The web application does not have permission to consume messages in the SQS queue is incorrect because not having the correct p ermissions would have resulted in a different respo nse. 
The scenario says that messages were properly processed but there were over 20 messages that were sent, he nce, there is no problem with the accessing the queue. References: https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/sqs-message- lifecycle.html https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/sqs-basic- architecture.html Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/", "references": "" }, { "question": ": A Solutions Architect needs to set up a relational database and come up with a disaster recovery plan to mitigate multi-region failure. The solution require s a Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of less than 1 minute . Which of the following AWS services can fulfill thi s requirement?", "options": [ "A. AWS Global Accelerator", "B. Amazon Aurora Global Database", "C. Amazon RDS for PostgreSQL with cross-region re ad replicas", "D. Amazon DynamoDB global tables" ], "correct": "B. Amazon Aurora Global Database", "explanation": "Explanation Amazon Aurora Global Database is designed for globa lly distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions . It replicates your data with no impact on databas e performance, enables fast local reads with low late ncy in each region, and provides disaster recovery from region-wide outages. Aurora Global Database supports storage-based repli cation that has a latency of less than 1 second. If there is an unplanned outage, one of the secondary regions y ou assigned can be promoted to read and write capabilities in less than 1 minute. This feature is called Cross-Region Disaster Recovery. An RPO of 1 second and an RTO of less than 1 minute provides you a str ong foundation for a global business continuity pla n. Hence, the correct answer is: Amazon Aurora Global Database. Amazon DynamoDB global tables is incorrect because it is stated in the scenario that the Solutions Architect needs to create a relational database and not a NoSQL database. When you create a DynamoDB global table, it consists of multiple replica table s (one per AWS Region) that DynamoDB treats as a si ngle unit. Multi-AZ Amazon RDS database with cross-region read replicas is incorrect because a Multi-AZ deployment is only applicable inside a single regio n and not in a multi-region setup. This database se tup is not capable of providing an RPO of 1 second and an RTO of less than 1 minute. Moreover, the replication of cross- region RDS Read Replica is not as fast compared wit h Amazon Aurora Global Databases. AWS Global Accelerator is incorrect because this is a networking service that simplifies traffic management and improves application performance. AW S Global Accelerator is not a relational database service; therefore, this is not a suitable service to use in this scenario. References: https://aws.amazon.com/rds/aurora/global-database/ https://docs.aws.amazon.com/AmazonRDS/latest/Aurora UserGuide/aurora-global-database.html Amazon Aurora Overview: https://youtu.be/iwS1h7rLNBQ Check out this Amazon Aurora Cheat Sheet: https://tutorialsdojo.com/amazon-aurora/", "references": "" }, { "question": ":A Solutions Architect is hosting a website in an Am azon S3 bucket named tutorialsdojo. 
The users load the website using the following URL: http://tutorialsdojo.s3-website-us-east-1.amazonaws.com and there is a new requirement to add a JavaScript on the webpages in order to make authenticated HTTP GET requests against the same bucket by using the Amazon S3 API endpoint (tutorialsdojo.s3.amazonaws.com). Upon testing, you noticed that the web browser blocks JavaScript from allowing those requests. Which of the following options is the MOST suitable solution that you should implement for this scenario?", "options": [ "A. Enable Cross-Region Replication (CRR).", "B. Enable Cross-origin resource sharing (CORS) configuration in the bucket.", "C. Enable cross-account access.", "D. Enable Cross-Zone Load Balancing." ], "correct": "B. Enable Cross-origin resource sharing (CORS) configuration in the bucket.", "explanation": "Explanation/Reference: The S3 website endpoint and the S3 API endpoint are different origins, so the browser's same-origin policy blocks the JavaScript requests. Adding a CORS configuration to the bucket that allows the website origin permits these cross-origin GET requests.", "references": "" }, { "question": ": A multi-tiered application hosted in your on-premises data center is scheduled to be migrated to AWS. The application has a message broker service which uses industry-standard messaging APIs and protocols that must be migrated as well, without rewriting the messaging code in your application. Which of the following is the most suitable service that you should use to move your messaging service to AWS?", "options": [ "A. Amazon SNS", "B. Amazon MQ", "C. Amazon SWF", "D. Amazon SQS" ], "correct": "B. Amazon MQ", "explanation": "Explanation Amazon MQ, Amazon SQS, and Amazon SNS are messaging services that are suitable for anyone from startups to enterprises. If you're using messaging with existing applications and want to move your messaging service to the cloud quickly and easily, it is recommended that you consider Amazon MQ. It supports industry-standard APIs and protocols so you can switch from any standards-based message broker to Amazon MQ without rewriting the messaging code in your applications. Hence, Amazon MQ is the correct answer. If you are building brand new applications in the cloud, then it is highly recommended that you consider Amazon SQS and Amazon SNS. Amazon SQS and SNS are lightweight, fully managed message queue and topic services that scale almost infinitely and provide simple, easy-to-use APIs. You can use Amazon SQS and SNS to decouple and scale microservices, distributed systems, and serverless applications, and improve reliability. Amazon SQS is incorrect because although this is a fully managed message queuing service, it does not support an extensive list of industry-standard messaging APIs and protocols, unlike Amazon MQ. Moreover, using Amazon SQS requires you to make additional changes in the messaging code of applications to make it compatible. Amazon SNS is incorrect because SNS is more suitable as a pub/sub messaging service instead of a message broker service. Amazon SWF is incorrect because this is a fully-managed state tracker and task coordinator service and not a messaging service, unlike Amazon MQ, Amazon SQS, and Amazon SNS. References: https://aws.amazon.com/amazon-mq/faqs/ https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html#sqs-difference-from-amazon-mq-sns Check out this Amazon MQ Cheat Sheet: https://tutorialsdojo.com/amazon-mq/", "references": "" }, { "question": ": A company hosts multiple applications in their VPC. While monitoring the system, they noticed that multiple port scans are coming in from a specific IP address block that is trying to connect to several AWS resources inside their VPC.
{ "question": ": A multi-tiered application hosted in your on-premises data center is scheduled to be migrated to AWS. The application has a message broker service which uses industry-standard messaging APIs and protocols that must be migrated as well, without rewriting the messaging code in your application. Which of the following is the most suitable service that you should use to move your messaging service to AWS?", "options": [ "A. Amazon SNS", "B. Amazon MQ", "C. Amazon SWF", "D. Amazon SQS" ], "correct": "B. Amazon MQ", "explanation": "Explanation Amazon MQ, Amazon SQS, and Amazon SNS are messaging services that are suitable for anyone from startups to enterprises. If you're using messaging with existing applications and want to move your messaging service to the cloud quickly and easily, it is recommended that you consider Amazon MQ. It supports industry-standard APIs and protocols so you can switch from any standards-based message broker to Amazon MQ without rewriting the messaging code in your applications. Hence, Amazon MQ is the correct answer. If you are building brand new applications in the cloud, then it is highly recommended that you consider Amazon SQS and Amazon SNS. Amazon SQS and SNS are lightweight, fully managed message queue and topic services that scale almost infinitely and provide simple, easy-to-use APIs. You can use Amazon SQS and SNS to decouple and scale microservices, distributed systems, and serverless applications, and improve reliability. Amazon SQS is incorrect because although this is a fully managed message queuing service, it does not support an extensive list of industry-standard messaging APIs and protocols, unlike Amazon MQ. Moreover, using Amazon SQS requires you to make additional changes in the messaging code of your applications to make it compatible. Amazon SNS is incorrect because SNS is more suitable as a pub/sub messaging service instead of a message broker service. Amazon SWF is incorrect because this is a fully managed state tracker and task coordinator service and not a messaging service, unlike Amazon MQ, Amazon SQS, and Amazon SNS.
References: https://aws.amazon.com/amazon-mq/faqs/ https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html#sqs-difference-from-amazon-mq-sns Check out this Amazon MQ Cheat Sheet: https://tutorialsdojo.com/amazon-mq/", "references": "" },
{ "question": ": A company hosts multiple applications in their VPC. While monitoring the system, they noticed that multiple port scans are coming in from a specific IP address block that is trying to connect to several AWS resources inside their VPC. The internal security team has requested that all offending IP addresses be denied for the next 24 hours for security purposes. Which of the following is the best method to quickly and temporarily deny access from the specified IP addresses?", "options": [ "A. Configure the firewall in the operating system of the EC2 instances to deny access from the", "B. Add a rule in the Security Group of the EC2 instances to deny access from the IP Address", "C. Modify the Network Access Control List associated with all public subnets in the VPC to", "D. Create a policy in IAM to deny access from the IP Address block." ], "correct": "C. Modify the Network Access Control List associated with all public subnets in the VPC to", "explanation": "Explanation To control the traffic coming in and out of your VPC network, you can use the network access control list (ACL). It is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. This is the best solution among the options as you can easily add and remove the restriction in a matter of minutes. Creating a policy in IAM to deny access from the IP Address block is incorrect as an IAM policy does not control the inbound and outbound traffic of your VPC. Adding a rule in the Security Group of the EC2 instances to deny access from the IP Address block is incorrect. Although a Security Group acts as a firewall, it only controls inbound and outbound traffic at the instance level and not on the whole VPC. Moreover, security groups support allow rules only and cannot explicitly deny traffic. Configuring the firewall in the operating system of the EC2 instances to deny access from the IP address block is incorrect because adding a firewall in the underlying operating system of the EC2 instance is not enough; the attacker can still connect to other AWS resources since the network access control list still allows them to do so.", "references": "http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html Amazon VPC Overview: https://www.youtube.com/watch?v=oIDHKeNxvQQ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/" },
{ "question": ": A Forex trading platform, which frequently processes and stores global financial data every minute, is hosted in your on-premises data center and uses an Oracle database. Due to a recent cooling problem in their data center, the company urgently needs to migrate their infrastructure to AWS to improve the performance of their applications. As the Solutions Architect, you are responsible for ensuring that the database is properly migrated and that it remains available in case of database server failure in the future. Which of the following is the most suitable solution to meet the requirement?", "options": [ "A. Create an Oracle database in RDS with Multi-AZ deployments.", "B. Launch an Oracle Real Application Clusters (RAC) in RDS.", "C. Launch an Oracle database instance in RDS with Recovery Manager (RMAN) enabled.", "D. Convert the database schema using the AWS Schema Conversion Tool and AWS Database" ], "correct": "A. Create an Oracle database in RDS with Multi-AZ deployments.", "explanation": "Explanation Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).
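As a rough, non-authoritative sketch of this kind of deployment (the identifier, instance class, storage size, and credentials below are placeholders, not values from the scenario), a Multi-AZ Oracle instance could be provisioned with boto3 like this:

import boto3

rds = boto3.client("rds")

# MultiAZ=True tells RDS to create the primary instance plus a synchronous standby
# in a different Availability Zone and to fail over to it automatically when needed.
rds.create_db_instance(
    DBInstanceIdentifier="forex-oracle-db",      # placeholder name
    Engine="oracle-ee",
    LicenseModel="bring-your-own-license",       # assumed licensing model
    DBInstanceClass="db.m5.large",               # assumed instance class
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_SECRET",
    MultiAZ=True,
)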
Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operations without the need for manual administrative intervention. In this scenario, the best RDS configuration to use is an Oracle database in RDS with Multi-AZ deployments to ensure high availability even if the primary database instance goes down. Hence, creating an Oracle database in RDS with Multi-AZ deployments is the correct answer. Launching an Oracle database instance in RDS with Recovery Manager (RMAN) enabled and launching an Oracle Real Application Clusters (RAC) in RDS are incorrect because Oracle RMAN and RAC are not supported in RDS. The option that says: Convert the database schema using the AWS Schema Conversion Tool and AWS Database Migration Service. Migrate the Oracle database to a non-cluster Amazon Aurora with a single instance is incorrect because although this solution is feasible, it takes time to migrate your Oracle database to Aurora, which is not acceptable. Based on this option, the Aurora database is only using a single instance with no Read Replica and is not configured as an Amazon Aurora DB cluster, which could have improved the availability of the database.
References: https://aws.amazon.com/rds/details/multi-az/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/", "references": "" },
{ "question": ": An application is hosted in an AWS Fargate cluster that runs a batch job whenever an object is loaded on an Amazon S3 bucket. The minimum number of ECS Tasks is initially set to 1 to save on costs, and it will only increase the task count based on the new objects uploaded on the S3 bucket. Once processing is done, the bucket becomes empty and the ECS Task count should be back to 1. Which is the most suitable option to implement with the LEAST amount of effort?", "options": [ "A. Set up an alarm in CloudWatch to monitor CloudTrail since the S3 object-level operations", "B. Set up a CloudWatch Event rule to detect S3 object PUT operations and set the target to a", "C. Set up a CloudWatch Event rule to detect S3 object PUT operations and set the target to the", "D. Set up an alarm in CloudWatch to monitor CloudTrail since this S3 object-level operations" ], "correct": "C. Set up a CloudWatch Event rule to detect S3 object PUT operations and set the target to the", "explanation": "Explanation You can use CloudWatch Events to run Amazon ECS tasks when certain AWS events occur. You can set up a CloudWatch Events rule that runs an Amazon ECS task whenever a file is uploaded to a certain Amazon S3 bucket using the Amazon S3 PUT operation. You can also declare a reduced number of ECS tasks whenever a file is deleted on the S3 bucket using the DELETE operation. First, you must create a CloudWatch Events rule for the S3 service that will watch for the object-level operations PUT and DELETE objects. For object-level operations, it is required to create a CloudTrail trail first.
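If you prefer to script the rule rather than use the console, a hedged sketch with boto3 could look like the following (the bucket name, cluster, task definition, and role ARNs are hypothetical placeholders; the matching scale-down rule for DeleteObject would be defined the same way):

import json
import boto3

events = boto3.client("events")

# Match S3 PutObject calls recorded by CloudTrail for the bucket.
events.put_rule(
    Name="scale-up-on-s3-put",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            "eventName": ["PutObject"],
            "requestParameters": {"bucketName": ["my-batch-input-bucket"]},
        },
    }),
)

# Run the ECS task definition with a higher task count whenever the rule matches.
events.put_targets(
    Rule="scale-up-on-s3-put",
    Targets=[{
        "Id": "run-batch-task",
        "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/batch-cluster",
        "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/batch-job",
            "TaskCount": 5,
        },
    }],
)

The equivalent console steps are described next.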
On the Targets section, select the \"ECS task\" and input the needed values such as the cluster name, task definition, and the task count. You need two rules: one for the scale-up and another for the scale-down of the ECS task count. Hence, the correct answer is: Set up a CloudWatch Event rule to detect S3 object PUT operations and set the target to the ECS cluster with the increased number of tasks. Create another rule to detect S3 DELETE operations and set the target to the ECS Cluster with 1 as the Task count. The option that says: Set up a CloudWatch Event rule to detect S3 object PUT operations and set the target to a Lambda function that will run an Amazon ECS API command to increase the number of tasks on ECS. Create another rule to detect S3 DELETE operations and run the Lambda function to reduce the number of ECS tasks is incorrect. Although this solution meets the requirement, creating your own Lambda function for this scenario is not really necessary. It is much simpler to control the ECS task directly as the target for the CloudWatch Events rule. Take note that the scenario asks for the solution that is the easiest to implement. The option that says: Set up an alarm in CloudWatch to monitor CloudTrail since the S3 object-level operations are recorded on CloudTrail. Create two Lambda functions for increasing/decreasing the ECS task count. Set these as respective targets for the CloudWatch Alarm depending on the S3 event is incorrect because using CloudTrail, a CloudWatch Alarm, and two Lambda functions adds unnecessary complexity to what you want to achieve. CloudWatch Events can directly target an ECS task on the Targets section when you create a new rule. The option that says: Set up an alarm in CloudWatch to monitor CloudTrail since this S3 object-level operations are recorded on CloudTrail. Set two alarm actions to update the ECS task count to scale-out/scale-in depending on the S3 event is incorrect because you can't directly set CloudWatch Alarms to update the ECS task count. You have to use CloudWatch Events instead.
References: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatch-Events-tutorial-ECS.html https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Rule.html Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/ Amazon CloudWatch Overview: https://www.youtube.com/watch?v=q0DmxfyGkeU", "references": "" },
{ "question": ": In a government agency that you are working for, you have been assigned to put confidential tax documents on the AWS cloud. However, there is a concern from a security perspective on what can be put on AWS. What are the features in AWS that can ensure data security for your confidential documents? (Select TWO.)", "options": [ "A. Public Data Set Volume Encryption", "B. S3 On-Premises Data Encryption", "C. S3 Server-Side Encryption", "D. EBS On-Premises Data Encryption" ], "correct": "", "explanation": "Explanation You can secure the privacy of your data in AWS, both at rest and in transit, through encryption. If your data is stored in EBS volumes, you can enable EBS encryption, and if it is stored on Amazon S3, you can enable client-side and server-side encryption. Public Data Set Volume Encryption is incorrect as public data sets are designed to be publicly accessible.
EBS On-Premises Data Encryption and S3 On-Premises Data Encryption are both incorrect as there is no such thing as On-Premises Data Encryption for S3 and EBS; these services are in the AWS cloud and not on your on-premises network.
References: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-public-data-sets.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" },
{ "question": ": A car dealership website hosted in Amazon EC2 stores car listings in an Amazon Aurora database managed by Amazon RDS. Once a vehicle has been sold, its data must be removed from the current listings and forwarded to a distributed processing system. Which of the following options can satisfy the given requirement?", "options": [ "A. Create an RDS event subscription and send the notifications to AWS Lambda. Configure", "B. Create an RDS event subscription and send the notifications to Amazon SNS. Configure the SNS topic to fan out the event notifications to multiple Amazon SQS queues. Process the data", "C. Create a native function or a stored procedure that invokes a Lambda function. Configure", "D. Create an RDS event subscription and send the notifications to Amazon SQS. Configure the", "A. Attach an EBS volume in your EC2 instance. Use Amazon S3 to store your backup data and", "B. Attach an EBS volume in your EC2 instance. Use Amazon S3 to store your backup data and", "C. Attach an instance store volume in your EC2 instance. Use Amazon S3 to store your backup", "D. Attach an instance store volume in your existing EC2 instance. Use Amazon S3 to store your" ], "correct": "B. Attach an EBS volume in your EC2 instance. Use Amazon S3 to store your backup data and", "explanation": "Explanation Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput- and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS. Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. In an S3 Lifecycle configuration, you can define rules to transition objects from one storage class to another to save on storage costs. Amazon S3 supports a waterfall model for transitioning between storage classes. In this scenario, three services are required to implement this solution. The mission-critical workloads mean that you need to have a persistent block storage volume, and the designed service for this is Amazon EBS volumes. The second workload needs to have an object storage service, such as Amazon S3, to store your backup data.
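For the lifecycle part of the correct answer, a minimal sketch of such a rule in Python/boto3 (the bucket name and prefix are assumptions) could look like this:

import boto3

s3 = boto3.client("s3")

# Transition backup objects to the Glacier storage class once they are 2 years old.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 730, "StorageClass": "GLACIER"}],
            }
        ]
    },
)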
Amazon S3 enables you to configure a lifecycle policy that moves objects from S3 Standard to different storage classes. For the last requirement, you need archive storage such as Amazon S3 Glacier. Hence, the correct answer in this scenario is: Attach an EBS volume in your EC2 instance. Use Amazon S3 to store your backup data and configure a lifecycle policy to transition your objects to Amazon S3 Glacier. The option that says: Attach an EBS volume in your EC2 instance. Use Amazon S3 to store your backup data and configure a lifecycle policy to transition your objects to Amazon S3 One Zone-IA is incorrect because this lifecycle policy will transition your objects into an infrequently accessed storage class and not a storage class for data archiving. The option that says: Attach an instance store volume in your existing EC2 instance. Use Amazon S3 to store your backup data and configure a lifecycle policy to transition your objects to Amazon S3 Glacier is incorrect because an instance store volume is simply temporary block-level storage for EC2 instances. Also, you can't attach instance store volumes to an instance after you've launched it. You can specify the instance store volumes for your instance only when you launch it. The option that says: Attach an instance store volume in your EC2 instance. Use Amazon S3 to store your backup data and configure a lifecycle policy to transition your objects to Amazon S3 One Zone-IA is incorrect. Just like the previous option, the use of an instance store volume is not suitable for mission-critical workloads because the data can be lost if the underlying disk drive fails, the instance stops, or the instance is terminated. In addition, Amazon S3 Glacier is a more suitable option for data archival instead of Amazon S3 One Zone-IA.
References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html https://aws.amazon.com/s3/storage-classes/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Tutorials Dojo's AWS Storage Services Cheat Sheets: https://tutorialsdojo.com/aws-cheat-sheets-storage-services/", "references": "" },
{ "question": ": A Solutions Architect is working for a company which has multiple VPCs in various AWS regions. The Architect is assigned to set up a logging system which will track all of the changes made to their AWS resources in all regions, including the configurations made in IAM, CloudFront, AWS WAF, and Route 53. In order to pass the compliance requirements, the solution must ensure the security, integrity, and durability of the log data. It should also provide an event history of all API calls made in the AWS Management Console and AWS CLI. Which of the following solutions is the best fit for this scenario?", "options": [ "A. Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the", "B. Set up a new CloudWatch trail in a new S3 bucket using the CloudTrail console and also", "D. Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the" ], "correct": "A. Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the", "explanation": "Explanation An event in CloudTrail is the record of an activity in an AWS account. This activity can be an action taken by a user, role, or service that is monitorable by CloudTrail. CloudTrail events provide a history of both API and non-API account activity made through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services.
There are two types of events that can be logged in CloudTrail: management events and data events. By default, trails log management events, but not data events. A trail can be applied to all regions or a single region. As a best practice, create a trail that applies to all regions in the AWS partition in which you are working. This is the default setting when you create a trail in the CloudTrail console. For most services, events are recorded in the region where the action occurred. For global services such as AWS Identity and Access Management (IAM), AWS STS, Amazon CloudFront, and Route 53, events are delivered to any trail that includes global services, and are logged as occurring in the US East (N. Virginia) Region. In this scenario, the company requires a secure and durable logging solution that will track all of the activities of all AWS resources in all regions. CloudTrail can be used for this case with a multi-region trail enabled; however, it will only cover the activities of the regional services (EC2, S3, RDS, etc.) and not global services such as IAM, CloudFront, AWS WAF, and Route 53. In order to satisfy the requirement, you have to add the --include-global-service-events parameter in your AWS CLI command. The option that says: Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is correct because it provides security, integrity, and durability to your log data and, in addition, it has the --include-global-service-events parameter enabled which will also include activity from global services such as IAM, Route 53, AWS WAF, and CloudFront. The option that says: Set up a new CloudWatch trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is incorrect because you need to use CloudTrail instead of CloudWatch. The option that says: Set up a new CloudWatch trail in a new S3 bucket using the CloudTrail console and also pass the --is-multi-region-trail parameter then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is incorrect because you need to use CloudTrail instead of CloudWatch. In addition, the --include-global-service-events parameter is also missing in this setup. The option that says: Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --no-include-global-service-events parameters then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is incorrect because the --is-multi-region-trail parameter is not enough; you also need to add the --include-global-service-events parameter and not --no-include-global-service-events.
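The equivalent call can also be made through the SDK. The sketch below (trail name, bucket name, and KMS key alias are placeholders) shows the two flags that matter in this scenario expressed as their boto3 parameters:

import boto3

cloudtrail = boto3.client("cloudtrail")

# IsMultiRegionTrail and IncludeGlobalServiceEvents correspond to the
# --is-multi-region-trail and --include-global-service-events CLI parameters.
cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="my-cloudtrail-logs-bucket",
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
    EnableLogFileValidation=True,
    KmsKeyId="alias/cloudtrail-logs",           # assumed KMS key alias
)
cloudtrail.start_logging(Name="org-audit-trail")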
References: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-global-service-events http://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail-by-using-the-aws-cli.html Check out this AWS CloudTrail Cheat Sheet: https://tutorialsdojo.com/aws-cloudtrail/", "references": "" },
{ "question": ": An online shopping platform is hosted on an Auto Scaling group of Spot EC2 instances and uses Amazon Aurora PostgreSQL as its database. There is a requirement to optimize your database workloads in your cluster where you have to direct the write operations of the production traffic to your high-capacity instances and point the reporting queries sent by your internal staff to the low-capacity instances. Which is the most suitable configuration for your application as well as your Aurora database cluster to achieve this requirement?", "options": [ "A. In your application, use the instance endpoint of your Aurora database to handle the", "B. Configure your application to use the reader endpoint for both production traffic and", "C. Do nothing since by default, Aurora will automatically direct the production traffic to your", "D. Create a custom endpoint in Aurora based on the specified criteria for the production" ], "correct": "D. Create a custom endpoint in Aurora based on the specified criteria for the production", "explanation": "Explanation Amazon Aurora typically involves a cluster of DB instances instead of a single instance. Each connection is handled by a specific DB instance. When you connect to an Aurora cluster, the host name and port that you specify point to an intermediate handler called an endpoint. Aurora uses the endpoint mechanism to abstract these connections. Thus, you don't have to hardcode all the hostnames or write your own logic for load-balancing and rerouting connections when some DB instances aren't available. For certain Aurora tasks, different instances or groups of instances perform different roles. For example, the primary instance handles all data definition language (DDL) and data manipulation language (DML) statements. Up to 15 Aurora Replicas handle read-only query traffic. Using endpoints, you can map each connection to the appropriate instance or group of instances based on your use case. For example, to perform DDL statements you can connect to whichever instance is the primary instance. To perform queries, you can connect to the reader endpoint, with Aurora automatically performing load-balancing among all the Aurora Replicas. For clusters with DB instances of different capacities or configurations, you can connect to custom endpoints associated with different subsets of DB instances. For diagnosis or tuning, you can connect to a specific instance endpoint to examine details about a specific DB instance. The custom endpoint provides load-balanced database connections based on criteria other than the read-only or read-write capability of the DB instances. For example, you might define a custom endpoint to connect to instances that use a particular AWS instance class or a particular DB parameter group. Then you might tell particular groups of users about this custom endpoint. For example, you might direct internal users to low-capacity instances for report generation or ad hoc (one-time) querying, and direct production traffic to high-capacity instances.
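As a hedged illustration only (the cluster and instance identifiers are made up), a custom endpoint for the low-capacity reporting instances could be created like this with boto3:

import boto3

rds = boto3.client("rds")

# Group the low-capacity Aurora replicas behind a custom endpoint for reporting queries.
rds.create_db_cluster_endpoint(
    DBClusterIdentifier="shop-aurora-cluster",
    DBClusterEndpointIdentifier="reporting-endpoint",
    EndpointType="READER",
    StaticMembers=["aurora-low-capacity-1", "aurora-low-capacity-2"],
)

A second custom endpoint pointing at the high-capacity instances (or the cluster writer endpoint) would then carry the production write traffic.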
Hence, creating a custom endpoint in Aurora based on the specified criteria for the production traffic and another custom endpoint to handle the reporting queries is the correct answer. Configuring your application to use the reader endpoint for both production traffic and reporting queries, which will enable your Aurora database to automatically perform load-balancing among all the Aurora Replicas is incorrect. Although it is true that a reader endpoint enables your Aurora database to automatically perform load-balancing among all the Aurora Replicas, it is limited to read operations and it balances connections across all of the replicas rather than a specific subset. You still need to use custom endpoints to load-balance the database connections based on the specified criteria. The option that says: In your application, use the instance endpoint of your Aurora database to handle the incoming production traffic and use the cluster endpoint to handle reporting queries is incorrect because a cluster endpoint (also known as a writer endpoint) for an Aurora DB cluster simply connects to the current primary DB instance for that DB cluster. This endpoint can perform write operations in the database such as DDL statements, which is perfect for handling production traffic but not suitable for handling reporting queries since no write database operations will be sent. Moreover, the endpoint does not point to lower-capacity or high-capacity instances as per the requirement. A better solution for this is to use a custom endpoint. The option that says: Do nothing since by default, Aurora will automatically direct the production traffic to your high-capacity instances and the reporting queries to your low-capacity instances is incorrect because Aurora does not do this by default. You have to create custom endpoints in order to accomplish this requirement.", "references": "https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.Endpoints.html Amazon Aurora Overview: https://youtu.be/iwS1h7rLNBQ Check out this Amazon Aurora Cheat Sheet: https://tutorialsdojo.com/amazon-aurora/" },
{ "question": ": A company is using Amazon S3 to store frequently accessed data. When an object is created or deleted, the S3 bucket will send an event notification to the Amazon SQS queue. A solutions architect needs to create a solution that will notify the development and operations team about the created or deleted objects. Which of the following would satisfy this requirement?", "options": [ "A. Create a new Amazon SNS FIFO topic for the other team. Grant Amazon S3 permission to", "B. Set up another Amazon SQS queue for the other team. Grant Amazon S3 permission to", "C. Set up an Amazon SNS topic and configure two Amazon SQS queues to poll the SNS topic.", "D. Create an Amazon SNS topic and configure two Amazon SQS queues to subscribe to the" ], "correct": "D. Create an Amazon SNS topic and configure two Amazon SQS queues to subscribe to the", "explanation": "Explanation The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. You store this configuration in the notification subresource that is associated with a bucket.
Amazon S3 supports the following destinations where it can publish events: - Amazon Simple Notification Service (Amazon SNS) topic - Amazon Simple Queue Service (Amazon SQS) queue - AWS Lambda In Amazon SNS, the fanout scenario is when a message published to an SNS topic is replicated and pushed to multiple endpoints, such as Amazon SQS queues, HTTP(S) endpoints, and Lambda functions. This allows for parallel asynchronous processing. For example, you can develop an application that publishes a message to an SNS topic whenever an order is placed for a product. Then, SQS queues that are subscribed to the SNS topic receive identical notifications for the new order. An Amazon Elastic Compute Cloud (Amazon EC2) server instance attached to one of the SQS queues can handle the processing or fulfillment of the order. And you can attach another Amazon EC2 server instance to a data warehouse for analysis of all orders received. Based on the given scenario, the existing setup sends the event notification to an SQS queue. Since you need to send the notification to the development and operations team, you can use a combination of Amazon SNS and SQS. By using the message fanout pattern, you can create a topic and use two Amazon SQS queues to subscribe to the topic. If Amazon SNS receives an event notification, it will publish the message to both subscribers. Take note that Amazon S3 event notifications are designed to be delivered at least once and to one destination only. You cannot attach two or more SNS topics or SQS queues for an S3 event notification. Therefore, you must send the event notification to Amazon SNS. Hence, the correct answer is: Create an Amazon SNS topic and configure two Amazon SQS queues to subscribe to the topic. Grant Amazon S3 permission to send notifications to Amazon SNS and update the bucket to use the new SNS topic. The option that says: Set up another Amazon SQS queue for the other team. Grant Amazon S3 permission to send a notification to the second SQS queue is incorrect because you can only add one SQS or SNS destination at a time for an Amazon S3 event notification. If you need to send the events to multiple subscribers, you should implement a message fanout pattern with Amazon SNS and Amazon SQS. The option that says: Create a new Amazon SNS FIFO topic for the other team. Grant Amazon S3 permission to send the notification to the second SNS topic is incorrect. Just as mentioned in the previous option, you can only add one SQS or SNS destination at a time for an Amazon S3 event notification. In addition, neither an Amazon SNS FIFO topic nor an Amazon SQS FIFO queue is warranted in this scenario. Both of them can be used together to provide strict message ordering and message deduplication. The FIFO capabilities of each of these services work together to act as a fully managed service to integrate distributed applications that require data consistency in near-real-time. The option that says: Set up an Amazon SNS topic and configure two Amazon SQS queues to poll the SNS topic. Grant Amazon S3 permission to send notifications to Amazon SNS and update the bucket to use the new SNS topic is incorrect because you can't poll Amazon SNS. Instead of configuring queues to poll Amazon SNS, you should configure each Amazon SQS queue to subscribe to the SNS topic.
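A minimal sketch of the fanout wiring in Python/boto3 (the queue and topic names are assumptions, and the SQS access policy that allows SNS to deliver messages is omitted for brevity):

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="s3-object-events")["TopicArn"]

# One queue per team; both subscribe to the same topic so each team gets a copy.
for queue_name in ["dev-team-queue", "ops-team-queue"]:
    queue_url = sqs.create_queue(QueueName=queue_name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

The bucket's event notification configuration would then point at the SNS topic instead of the original SQS queue.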
References: https://docs.aws.amazon.com/AmazonS3/latest/dev/ways-to-add-notification-config-to-bucket.html https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#notification-how-to-overview https://docs.aws.amazon.com/sns/latest/dg/welcome.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Amazon SNS Overview: https://www.youtube.com/watch?v=ft5R45lEUJ8", "references": "" },
{ "question": ": A company plans to launch an Amazon EC2 instance in a private subnet for its internal corporate web portal. For security purposes, the EC2 instance must send data to Amazon DynamoDB and Amazon S3 via private endpoints that don't pass through the public Internet. Which of the following can meet the above requirements?", "options": [ "A. Use AWS VPN CloudHub to route all access to S3 and DynamoDB via private endpoints.", "B. Use AWS Transit Gateway to route all access to S3 and DynamoDB via private endpoints.", "C. Use AWS Direct Connect to route all access to S3 and DynamoDB via private endpoints.", "D. Use VPC endpoints to route all access to S3 and DynamoDB via private endpoints." ], "correct": "D. Use VPC endpoints to route all access to S3 and DynamoDB via private endpoints.", "explanation": "Explanation A VPC endpoint allows you to privately connect your VPC to supported AWS and VPC endpoint services powered by AWS PrivateLink without needing an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. In this scenario, you are asked to configure private endpoints to send data to Amazon DynamoDB and Amazon S3 without accessing the public Internet. Among the options given, a VPC endpoint is the most suitable service that will allow you to use private IP addresses to access both DynamoDB and S3 without any exposure to the public internet. Hence, the correct answer is the option that says: Use VPC endpoints to route all access to S3 and DynamoDB via private endpoints. The option that says: Use AWS Transit Gateway to route all access to S3 and DynamoDB via private endpoints is incorrect because a Transit Gateway simply connects your VPCs and on-premises networks through a central hub. It acts as a cloud router that allows you to integrate multiple networks. The option that says: Use AWS Direct Connect to route all access to S3 and DynamoDB via private endpoints is incorrect because AWS Direct Connect is primarily used to establish a dedicated network connection from your premises to AWS. The scenario doesn't state that the company is using its on-premises servers or has a hybrid cloud architecture. The option that says: Use AWS VPN CloudHub to route all access to S3 and DynamoDB via private endpoints is incorrect because AWS VPN CloudHub is mainly used to provide secure communication between remote sites and not for creating a private endpoint to access Amazon S3 and DynamoDB within the Amazon network.
References: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html https://docs.aws.amazon.com/glue/latest/dg/vpc-endpoints-s3.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" },
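As a quick, hedged sketch of the correct option above (the VPC ID, route table ID, and Region values are placeholders), gateway endpoints for S3 and DynamoDB could be created as follows:

import boto3

ec2 = boto3.client("ec2")

# Gateway endpoints add routes to the given route tables so that S3 and DynamoDB
# traffic stays on the Amazon network instead of traversing the public Internet.
for service in ["com.amazonaws.us-east-1.s3", "com.amazonaws.us-east-1.dynamodb"]:
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0abc1234567890def",
        ServiceName=service,
        RouteTableIds=["rtb-0abc1234567890def"],
    )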
{ "question": ": A company hosted a web application in an Auto Scaling group of EC2 instances. The IT manager is concerned about the over-provisioning of resources that can cause higher operating costs. A Solutions Architect has been instructed to create a cost-effective solution without affecting the performance of the application. Which dynamic scaling policy should be used to satisfy this requirement?", "options": [ "A. Use simple scaling.", "B. Use suspend and resume scaling.", "C. Use scheduled scaling.", "D. Use target tracking scaling." ], "correct": "D. Use target tracking scaling.", "explanation": "Explanation An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. An Auto Scaling group also enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies. Both maintaining the number of instances in an Auto Scaling group and automatic scaling are the core functionality of the Amazon EC2 Auto Scaling service. The size of an Auto Scaling group depends on the number of instances that you set as the desired capacity. You can adjust its size to meet demand, either manually or by using automatic scaling. Step scaling policies and simple scaling policies are two of the dynamic scaling options available for you to use. Both require you to create CloudWatch alarms for the scaling policies. Both require you to specify the high and low thresholds for the alarms. Both require you to define whether to add or remove instances, and how many, or set the group to an exact size. The main difference between the policy types is the step adjustments that you get with step scaling policies. When step adjustments are applied, and they increase or decrease the current capacity of your Auto Scaling group, the adjustments vary based on the size of the alarm breach. The primary issue with simple scaling is that after a scaling activity is started, the policy must wait for the scaling activity or health check replacement to complete and the cooldown period to expire before responding to additional alarms. Cooldown periods help to prevent the initiation of additional scaling activities before the effects of previous activities are visible. With a target tracking scaling policy, you can increase or decrease the current capacity of the group based on a target value for a specific metric. This policy will help resolve the over-provisioning of your resources. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking scaling policy also adjusts to changes in the metric due to a changing load pattern. Hence, the correct answer is: Use target tracking scaling. The option that says: Use simple scaling is incorrect because you need to wait for the cooldown period to complete before initiating additional scaling activities. Target tracking or step scaling policies can trigger a scaling activity immediately without waiting for the cooldown period to expire. The option that says: Use scheduled scaling is incorrect because this policy is mainly used for predictable traffic patterns. You need to use the target tracking scaling policy to optimize the cost of your infrastructure without affecting the performance.
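For reference, a minimal sketch of a target tracking policy in Python/boto3 (the Auto Scaling group name and the 50% CPU target are assumptions, not values from the scenario):

import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU utilization of the group at (or close to) the target value;
# capacity is added or removed automatically as the load pattern changes.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-portal-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)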
The option that says: Use suspend and resume scaling is incorrect because this type is used to temporarily pause scaling activities triggered by your scaling policies and scheduled actions.
References: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", "references": "" },
{ "question": ": A company needs to design an online analytics application that uses a Redshift cluster for its data warehouse. Which of the following services allows them to monitor all API calls to the Redshift instance and can also provide secured data for auditing and compliance purposes?", "options": [ "A. AWS CloudTrail", "B. Amazon CloudWatch", "C. AWS X-Ray", "D. Amazon Redshift Spectrum" ], "correct": "A. AWS CloudTrail", "explanation": "Explanation AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. By default, CloudTrail is enabled on your AWS account when you create it. When activity occurs in your AWS account, that activity is recorded in a CloudTrail event. You can easily view recent events in the CloudTrail console by going to Event history. CloudTrail provides an event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, API calls, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. Hence, the correct answer is: AWS CloudTrail. Amazon CloudWatch is incorrect. Although this is also a monitoring service, it cannot track the API calls to your AWS resources. AWS X-Ray is incorrect because this is not a suitable service to use to track each API call to your AWS resources. It just helps you debug and analyze your microservices applications with request tracing so you can find the root cause of issues and performance bottlenecks. Amazon Redshift Spectrum is incorrect because this is not a monitoring service but rather a feature of Amazon Redshift that enables you to query and analyze all of your data in Amazon S3 using the open data formats you already use, with no data loading or transformations needed.
References: https://aws.amazon.com/cloudtrail/ https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html Check out this AWS CloudTrail Cheat Sheet: https://tutorialsdojo.com/aws-cloudtrail/", "references": "" },
{ "question": ": A startup is using Amazon RDS to store data from a web application. Most of the time, the application has low user activity, but it receives bursts of traffic within seconds whenever there is a new product announcement. The Solutions Architect needs to create a solution that will allow users around the globe to access the data using an API. What should the Solutions Architect do to meet the above requirement?", "options": [ "A. Create an API using Amazon API Gateway and use the Amazon ECS cluster with Service", "B. Create an API using Amazon API Gateway and use Amazon Elastic Beanstalk with Auto", "C. Create an API using Amazon API Gateway and use an Auto Scaling group of Amazon EC2", "D. Create an API using Amazon API Gateway and use AWS Lambda to handle the bursts of" ], "correct": "D.
Create an API using Amazon API Gateway and use AWS Lambda to handle the bursts of", "explanation": "Explanation AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app. The first time you invoke your function, AWS Lambda creates an instance of the function and runs its handler method to process the event. When the function returns a response, it stays active and waits to process additional events. If you invoke the function again while the first event is being processed, Lambda initializes another instance, and the function processes the two events concurrently. As more events come in, Lambda routes them to available instances and creates new instances as needed. When the number of requests decreases, Lambda stops unused instances to free up the scaling capacity for other functions. Your functions' concurrency is the number of instances that serve requests at a given time. For an initial burst of traffic, your functions' cumulative concurrency in a Region can reach an initial level of between 500 and 3000, which varies per Region. Based on the given scenario, you need to create a solution that will satisfy the two requirements. The first requirement is to create a solution that will allow the users to access the data using an API. To implement this solution, you can use Amazon API Gateway. The second requirement is to handle the bursts of traffic within seconds. You should use AWS Lambda in this scenario because Lambda functions can absorb reasonable bursts of traffic for approximately 15-30 minutes. Lambda can scale faster than the regular Auto Scaling feature of Amazon EC2, Amazon Elastic Beanstalk, or Amazon ECS. This is because AWS Lambda is more lightweight than other computing services. Under the hood, Lambda can run your code on thousands of available AWS-managed EC2 instances (that could already be running) within seconds to accommodate traffic. This is faster than the Auto Scaling process of launching new EC2 instances, which could take a few minutes. An alternative is to overprovision your compute capacity, but that will incur significant costs. The best option to implement given the requirements is a combination of AWS Lambda and Amazon API Gateway. Hence, the correct answer is: Create an API using Amazon API Gateway and use AWS Lambda to handle the bursts of traffic. The option that says: Create an API using Amazon API Gateway and use the Amazon ECS cluster with Service Auto Scaling to handle the bursts of traffic in seconds is incorrect. AWS Lambda is a better option than Amazon ECS since it can handle a sudden burst of traffic within seconds and not minutes. The option that says: Create an API using Amazon API Gateway and use Amazon Elastic Beanstalk with Auto Scaling to handle the bursts of traffic in seconds is incorrect because, just like the previous option, the use of Auto Scaling has a delay of a few minutes as it launches new EC2 instances that will be used by Amazon Elastic Beanstalk.
The option that says: Create an API using Amazon API Gateway and use an Auto Scaling group of Amazon EC2 instances to handle the bursts of traffic in seconds is incorrect because the processing time of Amazon EC2 Auto Scaling to provision new resources takes minutes. Take note that in the scenario, a burst of traffic within seconds is expected to happen.
References: https://aws.amazon.com/blogs/startups/from-0-to-100-k-in-seconds-instant-scale-with-aws-lambda/ https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html Check out this AWS Lambda Cheat Sheet: https://tutorialsdojo.com/aws-lambda/", "references": "" },
{ "question": ": A company has a cloud architecture that is composed of Linux and Windows EC2 instances that process high volumes of financial data 24 hours a day, 7 days a week. To ensure high availability of the systems, the Solutions Architect needs to create a solution that allows them to monitor the memory and disk utilization metrics of all the instances. Which of the following is the most suitable monitoring solution to implement?", "options": [ "A. Enable the Enhanced Monitoring option in EC2 and install CloudWatch agent to all the", "B. Use Amazon Inspector and install the Inspector agent to all EC2 instances.", "C. Install the CloudWatch agent to all the EC2 instances that gathers the memory and disk", "D. Use the default CloudWatch configuration to EC2 instances where the memory and disk" ], "correct": "C. Install the CloudWatch agent to all the EC2 instances that gathers the memory and disk", "explanation": "Explanation Amazon CloudWatch has available Amazon EC2 metrics for you to use for monitoring CPU utilization, network utilization, disk performance, and disk reads/writes. In case you need to monitor the items below, you need to prepare a custom metric using a Perl or other shell script, as there are no ready-to-use metrics for: Memory utilization, Disk swap utilization, Disk space utilization, Page file utilization, Log collection. Take note that there is a multi-platform CloudWatch agent which can be installed on both Linux and Windows-based instances. You can use a single agent to collect both system metrics and log files from Amazon EC2 instances and on-premises servers. This agent supports both Windows Server and Linux and enables you to select the metrics to be collected, including sub-resource metrics such as per-CPU core. It is recommended that you use the new agent instead of the older monitoring scripts to collect metrics and logs. Hence, the correct answer is: Install the CloudWatch agent to all the EC2 instances that gathers the memory and disk utilization data. View the custom metrics in the Amazon CloudWatch console. The option that says: Use the default CloudWatch configuration to EC2 instances where the memory and disk utilization metrics are already available. Install the AWS Systems Manager (SSM) Agent to all the EC2 instances is incorrect because, by default, CloudWatch does not automatically provide memory and disk utilization metrics of your instances. You have to set up custom CloudWatch metrics to monitor the memory, disk swap, disk space, and page file utilization of your instances. The option that says: Enable the Enhanced Monitoring option in EC2 and install CloudWatch agent to all the EC2 instances to be able to view the memory and disk utilization in the CloudWatch dashboard is incorrect because Enhanced Monitoring is a feature of Amazon RDS. By default, Enhanced Monitoring metrics are stored for 30 days in CloudWatch Logs.
The option that says: Use Amazon Inspector and install the Inspector agent to all EC2 instances is incorrect because Amazon Inspector is an automated security assessment service that helps you test the network accessibility of your Amazon EC2 instances and the security state of your applications running on the instances. It does not provide a custom metric to track the memory and disk utilization of each and every EC2 instance in your VPC.
References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_ec2.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html#using_put_script Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/ CloudWatch Agent vs SSM Agent vs Custom Daemon Scripts: https://tutorialsdojo.com/cloudwatch-agent-vs-ssm-agent-vs-custom-daemon-scripts/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/", "references": "" },
{ "question": ": A company is in the process of migrating their applications to AWS. One of their systems requires a database that can scale globally and handle frequent schema changes. The application should not have any downtime or performance issues whenever there is a schema change in the database. It should also provide a low latency response to high-traffic queries. Which is the most suitable database solution to use to achieve this requirement?", "options": [ "A. Redshift", "B. Amazon DynamoDB", "C. An Amazon RDS instance in Multi-AZ Deployments configuration", "D. An Amazon Aurora database with Read Replicas" ], "correct": "B. Amazon DynamoDB", "explanation": "Explanation Before we proceed in answering this question, we must first be clear with the actual definition of a \"schema\". Basically, the English definition of a schema is: a representation of a plan or theory in the form of an outline or model. Just think of a schema as the \"structure\" or a \"model\" of your data in your database. Since the scenario requires that the schema, or the structure of your data, changes frequently, you have to pick a database which provides a non-rigid and flexible way of adding or removing new types of data. This is a classic example of choosing between a relational database and a non-relational (NoSQL) database. A relational database is known for having a rigid schema, with a lot of constraints and limits as to which (and what type of) data can be inserted or not. It is primarily used for scenarios where you have to support complex queries which fetch data across a number of tables. It is best for scenarios where you have complex table relationships, but for use cases where you need to have a flexible schema, this is not a suitable database to use. A NoSQL database is not as rigid as a relational database because you can easily add or remove rows or elements in your table/collection entry. It also has a more flexible schema because it can store complex hierarchical data within a single item which, unlike a relational database, does not entail changing multiple related tables. Hence, the best answer to be used here is a NoSQL database, like DynamoDB. When your business requires a low-latency response to high-traffic queries, taking advantage of a NoSQL system generally makes technical and economic sense. Amazon DynamoDB helps solve the problems that limit relational system scalability by avoiding them.
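To make the schema-flexibility point concrete, here is a small hedged sketch (the table and attribute names are invented) showing two items with different attribute sets stored in the same DynamoDB table without any migration:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("CarListings")            # hypothetical table

# Only the key attribute is fixed; every other attribute can vary per item.
table.put_item(Item={"pk": "listing-1", "model": "Basic sedan", "price": 18000})
table.put_item(Item={
    "pk": "listing-2",
    "model": "Deluxe SUV",
    "price": 32000,
    "options": {"sunroof": True, "towing": False},   # new nested attribute, no schema change
    "tags": ["featured", "sale"],
})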
In DynamoDB, you design your schema specifically to make the most common and important queries as fast and as inexpensive as possible. Your data structures are tailored to the specific requirements of your business use cases. Remember that a relational database system does not scale well for the following reasons: - It normalizes data and stores it on multiple tables that require multiple queries to write to disk. - It generally incurs the performance costs of an ACID-compliant transaction system. - It uses expensive joins to reassemble required views of query results. For DynamoDB, it scales well due to these reasons: - Its schema flexibility lets DynamoDB store complex hierarchical data within a single item. DynamoDB is not a totally schemaless database since the very definition of a schema is just the model or structure of your data. - Composite key design lets it store related items close together on the same table. An Amazon RDS instance in Multi-AZ Deployments configuration and an Amazon Aurora database with Read Replicas are incorrect because both of them are types of relational databases. Redshift is incorrect because it is primarily used for OLAP systems.
References: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-general-nosql-design.html https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-relational-modeling.html https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL.html Also check the AWS Certified Solutions Architect Official Study Guide: Associate Exam 1st Edition and turn to page 161, which talks about NoSQL databases. Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate/", "references": "" },
{ "question": ": A company is using a combination of API Gateway and Lambda for the web services of an online web portal that is being accessed by hundreds of thousands of clients each day. They will be announcing a new revolutionary product and it is expected that the web portal will receive a massive number of visitors from all around the globe. How can you protect the backend systems and applications from traffic spikes?", "options": [ "A. Use throttling limits in API Gateway", "B. API Gateway will automatically scale and handle massive traffic spikes so you do not have", "C. Manually upgrade the EC2 instances being used by API Gateway", "D. Deploy Multi-AZ in API Gateway with Read Replica" ], "correct": "A. Use throttling limits in API Gateway", "explanation": "Explanation Amazon API Gateway provides throttling at multiple levels, including globally and per service call. Throttling limits can be set for standard rates and bursts. For example, API owners can set a rate limit of 1,000 requests per second for a specific method in their REST APIs, and also configure Amazon API Gateway to handle a burst of 2,000 requests per second for a few seconds. Amazon API Gateway tracks the number of requests per second. Any requests over the limit will receive a 429 HTTP response. The client SDKs generated by Amazon API Gateway retry calls automatically when met with this response. Hence, the correct answer is: Use throttling limits in API Gateway.
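One hedged way to express such limits programmatically is through a usage plan (the API ID and stage name below are placeholders); stage- and method-level throttling can also be configured directly on the deployed stage:

import boto3

apigateway = boto3.client("apigateway")

# Cap steady-state traffic at 1,000 requests/second with bursts up to 2,000,
# matching the example figures mentioned above.
apigateway.create_usage_plan(
    name="public-portal-plan",
    throttle={"rateLimit": 1000.0, "burstLimit": 2000},
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
)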
The option that says: API Gateway will automatically scale and handle massive traffic spikes so you do not have to do anything is incorrect. Although it can scale using AWS Edge locations, you still need to configure throttling to further manage the bursts of your APIs. Manually upgrading the EC2 instances being used by API Gateway is incorrect because API Gateway is a fully managed service and, hence, you do not have access to its underlying resources. Deploying Multi-AZ in API Gateway with Read Replica is incorrect because RDS has Multi-AZ and Read Replica capabilities, not API Gateway.", "references": "https://aws.amazon.com/api-gateway/faqs/#Throttling_and_Caching Check out this Amazon API Gateway Cheat Sheet: https://tutorialsdojo.com/amazon-api-gateway/" },
{ "question": ": A company is designing a banking portal that uses Amazon ElastiCache for Redis as its distributed session management component. Since the other Cloud Engineers in your department have access to your ElastiCache cluster, you have to secure the session data in the portal by requiring them to enter a password before they are granted permission to execute Redis commands. As the Solutions Architect, which of the following should you do to meet the above requirement?", "options": [ "A. Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --", "B. Set up a Redis replication group and enable the AtRestEncryptionEnabled parameter.", "C. Set up an IAM Policy and MFA which requires the Cloud Engineers to enter their IAM", "D. Enable the in-transit encryption for Redis replication groups." ], "correct": "A. Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --", "explanation": "Explanation Using the Redis AUTH command can improve data security by requiring the user to enter a password before they are granted permission to execute Redis commands on a password-protected Redis server. Hence, the correct answer is: Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled. To require that users enter a password on a password-protected Redis server, include the parameter --auth-token with the correct password when you create your replication group or cluster and on all subsequent commands to the replication group or cluster. Setting up an IAM Policy and MFA which requires the Cloud Engineers to enter their IAM credentials and token before they can access the ElastiCache cluster is incorrect because this is not possible in IAM. You have to use the Redis AUTH option instead. Setting up a Redis replication group and enabling the AtRestEncryptionEnabled parameter is incorrect because the Redis at-rest encryption feature only secures the data inside the in-memory data store. You have to use the Redis AUTH option instead. Enabling the in-transit encryption for Redis replication groups is incorrect. Although in-transit encryption is part of the solution, it is missing the most important piece, which is the Redis AUTH option.
References: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth.html https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/encryption.html Check out this Amazon ElastiCache Cheat Sheet: https://tutorialsdojo.com/amazon-elasticache/ Redis (cluster mode enabled vs disabled) vs Memcached: https://tutorialsdojo.com/redis-cluster-mode-enabled-vs-disabled-vs-memcached/", "references": "" },
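For illustration, roughly the same settings expressed with boto3 (the group name, node type, and token are placeholders): AuthToken and TransitEncryptionEnabled correspond to the --auth-token and --transit-encryption-enabled parameters discussed above.

import boto3

elasticache = boto3.client("elasticache")

# Redis AUTH requires in-transit encryption to be enabled on the replication group.
elasticache.create_replication_group(
    ReplicationGroupId="banking-sessions",
    ReplicationGroupDescription="Password-protected session store",
    Engine="redis",
    CacheNodeType="cache.t3.medium",
    NumCacheClusters=2,
    TransitEncryptionEnabled=True,
    AuthToken="REPLACE_WITH_A_LONG_RANDOM_TOKEN",
)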
The application will be used globally by users to upload and store several types of files. Based on user trends, files that are older than 2 years must be stored in a different storage class. The Solutions Architect of the company needs to create a cost-effective and scalable solution to store the old files yet still provide durability and high availability. Which of the following approaches can be used to fulfill this requirement? (Select TWO.)", "options": [ "A. Use Amazon EBS volumes to store the files. Configure the Amazon Data Lifecycle Manager", "B. Use Amazon S3 and create a lifecycle policy that will move the objects to Amazon S3", "C. Use a RAID 0 storage configuration that stripes multiple Amazon EBS volumes together to", "D. Use Amazon S3 and create a lifecycle policy that will move the objects to Amazon S3" ], "correct": "", "explanation": "Explanation Amazon S3 stores data as objects within buckets. An object is a file and any optional metadata that describes the file. To store a file in Amazon S3, you upload it to a bucket. When you upload a file as an object, you can set permissions on the object and any metadata. Buckets are containers for objects. You can have one or more buckets. You can control access for each bucket, deciding who can create, delete, and list objects in it. You can also choose the geographical region where Amazon S3 will store the bucket and its contents, and view access logs for the bucket and its objects. To move a file to a different storage class, you can use Amazon S3 or Amazon EFS. Both services have lifecycle configurations. Take note that Amazon EFS can only transition a file to the IA storage class after 90 days. Since you need to move the files that are older than 2 years to a more cost-effective and scalable solution, you should use the Amazon S3 lifecycle configuration. With S3 lifecycle rules, you can transition files to S3 Standard-IA or S3 Glacier. Using S3 Glacier expedited retrieval, you can quickly access your files within 1-5 minutes. Hence, the correct answers are: - Use Amazon S3 and create a lifecycle policy that will move the objects to Amazon S3 Glacier after 2 years. - Use Amazon S3 and create a lifecycle policy that will move the objects to Amazon S3 Standard-IA after 2 years. The option that says: Use Amazon EFS and create a lifecycle policy that will move the objects to Amazon EFS-IA after 2 years is incorrect because the maximum period for the EFS lifecycle policy is only 90 days. The requirement is to move the files that are older than 2 years, or 730 days. The option that says: Use Amazon EBS volumes to store the files. Configure the Amazon Data Lifecycle Manager (DLM) to schedule snapshots of the volumes after 2 years is incorrect because Amazon EBS costs more and is not as scalable as Amazon S3. It has some limitations when accessed by multiple EC2 instances. There are also huge costs involved in using the multi-attach feature on a Provisioned IOPS EBS volume to allow multiple EC2 instances to access the volume. The option that says: Use a RAID 0 storage configuration that stripes multiple Amazon EBS volumes together to store the files. Configure the Amazon Data Lifecycle Manager (DLM) to schedule snapshots of the volumes after 2 years is incorrect because RAID (Redundant Array of Independent Disks) is just a data storage virtualization technology that combines multiple storage devices to achieve higher performance or data durability.
RAID 0 can stripe multiple volumes together for greater I/O performance than you can achieve with a single volume. On the other hand, RAID 1 can mirror two volumes together to achieve on-instance redundancy. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html https://docs.aws.amazon.com/efs/latest/ug/lifecycle-management-efs.html https://aws.amazon.com/s3/faqs/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": An online medical system hosted in AWS stores sensitive Personally Identifiable Information (PII) of the users in an Amazon S3 bucket. Both the master keys and the unencrypted data should never be sent to AWS to comply with the strict compliance and regulatory requirements of the company. Which S3 encryption technique should the Architect use?", "options": [ "A. Use S3 client-side encryption with a client-side master key.", "B. Use S3 client-side encryption with a KMS-managed customer master key.", "C. Use S3 server-side encryption with a KMS managed key.", "D. Use S3 server-side encryption with customer provided key." ], "correct": "A. Use S3 client-side encryption with a client-side master key.", "explanation": "Explanation Client-side encryption is the act of encrypting data before sending it to Amazon S3. To enable client-side encryption, you have the following options: - Use an AWS KMS-managed customer master key. - Use a client-side master key. When using an AWS KMS-managed customer master key to enable client-side data encryption, you provide an AWS KMS customer master key ID (CMK ID) to AWS. On the other hand, when you use a client-side master key for client-side data encryption, your client-side master keys and your unencrypted data are never sent to AWS. It's important that you safely manage your encryption keys because if you lose them, you can't decrypt your data. This is how client-side encryption using a client-side master key works: When uploading an object - You provide a client-side master key to the Amazon S3 encryption client. The client uses the master key only to encrypt the data encryption key that it generates randomly. The process works like this: 1. The Amazon S3 encryption client generates a one-time-use symmetric key (also known as a data encryption key or data key) locally. It uses the data key to encrypt the data of a single Amazon S3 object. The client generates a separate data key for each object. 2. The client encrypts the data encryption key using the master key that you provide. The client uploads the encrypted data key and its material description as part of the object metadata. The client uses the material description to determine which client-side master key to use for decryption. 3. The client uploads the encrypted data to Amazon S3 and saves the encrypted data key as object metadata (x-amz-meta-x-amz-key) in Amazon S3. When downloading an object - The client downloads the encrypted object from Amazon S3. Using the material description from the object's metadata, the client determines which master key to use to decrypt the data key. The client uses that master key to decrypt the data key and then uses the data key to decrypt the object. Hence, the correct answer is to use S3 client-side encryption with a client-side master key.
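The envelope-encryption flow described above can be sketched in a few lines of Python. This is only an illustration of the pattern (generate a data key locally, encrypt the object, wrap the data key with a local master key, and store the wrapped key as object metadata), not the official Amazon S3 Encryption Client; the bucket name, object key, and file name are placeholders.

import boto3
from cryptography.fernet import Fernet

s3 = boto3.client("s3")

master_key = Fernet.generate_key()   # client-side master key, kept outside AWS
data_key = Fernet.generate_key()     # one-time data key generated locally

plaintext = open("patient-record.json", "rb").read()        # placeholder file
ciphertext = Fernet(data_key).encrypt(plaintext)             # encrypt the object locally
wrapped_key = Fernet(master_key).encrypt(data_key)           # wrap the data key with the master key

# Only the ciphertext and the wrapped data key ever leave the client.
s3.put_object(
    Bucket="example-medical-bucket",                         # placeholder bucket
    Key="records/patient-record.json",
    Body=ciphertext,
    Metadata={"x-amz-key": wrapped_key.decode()},            # wrapped key stored as object metadata
)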
Using S3 client-side encryption with a KMS-managed customer master key is incorrect because in client-side encryption with a KMS-managed customer master key, you provide an AWS KMS customer master key ID (CMK ID) to AWS. The scenario clearly indicates that both the master keys and the unencrypted data should never be sent to AWS. Using S3 server-side encryption with a KMS managed key is incorrect because the scenario mentioned that the unencrypted data should never be sent to AWS, which means that you have to use client-side encryption in order to encrypt the data first before sending it to AWS. In this way, you can ensure that there is no unencrypted data being uploaded to AWS. In addition, the master key used by Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS) is uploaded and managed by AWS, which directly violates the requirement of not uploading the master key. Using S3 server-side encryption with customer provided key is incorrect because, just as mentioned above, you have to use client-side encryption in this scenario instead of server-side encryption. For S3 server-side encryption with customer-provided key (SSE-C), you actually provide the encryption key as part of your request to upload the object to S3. Using this key, Amazon S3 manages both the encryption (as it writes to disks) and decryption (when you access your objects). References: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html", "references": "" }, { "question": ": An application consists of multiple EC2 instances in private subnets in different availability zones. The application uses a single NAT Gateway for downloading software patches from the Internet to the instances. There is a requirement to protect the application from a single point of failure when the NAT Gateway encounters a failure or if its availability zone goes down. How should the Solutions Architect redesign the architecture to be more highly available and cost-effective?", "options": [ "A. Create three NAT Gateways in each availability zone. Configure the route table in each", "B. Create a NAT Gateway in each availability zone. Configure the route table in each private", "C. Create two NAT Gateways in each availability zone. Configure the route table in each", "D. Create a NAT Gateway in each availability zone. Configure the route table in each public" ], "correct": "B. Create a NAT Gateway in each availability zone. Configure the route table in each private", "explanation": "Explanation A NAT Gateway is a highly available, managed Network Address Translation (NAT) service for your resources in a private subnet to access the Internet. A NAT gateway is created in a specific Availability Zone and implemented with redundancy in that zone. You must create a NAT gateway in a public subnet to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances. If you have resources in multiple Availability Zones and they share one NAT gateway, and if the NAT gateway's Availability Zone is down, resources in the other Availability Zones lose Internet access. To create an Availability Zone-independent architecture, create a NAT gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.
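A minimal boto3 sketch of this AZ-independent pattern: one NAT gateway per Availability Zone, with each private subnet's route table pointing at the NAT gateway in its own AZ. The subnet, Elastic IP allocation, and route table IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# (public_subnet_id, elastic_ip_allocation_id, private_route_table_id) per AZ -- placeholders
zones = [
    ("subnet-pub-az1", "eipalloc-az1", "rtb-private-az1"),
    ("subnet-pub-az2", "eipalloc-az2", "rtb-private-az2"),
]

for public_subnet, allocation_id, private_rtb in zones:
    # Create a NAT gateway in the public subnet of this AZ.
    nat = ec2.create_nat_gateway(SubnetId=public_subnet, AllocationId=allocation_id)
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # Route this AZ's private subnet traffic through its own NAT gateway.
    ec2.create_route(
        RouteTableId=private_rtb,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )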
Hence, the correct answer is: Create a NAT Gateway in each availability zone. Configure the route table in each private subnet to ensure that instances use the NAT Gateway in the same availability zone. The option that says: Create a NAT Gateway in each availability zone. Configure the route table in each public subnet to ensure that instances use the NAT Gateway in the same availability zone is incorrect because you should configure the route table in the private subnet and not the public subnet to associate the right instances in the private subnet. The options that say: Create two NAT Gateways in each availability zone. Configure the route table in each public subnet to ensure that instances use the NAT Gateway in the same availability zone and Create three NAT Gateways in each availability zone. Configure the route table in each private subnet to ensure that instances use the NAT Gateway in the same availability zone are both incorrect because a single NAT Gateway in each availability zone is enough. NAT Gateway is already redundant in nature, meaning AWS already handles any failures that occur in your NAT Gateway in an availability zone. References: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A tech company has a CRM application hosted on an Auto Scaling group of On-Demand EC2 instances. The application is extensively used during office hours from 9 in the morning till 5 in the afternoon. Their users are complaining that the performance of the application is slow during the start of the day but then works normally after a couple of hours. Which of the following can be done to ensure that the application works properly at the beginning of the day?", "options": [ "A. Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances", "B. Set up an Application Load Balancer (ALB) to your architecture to ensure that the traffic is", "C. Configure a Scheduled scaling policy for the Auto Scaling group to launch new instances", "D. Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances" ], "correct": "C. Configure a Scheduled scaling policy for the Auto Scaling group to launch new instances", "explanation": "Explanation Scaling based on a schedule allows you to scale your application in response to predictable load changes. For example, every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling activities based on the predictable traffic patterns of your web application. To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action. The scheduled action tells Amazon EC2 Auto Scaling to perform a scaling action at specified times. To create a scheduled scaling action, you specify the start time when the scaling action should take effect, and the new minimum, maximum, and desired sizes for the scaling action. At the specified time, Amazon EC2 Auto Scaling updates the group with the values for minimum, maximum, and desired size specified by the scaling action. You can create scheduled actions for scaling one time only or for scaling on a recurring schedule.
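A short boto3 sketch of a recurring scheduled action that scales the group out shortly before office hours and back in afterwards; the group name, sizes, and cron expressions are assumptions for illustration.

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out before the 9 AM peak (cron expression is evaluated in UTC by default).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="crm-asg",                       # placeholder group name
    ScheduledActionName="scale-out-before-office-hours",
    Recurrence="30 8 * * 1-5",
    MinSize=10,
    MaxSize=20,
    DesiredCapacity=10,
)

# Scale back in after office hours.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="crm-asg",
    ScheduledActionName="scale-in-after-office-hours",
    Recurrence="0 18 * * 1-5",
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=2,
)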
Hence, configuring a Scheduled scaling policy for the Auto Scaling group to launch new instances before the start of the day is the correct answer. You need to configure a Scheduled scaling policy. This will ensure that the instances are already scaled up and ready before the start of the day, since this is when the application is used the most. Configuring a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the CPU utilization and configuring a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the Memory utilization are both incorrect because although these are valid solutions, it is still better to configure a Scheduled scaling policy as you already know the exact peak hours of your application. By the time either the CPU or Memory hits a peak, the application already has performance issues, so you need to ensure the scaling is done beforehand using a Scheduled scaling policy. Setting up an Application Load Balancer (ALB) to your architecture to ensure that the traffic is properly distributed on the instances is incorrect. Although the Application Load Balancer can balance the traffic, it cannot increase the number of instances based on demand.", "references": "https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/" }, { "question": ": A company collects atmospheric data such as temperature, air pressure, and humidity from different countries. Each site location is equipped with various weather instruments and a high-speed Internet connection. The average collected data in each location is around 500 GB and will be analyzed by a weather forecasting application hosted in Northern Virginia. As the Solutions Architect, you need to aggregate all the data in the fastest way. Which of the following options can satisfy the given requirement?", "options": [ "A. Set up a Site-to-Site VPN connection.", "B. Enable Transfer Acceleration in the destination bucket and upload the collected data using", "C. Upload the data to the closest S3 bucket. Set up a cross-region replication and copy the", "D. Use AWS Snowball Edge to transfer large amounts of data." ], "correct": "B. Enable Transfer Acceleration in the destination bucket and upload the collected data using", "explanation": "Explanation Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet. It's a simple storage service that offers industry-leading durability, availability, performance, security, and virtually unlimited scalability at very low costs. Amazon S3 is also designed to be highly flexible. Store any type and amount of data that you want; read the same piece of data a million times or only for emergency disaster recovery; build a simple FTP application or a sophisticated web application. Since the weather forecasting application is located in N. Virginia, you need to transfer all the data to that same AWS Region. With Amazon S3 Transfer Acceleration, you can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfers of larger objects. Multipart upload allows you to upload a single object as a set of parts. After all the parts of your object are uploaded, Amazon S3 then presents the data as a single object. This approach is the fastest way to aggregate all the data.
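A hedged boto3 sketch that combines both pieces: enable Transfer Acceleration on the destination bucket, then upload through the accelerate endpoint with multipart settings. The bucket name, file name, and thresholds are placeholders.

import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

bucket = "weather-data-us-east-1"   # placeholder destination bucket in N. Virginia

# One-time: enable Transfer Acceleration on the bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=bucket,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload through the accelerate endpoint using multipart upload.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
transfer_cfg = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,   # switch to multipart for files larger than 64 MB
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,                      # upload parts in parallel
)
s3_accel.upload_file("site-readings.tar", bucket, "raw/site-readings.tar", Config=transfer_cfg)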
Hence, the correct answer is: Enable Transfer Acceleration in the destination bucket and upload the collected data using Multipart Upload. The option that says: Upload the data to the closest S3 bucket. Set up a cross-region replication and copy the objects to the destination bucket is incorrect because replicating the objects to the destination bucket takes about 15 minutes. Take note that the requirement in the scenario is to aggregate the data in the fastest way. The option that says: Use AWS Snowball Edge to transfer large amounts of data is incorrect because the end-to-end time to transfer up to 80 TB of data into AWS Snowball Edge is approximately one week. The option that says: Set up a Site-to-Site VPN connection is incorrect because setting up a VPN connection is not needed in this scenario. Site-to-Site VPN is just used for establishing secure connections between an on-premises network and Amazon VPC. Also, this approach is not the fastest way to transfer your data. You must use Amazon S3 Transfer Acceleration. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A company plans to build a data analytics application in AWS which will be deployed in an Auto Scaling group of On-Demand EC2 instances and a MongoDB database. It is expected that the database will have high-throughput workloads performing small, random I/O operations. As the Solutions Architect, you are required to properly set up and launch the required resources in AWS. Which of the following is the most suitable EBS type to use for your database?", "options": [ "A. General Purpose SSD (gp2)", "B. Cold HDD (sc1)", "C. Throughput Optimized HDD (st1)", "D. Provisioned IOPS SSD (io1)" ], "correct": "D. Provisioned IOPS SSD (io1)", "explanation": "Explanation On a given volume configuration, certain I/O characteristics drive the performance behavior for your EBS volumes. SSD-backed volumes, such as General Purpose SSD (gp2) and Provisioned IOPS SSD (io1), deliver consistent performance whether an I/O operation is random or sequential. HDD-backed volumes like Throughput Optimized HDD (st1) and Cold HDD (sc1) deliver optimal performance only when I/O operations are large and sequential. In the exam, always consider the difference between SSD and HDD volumes. This will allow you to easily eliminate specific EBS types in the options which are not SSD or not HDD, depending on whether the question asks for a storage type that handles small, random I/O operations or large, sequential I/O operations. Provisioned IOPS SSD (io1) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. Unlike gp2, which uses a bucket and credit model to calculate performance, an io1 volume allows you to specify a consistent IOPS rate when you create the volume, and Amazon EBS delivers within 10 percent of the provisioned IOPS performance 99.9 percent of the time over a given year. General Purpose SSD (gp2) is incorrect because although General Purpose is a type of SSD that can handle small, random I/O operations, the Provisioned IOPS SSD volumes are much more suitable to meet the needs of I/O-intensive database workloads such as MongoDB, Oracle, MySQL, and many others.
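For reference, provisioning an io1 volume with an explicit IOPS rate looks like the sketch below; the Availability Zone, size, and IOPS figures are assumptions, not values given in the scenario.

import boto3

ec2 = boto3.client("ec2")

# Provisioned IOPS SSD: you declare the IOPS rate the volume must sustain.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # placeholder AZ
    Size=500,                        # GiB
    VolumeType="io1",
    Iops=10000,                      # consistent IOPS for the MongoDB workload
    TagSpecifications=[{
        "ResourceType": "volume",
        "Tags": [{"Key": "Name", "Value": "mongodb-data"}],
    }],
)
print(volume["VolumeId"])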
Throughput Optimized HDD (st1) and Cold HDD (sc1) are incorrect because HDD volumes (such as Throughput Optimized HDD and Cold HDD volumes) are more suitable for workloads with large, sequential I/O operations instead of small, random I/O operations. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#EBSVolumeTypes_piops https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html Amazon EBS Overview - SSD vs HDD: https://www.youtube.com/watch?v=LW7x8wyLFvw Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", "references": "" }, { "question": ": A global IT company with offices around the world has multiple AWS accounts. To improve efficiency and drive costs down, the Chief Information Officer (CIO) wants to set up a solution that centrally manages their AWS resources. This will allow them to procure AWS resources centrally and share resources such as AWS Transit Gateways, AWS License Manager configurations, or Amazon Route 53 Resolver rules across their various accounts. As the Solutions Architect, which combination of options should you implement in this scenario? (Select TWO.)", "options": [ "A. Use the AWS Identity and Access Management service to set up cross-account access that", "B. Consolidate all of the company's accounts using AWS ParallelCluster.", "C. Use the AWS Resource Access Manager (RAM) service to easily and securely share your", "D. Use AWS Control Tower to easily and securely share your resources with your AWS" ], "correct": "", "explanation": "Explanation AWS Resource Access Manager (RAM) is a service that enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization. You can share AWS Transit Gateways, Subnets, AWS License Manager configurations, and Amazon Route 53 Resolver rules resources with RAM. Many organizations use multiple accounts to create administrative or billing isolation, and limit the impact of errors. RAM eliminates the need to create duplicate resources in multiple accounts, reducing the operational overhead of managing those resources in every single account you own. You can create resources centrally in a multi-account environment, and use RAM to share those resources across accounts in three simple steps: create a Resource Share, specify resources, and specify accounts. RAM is available to you at no additional charge. You can procure AWS resources centrally, and use RAM to share resources such as subnets or License Manager configurations with other accounts. This eliminates the need to provision duplicate resources in every account in a multi-account environment, reducing the operational overhead of managing those resources in every account. AWS Organizations is an account management service that lets you consolidate multiple AWS accounts into an organization that you create and centrally manage. With Organizations, you can create member accounts and invite existing accounts to join your organization. You can organize those accounts into groups and attach policy-based controls. Hence, the correct combination of options in this scenario is: - Consolidate all of the company's accounts using AWS Organizations. - Use the AWS Resource Access Manager (RAM) service to easily and securely share your resources with your AWS accounts.
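A minimal boto3 sketch of the sharing half of the answer: publishing a resource (here, a Transit Gateway ARN) to other accounts in the organization through AWS RAM. The share name, ARN, and account IDs are placeholders.

import boto3

ram = boto3.client("ram")

# Share a Transit Gateway with two member accounts of the organization.
share = ram.create_resource_share(
    name="shared-network-resources",       # placeholder share name
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:transit-gateway/tgw-0abc1234def567890",  # placeholder ARN
    ],
    principals=["222222222222", "333333333333"],   # placeholder member account IDs
    allowExternalPrincipals=False,                 # restrict sharing to the organization
)
print(share["resourceShare"]["resourceShareArn"])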
The option that says: Use the AWS Identity and Access Management service to set up cross-account access that will easily and securely share your resources with your AWS accounts is incorrect because although you can delegate access to resources that are in different AWS accounts using IAM, this process is extremely tedious and entails a lot of operational overhead since you have to manually set up cross-account access to each and every AWS account of the company. A better solution is to use AWS Resource Access Manager instead. The option that says: Use AWS Control Tower to easily and securely share your resources with your AWS accounts is incorrect because AWS Control Tower simply offers the easiest way to set up and govern a new, secure, multi-account AWS environment. It is not the most suitable service to use to securely share your resources across AWS accounts or within your Organization. You have to use AWS Resource Access Manager (RAM) instead. The option that says: Consolidate all of the company's accounts using AWS ParallelCluster is incorrect because AWS ParallelCluster is simply an AWS-supported open-source cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS. In this particular scenario, it is more appropriate to use AWS Organizations to consolidate all of your AWS accounts. References: https://aws.amazon.com/ram/ https://docs.aws.amazon.com/ram/latest/userguide/shareable.html", "references": "" }, { "question": ": A tech company that you are working for has undertaken a Total Cost of Ownership (TCO) analysis evaluating the use of Amazon S3 versus acquiring more storage hardware. The result was that all 1200 employees would be granted access to use Amazon S3 for the storage of their personal documents. Which of the following will you need to consider so you can set up a solution that incorporates a single sign-on feature from your corporate AD or LDAP directory and also restricts access for each individual user to a designated user folder in an S3 bucket? (Select TWO.)", "options": [ "A. Set up a matching IAM user for each of the 1200 users in your corporate directory that", "B. Configure an IAM role and an IAM Policy to access the bucket.", "C. Use 3rd party Single Sign-On solutions such as Atlassian Crowd, OKTA, OneLogin and", "D. Map each individual user to a designated user folder in S3 using Amazon WorkDocs to" ], "correct": "", "explanation": "Explanation The question refers to one of the common scenarios for temporary credentials in AWS. Temporary credentials are useful in scenarios that involve identity federation, delegation, cross-account access, and IAM roles. In this example, it is called enterprise identity federation, considering that you also need to set up a single sign-on (SSO) capability. The correct answers are: - Set up a Federation proxy or an Identity provider - Set up an AWS Security Token Service to generate temporary tokens - Configure an IAM role and an IAM Policy to access the bucket. In an enterprise identity federation, you can authenticate users in your organization's network, and then provide those users access to AWS without creating new AWS identities for them and requiring them to sign in with a separate user name and password. This is known as the single sign-on (SSO) approach to temporary access.
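To tie the pieces together, here is a hedged sketch of the STS call that exchanges a SAML assertion from the corporate identity provider for temporary credentials, which are then used to reach the user's own S3 folder. The role and provider ARNs, bucket, and prefix are placeholders, and obtaining the base64-encoded SAML assertion from the IdP (for example, AD FS) is outside the snippet.

import boto3

sts = boto3.client("sts")

saml_assertion = "<base64-encoded SAML response from the corporate IdP>"  # placeholder

creds = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111111111111:role/S3PersonalDocs",          # placeholder role
    PrincipalArn="arn:aws:iam::111111111111:saml-provider/CorpADFS",  # placeholder IdP
    SAMLAssertion=saml_assertion,
    DurationSeconds=3600,
)["Credentials"]

# Temporary credentials, scoped by the role's IAM policy to the user's folder only.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.list_objects_v2(Bucket="corp-personal-docs", Prefix="home/jdoe/")   # placeholder bucket/prefix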
AWS STS supports open standards like Security Assertion Markup Language (SAML) 2.0, with which you can use Microsoft AD FS to leverage your Microsoft Active Directory. You can also use SAML 2.0 to manage your own solution for federating user identities. Using 3rd party Single Sign-On solutions such as Atlassian Crowd, OKTA, OneLogin and many others is incorrect since you don't have to use 3rd party solutions to provide the access. AWS already provides the necessary tools that you can use in this situation. Mapping each individual user to a designated user folder in S3 using Amazon WorkDocs to access their personal documents is incorrect as there is no direct way of integrating Amazon S3 with Amazon WorkDocs for this particular scenario. Amazon WorkDocs is simply a fully managed, secure content creation, storage, and collaboration service. With Amazon WorkDocs, you can easily create, edit, and share content. And because it's stored centrally on AWS, you can access it from anywhere on any device. Setting up a matching IAM user for each of the 1200 users in your corporate directory that needs access to a folder in the S3 bucket is incorrect since creating that many IAM users would be unnecessary. Also, you want the account to integrate with your AD or LDAP directory; hence, IAM users do not fit these criteria. References: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html https://aws.amazon.com/premiumsupport/knowledge-center/iam-s3-user-specific-folder/ AWS Identity Services Overview: https://youtu.be/AIdUw0i8rr0 Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-management-iam/", "references": "" }, { "question": ": There are a lot of outages in the Availability Zone of your RDS database instance, to the point that you have lost access to the database. What could you do to prevent losing access to your database in case this event happens again?", "options": [ "A. Make a snapshot of the database", "B. Increase the database instance size", "C. Create a read replica", "D. Enabled Multi-AZ failover" ], "correct": "D. Enabled Multi-AZ failover", "explanation": "Explanation Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. For this scenario, enabling Multi-AZ failover is the correct answer. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Making a snapshot of the database allows you to have a backup of your database, but it does not provide immediate availability in case of an AZ failure, so this is incorrect. Increasing the database instance size is not a solution for this problem. Doing this action addresses the need to upgrade your compute capacity but does not solve the requirement of providing access to your database even in the event of a loss of one of the Availability Zones.
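For reference, converting an existing instance to a Multi-AZ deployment is a single API call; a hedged boto3 sketch follows, where the instance identifier is a placeholder and ApplyImmediately can be omitted to defer the change to the next maintenance window.

import boto3

rds = boto3.client("rds")

# Convert the existing single-AZ instance into a Multi-AZ deployment.
# RDS provisions a synchronous standby in another AZ and fails over to it automatically.
rds.modify_db_instance(
    DBInstanceIdentifier="production-db",   # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=True,
)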
Creating a read replica is incorrect because this simply provides enhanced performance for read-heavy database workloads. Although you can promote a read replica, its asynchronous replication might not provide you with the latest version of your database.", "references": "https://aws.amazon.com/rds/details/multi-az/ Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate-saa-c02/" }, { "question": ": A cryptocurrency trading platform is using an API built in AWS Lambda and API Gateway. Due to the recent news and rumors about the upcoming price surge of Bitcoin, Ethereum and other cryptocurrencies, it is expected that the trading platform would have a significant increase in site visitors and new users in the coming days. In this scenario, how can you protect the backend systems of the platform from traffic spikes?", "options": [ "A. Move the Lambda function in a VPC.", "B. Enable throttling limits and result caching in API Gateway.", "C. Use CloudFront in front of the API Gateway to act as a cache.", "D. Switch from using AWS Lambda and API Gateway to a more scalable and highly available" ], "correct": "B. Enable throttling limits and result caching in API Gateway.", "explanation": "Explanation Amazon API Gateway provides throttling at multiple levels, including globally and per service call. Throttling limits can be set for standard rates and bursts. For example, API owners can set a rate limit of 1,000 requests per second for a specific method in their REST APIs, and also configure Amazon API Gateway to handle a burst of 2,000 requests per second for a few seconds. Amazon API Gateway tracks the number of requests per second. Any request over the limit will receive a 429 HTTP response. The client SDKs generated by Amazon API Gateway retry calls automatically when met with this response. Hence, enabling throttling limits and result caching in API Gateway is the correct answer. You can add caching to API calls by provisioning an Amazon API Gateway cache and specifying its size in gigabytes. The cache is provisioned for a specific stage of your APIs. This improves performance and reduces the traffic sent to your back end. Cache settings allow you to control the way the cache key is built and the time-to-live (TTL) of the data stored for each method. Amazon API Gateway also exposes management APIs that help you invalidate the cache for each stage. The option that says: Switch from using AWS Lambda and API Gateway to a more scalable and highly available architecture using EC2 instances, ELB, and Auto Scaling is incorrect since there is no need to transfer your applications to other services. Using CloudFront in front of the API Gateway to act as a cache is incorrect because CloudFront only speeds up content delivery, which provides a better latency experience for your users. It does not help much with the backend. Moving the Lambda function in a VPC is incorrect because this answer is irrelevant to what is being asked.
A VPC is simply your own virtual private cloud where you can launch AWS services.", "references": "https://aws.amazon.com/api-gateway/faqs/ Check out this Amazon API Gateway Cheat Sheet: https://tutorialsdojo.com/amazon-api-gateway/ Here is an in-depth tutorial on Amazon API Gateway: https://youtu.be/XwfpPEFHKtQ" }, { "question": ": A content management system (CMS) is hosted on a fleet of auto-scaled, On-Demand EC2 instances that use Amazon Aurora as its database. Currently, the system stores the file documents that the users upload in one of the attached EBS Volumes. Your manager noticed that the system performance is quite slow and he has instructed you to improve the architecture of the system. In this scenario, what will you do to implement a scalable, highly available, POSIX-compliant shared file system?", "options": [ "A. Create an S3 bucket and use this as the storage for the CMS", "B. Upgrading your existing EBS volumes to Provisioned IOPS SSD Volumes", "C. Use ElastiCache", "D. Use EFS" ], "correct": "D. Use EFS", "explanation": "Explanation Amazon Elastic File System (Amazon EFS) provides simple, scalable, elastic file storage for use with AWS Cloud services and on-premises resources. When mounted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system interface and file system access semantics, allowing you to seamlessly integrate Amazon EFS with your existing applications and tools. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, allowing Amazon EFS to provide a common data source for workloads and applications running on more than one Amazon EC2 instance. This particular scenario tests your understanding of EBS, EFS, and S3. In this scenario, there is a fleet of On-Demand EC2 instances that store file documents from the users in one of the attached EBS Volumes. The system performance is quite slow because the architecture doesn't provide the EC2 instances parallel shared access to the file documents. Although an EBS Volume can be attached to multiple EC2 instances, you can only do so on instances within an availability zone. What we need is highly available storage that can span multiple availability zones. Take note as well that the type of storage needed here is \"file storage\", which means that S3 is not the best service to use because it is mainly used for \"object storage\", and S3 does not provide the notion of \"folders\" either. This is why using EFS is the correct answer. Upgrading your existing EBS volumes to Provisioned IOPS SSD Volumes is incorrect because an EBS volume is storage area network (SAN) storage and not a POSIX-compliant shared file system. You have to use EFS instead. Using ElastiCache is incorrect because this is an in-memory data store that improves the performance of your applications, which is not what you need since it is not file storage.", "references": "https://aws.amazon.com/efs/ Check out this Amazon EFS Cheat Sheet: https://tutorialsdojo.com/amazon-efs/ Check out this Amazon S3 vs EBS vs EFS Cheat Sheet: https://tutorialsdojo.com/amazon-s3-vs-ebs-vs-efs/" }, { "question": ": A company has a hybrid cloud architecture that connects their on-premises data center and cloud infrastructure in AWS. They require a durable storage backup for their corporate documents stored on-premises and a local cache that provides low latency access to their recently accessed data to reduce data egress charges.
The documents must be stored to and retrieved from AWS via the Server Message Block (SMB) protocol. These files must be immediately accessible within minutes for six months and archived for another decade to meet the data compliance requirements. Which of the following is the best and most cost-effective approach to implement in this scenario?", "options": [ "A. Launch a new file gateway that connects to your on-premises data center using AWS", "B. Use AWS Snowmobile to migrate all of the files from the on-premises network. Upload the", "C. Establish a Direct Connect connection to integrate your on-premises network to your VPC.", "D. Launch a new tape gateway that connects to your on-premises data center using AWS" ], "correct": "A. Launch a new file gateway that connects to your on-premises data center using AWS", "explanation": "Explanation A file gateway supports a file interface into Amazon Simple Storage Service (Amazon S3) and combines a service and a virtual software appliance. By using this combination, you can store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB). The software appliance, or gateway, is deployed into your on-premises environment as a virtual machine (VM) running on a VMware ESXi, Microsoft Hyper-V, or Linux Kernel-based Virtual Machine (KVM) hypervisor. The gateway provides access to objects in S3 as files or file share mount points. With a file gateway, you can do the following: - You can store and retrieve files directly using the NFS version 3 or 4.1 protocol. - You can store and retrieve files directly using the SMB file system version 2 and 3 protocol. - You can access your data directly in Amazon S3 from any AWS Cloud application or service. - You can manage your Amazon S3 data using lifecycle policies, cross-region replication, and versioning. You can think of a file gateway as a file system mount on S3. AWS Storage Gateway supports the Amazon S3 Standard, Amazon S3 Standard-Infrequent Access, Amazon S3 One Zone-Infrequent Access and Amazon Glacier storage classes. When you create or update a file share, you have the option to select a storage class for your objects. You can either choose the Amazon S3 Standard or any of the infrequent access storage classes such as S3 Standard-IA or S3 One Zone-IA. Objects stored in any of these storage classes can be transitioned to Amazon Glacier using a Lifecycle Policy. Although you can write objects directly from a file share to the S3-Standard-IA or S3-One Zone-IA storage class, it is recommended that you use a Lifecycle Policy to transition your objects rather than write directly from the file share, especially if you're expecting to update or delete the object within 30 days of archiving it. Therefore, the correct answer is: Launch a new file gateway that connects to your on-premises data center using AWS Storage Gateway. Upload the documents to the file gateway and set up a lifecycle policy to move the data into Glacier for data archival. The option that says: Launch a new tape gateway that connects to your on-premises data center using AWS Storage Gateway. Upload the documents to the tape gateway and set up a lifecycle policy to move the data into Glacier for archival is incorrect because although tape gateways provide cost-effective and durable archive backup data in Amazon Glacier, it does not meet the criteria of being retrievable immediately within minutes.
It also doesn't maintain a local cache that provides low latency access to the recently accessed data and reduce data egress charges. Thus, it is still better to set up a file gateway instead. The option that says: Establish a Direct Connect connection to integrate your on-premises network to your VPC. Upload the documents on Amazon EBS Volumes and use a lifecycle policy to automatically move the EBS snapshots to an S3 bucket, and then later to Glacier for archival is incorrect because EBS Volumes are not as durable compared with S3 and it would be more cost-efficient if you directly store the documents to an S3 bucket. An alternative solution is to use AWS Direct Connect with AWS Storage Gateway to create a connection for high-throughput workload needs, providing a dedicated network connection between your on-premises file gateway and AWS. But this solution is using EBS, hence, this option is still wrong. The option that says: Use AWS Snowmobile to migrate all of the files from the on-premises network. Upload the documents to an S3 bucket and set up a lifecycle policy to move the data into Glacier for archival is incorrect because Snowmobile is mainly used to migrate the entire data of an on-premises data center to AWS. This is not a suitable approach as the company still has a hybrid cloud architecture which means that they will still use their on-premises data center along with their AWS cloud infrastructure. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html https://docs.aws.amazon.com/storagegateway/latest/userguide/StorageGatewayConcepts.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate-saa-c02/", "references": "" }, { "question": ": A web application is using CloudFront to distribute their images, videos, and other static contents stored in their S3 bucket to its users around the world. The company has recently introduced a new member-only access to some of its high quality media files. There is a requirement to provide access to multiple private media files only to their paying subscribers without having to change their current URLs. Which of the following is the most suitable solution that you should implement to satisfy this requirement?", "options": [ "A. Configure your CloudFront distribution to use Match Viewer as its Origin Protocol Policy", "B. Create a Signed URL with a custom policy which only allows the members to see the private", "C. Configure your CloudFront distribution to use Field-Level Encryption to protect your", "D. Use Signed Cookies to control who can access the private files in your CloudFront" ], "correct": "D. Use Signed Cookies to control who can access the private files in your CloudFront", "explanation": "Explanation CloudFront signed URLs and signed cookies provide the same basic functionality: they allow you to control who can access your content. If you want to serve private content through CloudFront and you're trying to decide whether to use signed URLs or signed cookies, consider the following: Use signed URLs for the following cases: - You want to use an RTMP distribution. Signed cookies aren't supported for RTMP distributions. - You want to restrict access to individual files, for example, an installation download for your application.
- Your users are using a client (for example, a custom HTTP client) that doesn't support cookies. Use signed cookies for the following cases: - You want to provide access to multiple restricted files, for example, all of the files for a video in HLS format or all of the files in the subscribers' area of a website. - You don't want to change your current URLs. Hence, the correct answer for this scenario is the option that says: Use Signed Cookies to control who can access the private files in your CloudFront distribution by modifying your application to determine whether a user should have access to your content. For members, send the required Set-Cookie headers to the viewer, which will unlock the content only to them. The option that says: Configure your CloudFront distribution to use Match Viewer as its Origin Protocol Policy which will automatically match the user request. This will allow access to the private content if the request is a paying member and deny it if it is not a member is incorrect because Match Viewer is an Origin Protocol Policy which configures CloudFront to communicate with your origin using HTTP or HTTPS, depending on the protocol of the viewer request. CloudFront caches the object only once even if viewers make requests using both HTTP and HTTPS protocols. The option that says: Create a Signed URL with a custom policy which only allows the members to see the private files is incorrect because Signed URLs are primarily used for providing access to individual files, as shown in the explanation above. In addition, the scenario explicitly says that they don't want to change their current URLs, which is why implementing Signed Cookies is more suitable than Signed URLs. The option that says: Configure your CloudFront distribution to use Field-Level Encryption to protect your private data and only allow access to members is incorrect because Field-Level Encryption only allows you to securely upload user-submitted sensitive information to your web servers. It does not provide access to download multiple private files.", "references": "https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-choosing-signed-urls-cookies.html https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-cookies.html Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/" }, { "question": ": A company is hosting its web application in an Auto Scaling group of EC2 instances behind an Application Load Balancer. Recently, the Solutions Architect identified a series of SQL injection attempts and cross-site scripting attacks to the application, which had adversely affected their production data. Which of the following should the Architect implement to mitigate this kind of attack?", "options": [ "A. Using AWS Firewall Manager, set up security rules that block SQL injection and cross-site", "B. Use Amazon GuardDuty to prevent any further SQL injection and cross-site scripting", "C. Set up security rules that block SQL injection and cross-site scripting attacks in AWS Web", "D. Block all the IP addresses where the SQL injection and cross-site scripting attacks" ], "correct": "C.
Set up security rules that block SQL injection and cross-site scripting attacks in AWS Web", "explanation": "Explanation AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to an Amazon API Gateway API, Amazon CloudFront or an Application Load Balancer. AWS WAF also lets you control access to your content. Based on conditions that you specify, such as the IP addresses that requests originate from or the values of query strings, API Gateway, CloudFront or an Application Load Balancer responds to requests either with the requested content or with an HTTP 403 status code (Forbidden). You also can configure CloudFront to return a custom error page when a request is blocked. At the simplest level, AWS WAF lets you choose one of the following behaviors: Allow all requests except the ones that you specify - This is useful when you want CloudFront or an Application Load Balancer to serve content for a public website, but you also want to block requests from attackers. Block all requests except the ones that you specify - This is useful when you want to serve content for a restricted website whose users are readily identifiable by properties in web requests, such as the IP addresses that they use to browse to the website. Count the requests that match the properties that you specify - When you want to allow or block requests based on new properties in web requests, you first can configure AWS WAF to count the requests that match those properties without allowing or blocking those requests. This lets you confirm that you didn't accidentally configure AWS WAF to block all the traffic to your website. When you're confident that you specified the correct properties, you can change the behavior to allow or block requests. Hence, the correct answer in this scenario is: Set up security rules that block SQL injection and cross-site scripting attacks in AWS Web Application Firewall (WAF). Associate the rules to the Application Load Balancer. Using Amazon GuardDuty to prevent any further SQL injection and cross-site scripting attacks in your application is incorrect because Amazon GuardDuty is just a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. Using AWS Firewall Manager to set up security rules that block SQL injection and cross-site scripting attacks, then associating the rules to the Application Load Balancer is incorrect because AWS Firewall Manager just simplifies your AWS WAF and AWS Shield Advanced administration and maintenance tasks across multiple accounts and resources. Blocking all the IP addresses where the SQL injection and cross-site scripting attacks originated using the Network Access Control List is incorrect because this is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. NACLs are not effective in blocking SQL injection and cross-site scripting attacks. References: https://aws.amazon.com/waf/ https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html Check out this AWS WAF Cheat Sheet: https://tutorialsdojo.com/aws-waf/ AWS Security Services Overview - WAF, Shield, CloudHSM, KMS: https://www.youtube.com/watch?v=-1S-RdeAmMo", "references": "" }, { "question": ": An insurance company utilizes SAP HANA for its day-to-day ERP operations.
Since they can't migrate this database due to customer preferences, they need to integrate it with the current AWS workload in the VPC, for which they are required to establish a site-to-site VPN connection. What needs to be configured outside of the VPC for them to have a successful site-to-site VPN connection?", "options": [ "A. An EIP to the Virtual Private Gateway", "B. The main route table in your VPC to route traffic through a NAT instance", "C. A dedicated NAT instance in a public subnet", "D. An Internet-routable IP address (static) of the customer gateway's external interface for the" ], "correct": "D. An Internet-routable IP address (static) of the customer gateway's external interface for the", "explanation": "Explanation By default, instances that you launch into a virtual private cloud (VPC) can't communicate with your own network. You can enable access to your network from your VPC by attaching a virtual private gateway to the VPC, creating a custom route table, updating your security group rules, and creating an AWS managed VPN connection. Although the term VPN connection is a general term, in the Amazon VPC documentation, a VPN connection refers to the connection between your VPC and your own network. AWS supports Internet Protocol security (IPsec) VPN connections. A customer gateway is a physical device or software application on your side of the VPN connection. To create a VPN connection, you must create a customer gateway resource in AWS, which provides information to AWS about your customer gateway device. Next, you have to set up an Internet-routable IP address (static) of the customer gateway's external interface. In a single VPN connection, the VPC has an attached virtual private gateway, and your remote network includes a customer gateway, which you must configure to enable the VPN connection. You set up the routing so that any traffic from the VPC bound for your network is routed to the virtual private gateway. The options that say: A dedicated NAT instance in a public subnet and The main route table in your VPC to route traffic through a NAT instance are incorrect since you don't need a NAT instance for you to be able to create a VPN connection. An EIP to the Virtual Private Gateway is incorrect since you do not attach an EIP to a VPG. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html https://docs.aws.amazon.com/vpc/latest/userguide/SetUpVPNConnections.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A company has a data analytics application that updates a real-time, foreign exchange dashboard and another separate application that archives data to Amazon Redshift. Both applications are configured to consume data from the same stream concurrently and independently by using Amazon Kinesis Data Streams. However, they noticed that there are a lot of occurrences where a shard iterator expires unexpectedly. Upon checking, they found out that the DynamoDB table used by Kinesis does not have enough capacity to store the lease data. Which of the following is the most suitable solution to rectify this issue?", "options": [ "A. Use Amazon Kinesis Data Analytics to properly support the data analytics application", "B. Upgrade the storage capacity of the DynamoDB table.", "C. Increase the write capacity assigned to the shard table.", "D. Enable In-Memory Acceleration with DynamoDB Accelerator (DAX)." ], "correct": "C.
Increase the write capacity assigned to the shard table.", "explanation": "Explanation A new shard iterator is returned by every GetRecords request (as NextShardIterator), which you then use in the next GetRecords request (as ShardIterator). Typically, this shard iterator does not expire before you use it. However, you may find that shard iterators expire because you have not called GetRecords for more than 5 minutes, or because you've performed a restart of your consumer application. If the shard iterator expires immediately before you can use it, this might indicate that the DynamoDB table used by Kinesis does not have enough capacity to store the lease data. This situation is more likely to happen if you have a large number of shards. To solve this problem, increase the write capacity assigned to the shard table. Hence, increasing the write capacity assigned to the shard table is the correct answer. Upgrading the storage capacity of the DynamoDB table is incorrect because DynamoDB is a fully managed service which automatically scales its storage, without you setting it up manually. The scenario refers to the write capacity of the shard table, as it says that the DynamoDB table used by Kinesis does not have enough capacity to store the lease data. Enabling In-Memory Acceleration with DynamoDB Accelerator (DAX) is incorrect because the DAX feature is primarily used to improve the read performance of your DynamoDB table from millisecond response times to microseconds. It does not have any relationship with Amazon Kinesis Data Streams in this scenario. Using Amazon Kinesis Data Analytics to properly support the data analytics application instead of Kinesis Data Streams is incorrect. Although Amazon Kinesis Data Analytics can support a data analytics application, it is still not a suitable solution for this issue. You simply need to increase the write capacity assigned to the shard table in order to rectify the problem, which is why switching to Amazon Kinesis Data Analytics is not necessary.", "references": "https://docs.aws.amazon.com/streams/latest/dev/kinesis-record-processor-ddb.html https://docs.aws.amazon.com/streams/latest/dev/troubleshooting-consumers.html Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/" }, { "question": ": A web application, which is used by your clients around the world, is hosted in an Auto Scaling group of EC2 instances behind a Classic Load Balancer. You need to secure your application by allowing multiple domains to serve SSL traffic over the same IP address. Which of the following should you do to meet the above requirement?", "options": [ "A. Use an Elastic IP and upload multiple 3rd party certificates in your Classic Load Balancer", "B. Use Server Name Indication (SNI) on your Classic Load Balancer by adding multiple SSL", "C. Generate an SSL certificate with AWS Certificate Manager and create a CloudFront web" ], "correct": "C. Generate an SSL certificate with AWS Certificate Manager and create a CloudFront web", "explanation": "Explanation Amazon CloudFront delivers your content from each edge location and offers the same security as the Dedicated IP Custom SSL feature. SNI Custom SSL works with most modern browsers, including Chrome version 6 and later (running on Windows XP and later or OS X 10.5.7 and later), Safari version 3 and later (running on Windows Vista and later or Mac OS X 10.5.6
and later), Firefox 2.0 and later, and Interne t Explorer 7 and later (running on Windows Vista and later). Some users may not be able to access your content b ecause some older browsers do not support SNI and will not be able to establish a connection with Clo udFront to load the HTTPS version of your content. If you need to support non-SNI compliant browsers for HTTPS content, it is recommended to use the Dedicated IP Custom SSL feature. Using Server Name Indication (SNI) on your Classic Load Balancer by adding multiple SSL certificates to allow multiple domains to serve SSL traffic is incorrect because a Classic Load Balanc er does not support Server Name Indication (SNI). You have to use an Application Load Balancer instead or a CloudFront web distribution to allow the SNI featur e. Using an Elastic IP and uploading multiple 3rd part y certificates in your Application Load Balancer using the AWS Certificate Manager is incorrect beca use just like in the above, a Classic Load Balancer does not support Server Name Indication (SNI) and t he use of an Elastic IP is not a suitable solution to allow multiple domains to serve SSL traffic. You ha ve to use Server Name Indication (SNI). The option that says: It is not possible to allow m ultiple domains to serve SSL traffic over the same IP address in AWS is incorrect because AWS does suppor t the use of Server Name Indication (SNI). References: https://aws.amazon.com/about-aws/whats-new/2014/03/ 05/amazon-cloudront-announces-sni-custom-ssl/ https://aws.amazon.com/blogs/security/how-to-help-a chieve-mobile-app-transport-security-compliance-by- using-amazon-cloudfront-and-aws-certificate-manager / Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/ SNI Custom SSL vs Dedicated IP Custom SSL: https://tutorialsdojo.com/sni-custom-ssl-vs-dedicat ed-ip-custom-ssl/ AWS Security Services Overview - Secrets Manager, A CM, Macie: https://www.youtube.com/watch?v=ogVamzF2Dzk", "references": "" }, { "question": ": A company has two On-Demand EC2 instances inside th e Virtual Private Cloud in the same Availability Zone but are deployed to different subnets. One EC2 instance is running a database and the other EC2 instance a web application that connects with the d atabase. You need to ensure that these two instance s can communicate with each other for the system to work properly. What are the things you have to check so that these EC2 instances can communicate inside the VPC? (Select TWO.)", "options": [ "A. Ensure that the EC2 instances are in the same Placement Group.", "B. Check if all security groups are set to allow the application host to communicate to the", "C. Check if both instances are the same instance class.", "D. Check if the default route is set to a NAT ins tance or Internet Gateway (IGW) for them to" ], "correct": "", "explanation": "Explanation First, the Network ACL should be properly set to al low communication between the two subnets. The security group should also be properly configured s o that your web server can communicate with the database server. Hence, these are the correct answers: Check if all security groups are set to allow the a pplication host to communicate to the database on the right port and protocol. Check the Network ACL if it allows communication be tween the two subnets. The option that says: Check if both instances are t he same instance class is incorrect because the EC2 instances do not need to be of the same class in or der to communicate with each other. 
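To make the security group check described above concrete, here is a minimal sketch in Python (boto3); the security group IDs are hypothetical placeholders and MySQL on port 3306 is assumed, not taken from the scenario:

import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs; replace with the actual security group IDs in your VPC.
DB_SG_ID = "sg-0db0000000000000a"   # security group attached to the database instance
WEB_SG_ID = "sg-0web000000000000b"  # security group attached to the web application instance

# Allow the web tier to reach the database tier on TCP 3306 (MySQL is assumed here).
ec2.authorize_security_group_ingress(
    GroupId=DB_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": WEB_SG_ID}],
    }],
)

Referencing the web tier's security group (instead of an IP range) keeps the rule valid even when instances are replaced and their private IP addresses change.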
The option that says: Check if the default route is set to a NAT instance or Internet Gateway (IGW) for them to communicate is incorrect because an Int ernet gateway is primarily used to communicate to t he Internet. The option that says: Ensure that the EC2 instances are in the same Placement Group is incorrect because Placement Group is mainly used to provide l ow-latency network performance necessary for tightly-coupled node-to-node communication.", "references": "http://docs.aws.amazon.com/AmazonVPC/latest/UserGui de/VPC_Subnets.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/" }, { "question": ": As part of the Business Continuity Plan of your com pany, your IT Director instructed you to set up an automated backup of all of the EBS Volumes for your EC2 instances as soon as possible. What is the fastest and most cost-effective solutio n to automatically back up all of your EBS Volumes?", "options": [ "A. Use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation of EBS", "B. Set your Amazon Storage Gateway with EBS volum es as the data source and store the", "C. Use an EBS-cycle policy in Amazon S3 to automa tically back up the EBS volumes.", "D. For an automated solution, create a scheduled job that calls the \"create-snapshot\"" ], "correct": "A. Use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation of EBS", "explanation": "Explanation You can use Amazon Data Lifecycle Manager (Amazon D LM) to automate the creation, retention, and deletion of snapshots taken to back up your Amazon EBS volumes. Automating snapshot management helps you to: - Protect valuable data by enforcing a regular back up schedule. - Retain backups as required by auditors or interna l compliance. - Reduce storage costs by deleting outdated backups . Combined with the monitoring features of Amazon Clo udWatch Events and AWS CloudTrail, Amazon DLM provides a complete backup solution for EBS vol umes at no additional cost. Hence, using Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation of EBS snapshots is the correct answer as it is the fastes t and most cost-effective solution that provides an automated way of backing up your EBS volumes. The option that says: For an automated solution, cr eate a scheduled job that calls the \"create- snapshot\" command via the AWS CLI to take a snapsho t of production EBS volumes periodically is incorrect because even though this is a valid solut ion, you would still need additional time to create a scheduled job that calls the \"create-snapshot\" comm and. It would be better to use Amazon Data Lifecycl e Manager (Amazon DLM) instead as this provides you t he fastest solution which enables you to automate the creation, retention, and deletion of the EBS sn apshots without having to write custom shell script s or creating scheduled jobs. Setting your Amazon Storage Gateway with EBS volume s as the data source and storing the backups in your on-premises servers through the storage gat eway is incorrect as the Amazon Storage Gateway is used only for creating a backup of data from your o n-premises server and not from the Amazon Virtual Private Cloud. Using an EBS-cycle policy in Amazon S3 to automatic ally back up the EBS volumes is incorrect as there is no such thing as EBS-cycle policy in Amazo n S3. 
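As a hedged illustration of the recommended approach, the following Python (boto3) sketch creates an Amazon DLM lifecycle policy; the IAM role ARN, tag key/value, schedule, and retention count are assumptions for illustration, not values from the scenario:

import boto3

dlm = boto3.client("dlm")

# Assumed IAM role that grants DLM permission to manage EBS snapshots.
role_arn = "arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole"

dlm.create_lifecycle_policy(
    ExecutionRoleArn=role_arn,
    Description="Daily EBS snapshots for tagged volumes",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        # Only volumes carrying this (assumed) tag are backed up.
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [{
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},  # keep the last 7 snapshots per volume
        }],
    },
)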
References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /snapshot-lifecycle.html http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ ebs-creating-snapshot.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/ Amazon EBS Overview - SSD vs HDD: https://www.youtube.com/watch?v=LW7x8wyLFvw&t=8s", "references": "" }, { "question": ": A website that consists of HTML, CSS, and other cli ent-side Javascript will be hosted on the AWS environment. Several high-resolution images will be displayed on the webpage. The website and the phot os should have the optimal loading response times as p ossible, and should also be able to scale to high r equest rates. Which of the following architectures can provide th e most cost-effective and fastest loading experienc e?", "options": [ "A. Launch an Auto Scaling Group using an AMI that has a pre-configured Apache web server,", "B. Create a Nginx web server in an Amazon LightSa il instance to host the HTML, CSS, and Javascript files then enable caching. Upload the im ages in an S3 bucket. Use CloudFront as a", "C. Upload the HTML, CSS, Javascript, and the imag es in a single bucket. Then enable website", "D. Create a Nginx web server in an EC2 instance t o host the HTML, CSS, and Javascript files" ], "correct": "C. Upload the HTML, CSS, Javascript, and the imag es in a single bucket. Then enable website", "explanation": "Explanation Amazon S3 is an object storage service that offers industry-leading scalability, data availability, se curity, and performance. Additionally, You can use Amazon S3 to host a static website. On a static website, individual webpages include static content. Amazon S3 is highly scalable and you only pay for what you use, you can start small and grow your application as yo u wish, with no compromise on performance or reliability. Amazon CloudFront is a fast content delivery networ k (CDN) service that securely delivers data, videos , applications, and APIs to customers globally with l ow latency, high transfer speeds. CloudFront can be integrated with Amazon S3 for fast delivery of data originating from an S3 bucket to your end-users. B y design, delivering data out of CloudFront can be more cost- effective than delivering it from S3 directly to yo ur users. The scenario given is about storing and hosting ima ges and a static website respectively. Since we are just dealing with static content, we can leverage the we b hosting feature of S3. Then we can improve the architecture further by integrating it with CloudFr ont. This way, users will be able to load both the web pages and images faster than if we are serving them from a standard webserver. Hence, the correct answer is: Upload the HTML, CSS, Javascript, and the images in a single bucket. Then enable website hosting. Create a CloudFront di stribution and point the domain on the S3 website endpoint. The option that says: Create an Nginx web server in an EC2 instance to host the HTML, CSS, and Javascript files then enable caching. Upload the im ages in a S3 bucket. Use CloudFront as a CDN to deliver the images closer to your end-users is inco rrect. Creating your own web server just to host a static website in AWS is a costly solution. Web Servers on an EC2 instance is usually used for hosting dynami c web applications. Since static websites contain web pag es with fixed content, we should use S3 website hos ting instead. 
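For reference, enabling static website hosting on the bucket is a one-call change; the sketch below (Python/boto3) uses an assumed bucket name and index/error document keys, and the resulting website endpoint can then be used as the CloudFront origin:

import boto3

s3 = boto3.client("s3")
bucket = "tutorialsdojo-static-site"  # assumed bucket name for illustration

# Turn on static website hosting for the bucket that holds the HTML, CSS,
# Javascript, and images; CloudFront can then point at the website endpoint.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)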
The option that says: Launch an Auto Scaling Group using an AMI that has a pre-configured Apache web server, then configure the scaling policy accor dingly. Store the images in an Elastic Block Store. Then, point your instance's endpoint to AWS Global Accelerator is incorrect. This is how we serve static websites in the old days. Now, with the help of S3 website hosting, we can host our static cont ents from a durable, high-availability, and highly scalable env ironment without managing any servers. Hosting stat ic websites in S3 is cheaper than hosting it in an EC2 instance. In addition, Using ASG for scaling insta nces that host a static website is an over-engineered solutio n that carries unnecessary costs. S3 automatically scales to high requests and you only pay for what you use. The option that says: Create an Nginx web server in an Amazon LightSail instance to host the HTML, CSS, and Javascript files then enable caching. Uplo ad the images in an S3 bucket. Use CloudFront as a CDN to deliver the images closer to your end-user s is incorrect because although LightSail is cheape r than EC2, creating your own LightSail web server for hos ting static websites is still a relatively expensiv e solution when compared to hosting it on S3. In addition, S3 automatically scales to high request rates. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/Web siteHosting.html https://aws.amazon.com/blogs/networking-and-content -delivery/amazon-s3-amazon-cloudfront-a-match- made-in-the-cloud/ Check out these Amazon S3 and CloudFront Cheat Shee ts: https://tutorialsdojo.com/amazon-s3/ https://tutorialsdojo.com/amazon-cloudfront/", "references": "" }, { "question": ": You have built a web application that checks for ne w items in an S3 bucket once every hour. If new items exist, a message is added to an SQS queue. Yo u have a fleet of EC2 instances which retrieve messages from the SQS queue, process the file, and finally, send you and the user an email confirmation that the item has been successfully pr ocessed. Your officemate uploaded one test file to the S3 bucket and after a couple of hours, you noti ced that you and your officemate have 50 emails from your application with the same message. Which of the following is most likely the root cause why the application has sent you and the user multi ple emails?", "options": [ "A. There is a bug in the application.", "B. By default, SQS automatically deletes the message s that were processed by the consumers. It", "D. Your application does not issue a delete command to the SQS queue after processing the message," ], "correct": "D. Your application does not issue a delete command to the SQS queue after processing the message,", "explanation": "Explanation In this scenario, the main culprit is that your app lication does not issue a delete command to the SQS queue after processing the message, which is why this mes sage went back to the queue and was processed multiple times. The option that says: The sqsSendEmailMessage attri bute of the SQS queue is configured to 50 is incorrect as there is no sqsSendEmailMessage attrib ute in SQS. The option that says: There is a bug in the applica tion is a valid answer but since the scenario did n ot mention that the EC2 instances deleted the processed messag es, the most likely cause of the problem is that th e application does not issue a delete command to the SQS queue as mentioned above. The option that says: By default, SQS automatically deletes the messages that were processed by the consumers. 
It might be possible that your officemat e has submitted the request 50 times which is why you received a lot of emails is incorrect as SQS do es not automatically delete the messages.", "references": "https://aws.amazon.com/sqs/faqs/ Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/" }, { "question": ": A Network Architect developed a food ordering appli cation. The Architect needs to retrieve the instanc e ID, public keys, and public IP address of the EC2 serve r made for tagging and grouping the attributes into the internal application running on-premises. Which of the following options fulfills this requir ement?", "options": [ "A. Amazon Machine Image", "B. Instance user data", "C. Resource tags", "D. Instance metadata" ], "correct": "D. Instance metadata", "explanation": "Explanation Instance metadata is the data about your instance t hat you can use to configure or manage the running instance. You can get the instance ID, public keys, public IP address and many other information from the instance metadata by firing a URL command in your i nstance to this URL: http://169.254.169.254/latest/meta-data/ Instance user data is incorrect because this is mai nly used to perform common automated configuration tasks and run scripts after the instance starts. Resource tags is incorrect because these are labels that you assign to an AWS resource. Each tag consi sts of a key and an optional value, both of which you d efine. Amazon Machine Image is incorrect because this main ly provides the information required to launch an instance, which is a virtual server in the cloud.", "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ ec2-instance-metadata.htm Amazon EC2 Overview: https://www.youtube.com/watch?v=7VsGIHT_jQE Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate-saa-c02/" }, { "question": ": A DevOps Engineer is required to design a cloud arc hitecture in AWS. The Engineer is planning to devel op a highly available and fault-tolerant architecture th at is composed of an Elastic Load Balancer and an A uto Scaling group of EC2 instances deployed across mult iple Availability Zones. This will be used by an on line accounting application that requires path-based rou ting, host-based routing, and bi-directional communication channels using WebSockets. Which is the most suitable type of Elastic Load Bal ancer that will satisfy the given requirement?", "options": [ "A. Gateway Load Balancer", "B. Network Load Balancer", "C. Application Load Balancer", "D. Classic Load Balancer" ], "correct": "C. Application Load Balancer", "explanation": "Explanation Application Load Balancer operates at the request l evel (layer 7), routing traffic to targets (EC2 ins tances, containers, IP addresses, and Lambda functions) bas ed on the content of the request. Ideal for advance d load balancing of HTTP and HTTPS traffic, Applicati on Load Balancer provides advanced request routing targeted at delivery of modern application architec tures, including microservices and container-based applications. Application Load Balancer simplifies and improves the security of your application, by ensuring that the latest SSL/TLS ciphers and protoc ols are used at all times. 
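Before walking through the routing types described in the next paragraphs, here is a hedged sketch (Python/boto3) of the path-based and host-based listener rules they refer to; the listener ARN, target group ARN, path pattern, and host name are placeholders, not values from the scenario:

import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs; substitute the real listener and target group ARNs.
listener_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/demo/abc/def"
api_target_group_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api/123"

# Path-based routing: forward /api/* requests to the API target group.
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": api_target_group_arn}],
)

# Host-based routing: forward requests for reports.example.com to the same group.
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=20,
    Conditions=[{"Field": "host-header", "Values": ["reports.example.com"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": api_target_group_arn}],
)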
If your application is composed of several individu al services, an Application Load Balancer can route a request to a service based on the content of the request su ch as Host field, Path URL, HTTP header, HTTP metho d, Query string, or Source IP address. Host-based Routing: You can route a client request based on the Host field of the HTTP header allowing you to route to multiple domains from the same load balanc er. Path-based Routing: You can route a client request based on the URL path of the HTTP header. HTTP header-based routing: You can route a client r equest based on the value of any standard or custom HTTP header. HTTP method-based routing: You can route a client r equest based on any standard or custom HTTP method. Query string parameter-based routing: You can route a client request based on query string or query parameters. Source IP address CIDR-based routing: You can route a client request based on source IP address CIDR from where the request originates. Application Load Balancers support path-based routi ng, host-based routing, and support for containeriz ed applications hence, Application Load Balancer is th e correct answer. Network Load Balancer is incorrect. Although it can handle WebSockets connections, it doesn't support path-based routing or host-based routing, unlike an Application Load Balancer. Classic Load Balancer is incorrect because this typ e of load balancer is intended for applications tha t are built within the EC2-Classic network only. A CLB doesn't support path-based routing or host-based routing. Gateway Load Balancer is incorrect because this is primarily used for deploying, scaling, and running your third-party virtual appliances. It doesn't hav e a path-based routing or host-based routing featur e. References: https://aws.amazon.com/elasticloadbalancing/feature s https://aws.amazon.com/elasticloadbalancing/faqs/ AWS Elastic Load Balancing Overview: https://youtu.be/UBl5dw59DO8 Check out this AWS Elastic Load Balancing (ELB) Che at Sheet: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/ Application Load Balancer vs Network Load Balancer vs Classic Load Balancer: https://tutorialsdojo.com/application-load-balancer -vs-network-load-balancer-vs-classic-load-balancer/", "references": "" }, { "question": ": A software company has resources hosted in AWS and on-premises servers. You have been requested to create a decoupled architecture for applications wh ich make use of both resources. Which of the follow ing options are valid? (Select TWO.)", "options": [ "A. Use SWF to utilize both on-premises servers an d EC2 instances for your decoupled", "B. Use SQS to utilize both on-premises servers an d EC2 instances for your decoupled", "C. Use RDS to utilize both on-premises servers an d EC2 instances for your decoupled", "D. Use DynamoDB to utilize both on-premises serve rs and EC2 instances for your decoupled" ], "correct": "", "explanation": "Explanation Amazon Simple Queue Service (SQS) and Amazon Simple Workflow Service (SWF) are the services that you can use for creating a decoupled architect ure in AWS. Decoupled architecture is a type of computing architecture that enables computing compo nents or layers to execute independently while stil l interfacing with each other. Amazon SQS offers reliable, highly-scalable hosted queues for storing messages while they travel betwe en applications or microservices. Amazon SQS lets you move data between distributed application components and helps you decouple these components. 
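A minimal producer/consumer sketch of this decoupling pattern is shown below (Python/boto3, with an assumed queue URL); note the explicit delete_message call once processing succeeds, which is also the step that was missing in the earlier duplicate-email scenario:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # assumed queue URL

# Producer side: an on-premises server or EC2 instance publishes a work item.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": "1001"}')

# Consumer side: poll for work, process it, then delete the message so it is
# not delivered again after the visibility timeout expires.
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for message in response.get("Messages", []):
    print("processing:", message["Body"])  # replace with the real processing logic
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])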
Amazon SWF is a web service that makes it easy to coordinate work across distributed application comp onents. Using RDS to utilize both on-premises servers and E C2 instances for your decoupled application and using DynamoDB to utilize both on-premises servers and EC2 instances for your decoupled application are incorrect as RDS and DynamoDB are d atabase services. Using VPC peering to connect both on-premises serve rs and EC2 instances for your decoupled application is incorrect because you can't create a VPC peering for your on-premises network and AWS VPC. References: https://aws.amazon.com/sqs/ http://docs.aws.amazon.com/amazonswf/latest/develop erguide/swf-welcome.html Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/ Amazon Simple Workflow (SWF) vs AWS Step Functions vs Amazon SQS: https://tutorialsdojo.com/amazon-simple-workflow-sw f-vs-aws-step-functions-vs-amazon-sqs/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/", "references": "" }, { "question": ": A company developed a web application and deployed it on a fleet of EC2 instances that uses Amazon SQS . The requests are saved as messages in the SQS queue , which is configured with the maximum message retention period. However, after thirteen days of o peration, the web application suddenly crashed and there are 10,000 unprocessed messages that are still waiting in the queue. Since they developed the application, they can easily resolve the issue but they need to send a communication to the users on the issue. What information should they provide and what will happen to the unprocessed messages?", "options": [ "A. Tell the users that unfortunately, they have t o resubmit all the requests again.", "B. Tell the users that unfortunately, they have t o resubmit all of the requests since the queue", "C. Tell the users that the application will be op erational shortly however, requests sent over", "D. Tell the users that the application will be op erational shortly and all received requests will" ], "correct": "", "explanation": "Explanation In Amazon SQS, you can configure the message retent ion period to a value from 1 minute to 14 days. The default is 4 days. Once the message retention limit is reached, your messages are automatically delete d. A single Amazon SQS message queue can contain an un limited number of messages. However, there is a 120,000 limit for the number of inflight messages f or a standard queue and 20,000 for a FIFO queue. Messages are inflight after they have been received from the queue by a consuming component, but have not yet been deleted from the queue. In this scenario, it is stated that the SQS queue i s configured with the maximum message retention per iod. The maximum message retention in SQS is 14 days that is why the option that says: Tell the users that the application will be operational shortly and all rec eived requests will be processed after the web application is restarted is the correct answer i.e. there will be no missing messages. The options that say: Tell the users that unfortuna tely, they have to resubmit all the requests again and Tell the users that the application will be operational shor tly, however, requests sent over three days ago wil l need to be resubmitted are incorrect as there are no missing m essages in the queue thus, there is no need to resu bmit any previous requests. 
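For reference, the 14-day maximum described in this scenario corresponds to a MessageRetentionPeriod of 1,209,600 seconds; a hedged sketch of setting it on an existing queue (the queue URL is assumed) looks like this in Python/boto3:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/registration-requests"  # assumed

# 14 days expressed in seconds; this is the maximum retention SQS allows.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"MessageRetentionPeriod": str(14 * 24 * 60 * 60)},
)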
The option that says: Tell the users that unfortuna tely, they have to resubmit all of the requests sin ce the queue would not be able to process the 10,000 messages to gether is incorrect as the queue can contain an unlimited number of messages, not just 1 0,000 messages.", "references": "https://aws.amazon.com/sqs/ Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/" }, { "question": ": A company developed a meal planning application tha t provides meal recommendations for the week as well as the food consumption of the users. The appl ication resides on an EC2 instance which requires access to various AWS services for its day-to-day o perations. Which of the following is the best way to allow the EC2 instance to access the S3 bucket and other AWS services?", "options": [ "A. Add the API Credentials in the Security Group and assign it to the EC2 instance.", "B. Store the API credentials in a bastion host.", "C. Create a role in IAM and assign it to the EC2 instance.", "D. Store the API credentials in the EC2 instance." ], "correct": "C. Create a role in IAM and assign it to the EC2 instance.", "explanation": "Explanation The best practice in handling API Credentials is to create a new role in the Identity Access Managemen t (IAM) service and then assign it to a specific EC2 instan ce. In this way, you have a secure and centralized way of storing and managing your credentials. Storing the API credentials in the EC2 instance, ad ding the API Credentials in the Security Group and assigning it to the EC2 instance, and storing t he API credentials in a bastion host are incorrect because it is not secure to store nor use the API c redentials from an EC2 instance. You should use IAM service instead.", "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ iam-roles-for-amazon-ec2.html Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/" }, { "question": ": An organization stores and manages financial record s of various companies in its on-premises data cent er, which is almost out of space. The management decide d to move all of their existing records to a cloud storage service. All future financial records will also be stored in the cloud. For additional securit y, all records must be prevented from being deleted or overwritten . Which of the following should you do to meet the ab ove requirement?", "options": [ "A. Use AWS DataSync to move the data. Store all o f your data in Amazon EFS and enable", "B. Use AWS Storage Gateway to establish hybrid cl oud storage. Store all of your data in", "D. Use AWS DataSync to move the data. Store all o f your data in Amazon S3 and enable object" ], "correct": "D. Use AWS DataSync to move the data. Store all o f your data in Amazon S3 and enable object", "explanation": "Explanation AWS DataSync allows you to copy large datasets with millions of files, without having to build custom solutions with open source tools, or license and ma nage expensive commercial network acceleration software. You can use DataSync to migrate active da ta to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises sto rage capacity, or replicate data to AWS for busines s continuity. AWS DataSync enables you to migrate your on-premise s data to Amazon S3, Amazon EFS, and Amazon FSx for Windows File Server. 
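Alongside the migration itself, the records must also be protected from deletion; the S3 Object Lock setting discussed just below could be configured roughly as in this hedged Python/boto3 sketch, where the bucket name, region (us-east-1 is assumed for the bare create_bucket call), retention mode, and retention period are all illustrative assumptions:

import boto3

s3 = boto3.client("s3")
bucket = "financial-records-archive"  # assumed bucket name

# Object Lock is enabled here at bucket creation time.
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Apply a default retention rule so objects cannot be deleted or overwritten
# during the retention period (COMPLIANCE mode and 7 years are assumed values).
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)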
You can configure Data Sync to make an initial copy of your entire dataset , and schedule subsequent incremental transfers of changing data t owards Amazon S3. Enabling S3 Object Lock prevents your existing and future records from being deleted or overwritten. AWS DataSync is primarily used to migrate existing data to Amazon S3. On the other hand, AWS Storage Gateway is more suitable if you still want to retai n access to the migrated data and for ongoing updat es from your on-premises file-based applications. Hence, the correct answer in this scenario is: Use AWS DataSync to move the data. Store all of your data in Amazon S3 and enable object lock. The option that says: Use AWS DataSync to move the data. Store all of your data in Amazon EFS and enable object lock is incorrect because Amazon EFS only supports file locking. Object lock is a featur e of Amazon S3 and not Amazon EFS. The options that says: Use AWS Storage Gateway to e stablish hybrid cloud storage. Store all of your data in Amazon S3 and enable object lock is incorre ct because the scenario requires that all of the existing records must be migrated to AWS. The futur e records will also be stored in AWS and not in the on- premises network. This means that setting up a hybr id cloud storage is not necessary since the on- premises storage will no longer be used. The option that says: Use AWS Storage Gateway to es tablish hybrid cloud storage. Store all of your data in Amazon EBS and enable object lock is incorr ect because Amazon EBS does not support object lock. Amazon S3 is the only service capable of lock ing objects to prevent an object from being deleted or overwritten. References: https://aws.amazon.com/datasync/faqs/ https://docs.aws.amazon.com/datasync/latest/usergui de/what-is-datasync.html https://docs.aws.amazon.com/AmazonS3/latest/dev/obj ect-lock.html Check out this AWS DataSync Cheat Sheet: https://tutorialsdojo.com/aws-datasync/ AWS Storage Gateway vs DataSync: https://www.youtube.com/watch?v=tmfe1rO-AUs Amazon S3 vs EBS vs EFS Cheat Sheet: https://tutorialsdojo.com/amazon-s3-vs-ebs-vs-efs/", "references": "" }, { "question": ": A Solutions Architect created a new Standard-class S3 bucket to store financial reports that are not frequently accessed but should immediately be avail able when an auditor requests them. To save costs, the Architect changed the storage class of the S3 bucke t from Standard to Infrequent Access storage class. In Amazon S3 Standard - Infrequent Access storage c lass, which of the following statements are true? (Select TWO.)", "options": [ "A. Ideal to use for data archiving.", "B. It is designed for data that is accessed less frequently.", "C. It provides high latency and low throughput pe rformance", "D. It is designed for data that requires rapid ac cess when needed." ], "correct": "", "explanation": "Explanation Amazon S3 Standard - Infrequent Access (Standard - IA) is an Amazon S3 storage class for data that is accessed less frequently, but requires rapid access when needed. Standard - IA offers the high durabil ity, throughput, and low latency of Amazon S3 Standard, with a low per GB storage price and per GB retrieva l fee. This combination of low cost and high performance m ake Standard - IA ideal for long-term storage, backups, and as a data store for disaster recovery. 
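The next paragraph mentions lifecycle policies; as a hedged illustration (the bucket name and the 30-day threshold are assumptions), a rule that transitions objects from Standard to Standard-IA could be defined like this in Python/boto3:

import boto3

s3 = boto3.client("s3")

# Transition objects to STANDARD_IA 30 days after creation (values are assumptions).
s3.put_bucket_lifecycle_configuration(
    Bucket="financial-reports-bucket",  # assumed bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "move-to-standard-ia",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        }],
    },
)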
The Standard - IA storage class is set at the obje ct level and can exist in the same bucket as Standard, allow ing you to use lifecycle policies to automatically transition objects between storage classes without any application changes. Key Features: - Same low latency and high throughput performance of Standard - Designed for durability of 99.999999999% of objec ts - Designed for 99.9% availability over a given year- Backed with the Amazon S3 Service Level Agreement for availability - Supports SSL encryption of data in transit and at rest - Lifecycle management for automatic migration of o bjects Hence, the correct answers are: - It is designed for data that is accessed less fre quently. - It is designed for data that requires rapid acces s when needed. The option that says: It automatically moves data t o the most cost-effective access tier without any operational overhead is incorrect as it actually re fers to Amazon S3 - Intelligent Tiering, which is t he only cloud storage class that delivers automatic cost savings by moving objects between different access tiers wh en access patterns change. The option that says: It provides high latency and low throughput performance is incorrect as it shoul d be \"low latency\" and \"high throughput\" instead. S3 automati cally scales performance to meet user demands. The option that says: Ideal to use for data archivi ng is incorrect because this statement refers to Am azon S3 Glacier. Glacier is a secure, durable, and extremel y low-cost cloud storage service for data archiving and long- term backup. References: https://aws.amazon.com/s3/storage-classes/ https://aws.amazon.com/s3/faqs Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A media company is setting up an ECS batch architec ture for its image processing application. It will be hosted in an Amazon ECS Cluster with two ECS tasks that wi ll handle image uploads from the users and image processing. The first ECS task will process t he user requests, store the image in an S3 input bu cket, and push a message to a queue. The second task reads fr om the queue, parses the message containing the obj ect name, and then downloads the object. Once the image is processed and transformed, it will upload the o bjects to the S3 output bucket. To complete the architectu re, the Solutions Architect must create a queue and the necessary IAM permissions for the ECS tasks. Which of the following should the Architect do next ?", "options": [ "A. Launch a new Amazon Kinesis Data Firehose and configure the second ECS task to read", "B. Launch a new Amazon AppStream 2.0 queue and co nfigure the second ECS task to read", "D. a new Amazon MQ queue and configure the second ECS task to read from it. Create an" ], "correct": "", "explanation": "Explanation Docker containers are particularly suited for batch job workloads. Batch jobs are often short-lived an d embarrassingly parallel. You can package your batch processing application into a Docker image so that you can deploy it anywhere, such as in an Amazon ECS ta sk. Amazon ECS supports batch jobs. You can use Amazon ECS Run Task action to run one or more tasks once. The Run Task action starts the ECS task on an instance that meets the task's requirements includ ing CPU, memory, and ports. For example, you can set up an ECS Batch architectu re for an image processing application. 
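The walkthrough below declares an IAM role (taskRoleArn) in the task definition; as a hedged illustration only, registering such a Fargate task definition with Python/boto3 might look like the following, where the family name, container image, and both role ARNs are placeholders:

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="image-processing-worker",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    # Role assumed by the running task to reach the S3 buckets and the SQS queue.
    taskRoleArn="arn:aws:iam::123456789012:role/ecsImageProcessingTaskRole",  # placeholder
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",   # placeholder
    containerDefinitions=[{
        "name": "worker",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/image-worker:latest",  # placeholder
        "essential": True,
    }],
)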
You can set up an AWS CloudFormation template that creates an Amazon S3 bucket, an Amazon SQS queue, an Amazon CloudWatch alarm, an ECS cluster, and an ECS task d efinition. Objects uploaded to the input S3 bucket trigger an event that sends object details to the S QS queue. The ECS task deploys a Docker container t hat reads from that queue, parses the message containin g the object name and then downloads the object. On ce transformed it will upload the objects to the S3 ou tput bucket. By using the SQS queue as the location for all obje ct details, you can take advantage of its scalabili ty and reliability as the queue will automatically scale b ased on the incoming messages and message retention can be configured. The ECS Cluster will then be able to sc ale services up or down based on the number of messages in the queue. You have to create an IAM Role that the ECS task as sumes in order to get access to the S3 buckets and SQS queue. Note that the permissions of the IAM rol e don't specify the S3 bucket ARN for the incoming bucket. This is to avoid a circular dependency issu e in the CloudFormation template. You should always make sure to assign the least amount of privileges needed to an IAM role. Hence, the correct answer is: Launch a new Amazon S QS queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 bucket s and SQS queue. Declare the IAM Role (taskRoleArn) i n the task definition. The option that says: Launch a new Amazon AppStream 2.0 queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 bucket s and AppStream 2.0 queue. Declare the IAM Role (task RoleArn) in the task definition is incorrect because Amazon AppStream 2.0 is a fully managed app lication streaming service and can't be used as a queue. You have to use Amazon SQS instead. The option that says: Launch a new Amazon Kinesis D ata Firehose and configure the second ECS task to read from it. Create an IAM role that the ECS ta sks can assume in order to get access to the S3 buckets and Kinesis Data Firehose. Specify the ARN of the IAM Role in the (taskDefinitionArn) field of the task definition is incorrect because Amazon Kin esis Data Firehose is a fully managed service for delivering real-time streaming data. Although it ca n stream data to an S3 bucket, it is not suitable t o be used as a queue for a batch application in this scenario . In addition, the ARN of the IAM Role should be declared in the taskRoleArn and not in the taskDefi nitionArn field. The option that says: Launch a new Amazon MQ queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assum e in order to get access to the S3 buckets and Amazon MQ queue. Set the (EnableTaskIAMRole) option to true in the task definition is incorrect because Amazon MQ is primarily used as a managed me ssage broker service and not a queue. The EnableTaskIAMRole option is only applicable for Win dows-based ECS Tasks that require extra configuration. References: https://github.com/aws-samples/ecs-refarch-batch-pr ocessing https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/common_use_cases.html https://aws.amazon.com/ecs/faqs/", "references": "" }, { "question": ": A company has a top priority requirement to monitor a few database metrics and then afterward, send em ail notifications to the Operations team in case there is an issue. 
Which AWS services can accomplish this requirement? (Select TWO.)", "options": [ "A. Amazon EC2 Instance with a running Berkeley Internet Name Domain (BIND) Server.", "B. Amazon CloudWatch", "C. Simple Notification Service (SNS)", "D. Amazon Simple Email Service" ], "correct": "", "explanation": "Explanation
Amazon CloudWatch and Amazon Simple Notification Service (SNS) are correct. In this requirement, you can use Amazon CloudWatch to monitor the database and then Amazon SNS to send the emails to the Operations team. Take note that you should use SNS instead of SES (Simple Email Service) when you want to monitor your EC2 instances.
CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS, and on-premises servers.
SNS is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.
Amazon Simple Email Service is incorrect. SES is a cloud-based email sending service designed to send notification and transactional emails.
Amazon Simple Queue Service (SQS) is incorrect. SQS is a fully-managed message queuing service. It does not monitor applications nor send email notifications unlike SES.
Amazon EC2 Instance with a running Berkeley Internet Name Domain (BIND) Server is incorrect because BIND is primarily used as a Domain Name System (DNS) web service. This is only applicable if you have a private
References:
https://aws.amazon.com/cloudwatch/
https://aws.amazon.com/sns/
Check out this Amazon CloudWatch Cheat Sheet:
https://tutorialsdojo.com/amazon-cloudwatch/", "references": "" }, { "question": ": A media company has two VPCs: VPC-1 and VPC-2 with peering connection between each other. VPC-1 only contains private subnets while VPC-2 only contains public subnets. The company uses a single AWS Direct Connect connection and a virtual interface to connect their on-premises network with VPC-1. Which of the following options increase the fault tolerance of the connection to VPC-1? (Select TWO.)", "options": [ "A. Use the AWS VPN CloudHub to create a new AWS Direct Connect connection and private virtual", "B. Establish another AWS Direct Connect connection and private virtual interface in the same AWS", "C. Establish a hardware VPN over the Internet between VPC-2 and the on-premises network.", "D. Establish a hardware VPN over the Internet between VPC-1 and the on-premises network." ], "correct": "", "explanation": "Explanation
In this scenario, you have two VPCs which have peering connections with each other. Note that a VPC peering connection does not support edge to edge routing.
This means that if either VPC in a peering relationship has one of the following connections, you cannot extend the peering relationship to that connection:
- A VPN connection or an AWS Direct Connect connection to a corporate network
- An Internet connection through an Internet gateway
- An Internet connection in a private subnet through a NAT device
- A gateway VPC endpoint to an AWS service; for example, an endpoint to Amazon S3.
- (IPv6) A ClassicLink connection. You can enable IPv4 communication between a linked EC2-Classic instance and instances in a VPC on the other side of a VPC peering connection. However, IPv6 is not supported in EC2-Classic, so you cannot extend this connection for IPv6 communication.
For example, if VPC A and VPC B are peered, and VPC A has any of these connections, then instances in VPC B cannot use the connection to access resources on the other side of the connection. Similarly, resources on the other side of a connection cannot use the connection to access VPC B.
Hence, this means that you cannot use VPC-2 to extend the peering relationship that exists between VPC-1 and the on-premises network. For example, traffic from the corporate network can't directly access VPC-1 by using the VPN connection or the AWS Direct Connect connection to VPC-2, which is why the following options are incorrect:
- Use the AWS VPN CloudHub to create a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
- Establish a hardware VPN over the Internet between VPC-2 and the on-premises network.
- Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
You can do the following to provide a highly available, fault-tolerant network connection:
- Establish a hardware VPN over the Internet between the VPC and the on-premises network.
- Establish another AWS Direct Connect connection and private virtual interface in the same AWS region.
References:
https://docs.aws.amazon.com/vpc/latest/peering/invalid-peering-configurations.html#edge-to-edge-vgw
https://aws.amazon.com/premiumsupport/knowledge-center/configure-vpn-backup-dx/
https://aws.amazon.com/answers/networking/aws-multiple-data-center-ha-network-connectivity/
Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A Solutions Architect of a multinational gaming company develops video games for PS4, Xbox One, and Nintendo Switch consoles, plus a number of mobile games for Android and iOS. Due to the wide range of their products and services, the architect proposed that they use API Gateway. What are the key features of API Gateway that the architect can tell to the client? (Select TWO.)", "options": [ "A. Enables you to run applications requiring high levels of inter-node communications at scale", "B.
It automatically provides a query language for your APIs similar to GraphQL.", "C. You pay only for the API calls you receive and the amount of data transferred out.", "D. Provides you with static anycast IP addresses that serve as a fixed entry point to your" ], "correct": "", "explanation": "Explanation
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a \"front door\" for applications to access data, business logic, or functionality from your back-end services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, or any web application. Since it can use AWS Lambda, you can run your APIs without servers.
Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Amazon API Gateway has no minimum fees or startup costs. You pay only for the API calls you receive and the amount of data transferred out.
Hence, the correct answers are:
- Enables you to build RESTful APIs and WebSocket APIs that are optimized for serverless workloads
- You pay only for the API calls you receive and the amount of data transferred out.
The option that says: It automatically provides a query language for your APIs similar to GraphQL is incorrect because this is not provided by API Gateway.
The option that says: Provides you with static anycast IP addresses that serve as a fixed entry point to your applications hosted in one or more AWS Regions is incorrect because this is a capability of AWS Global Accelerator and not API Gateway.
The option that says: Enables you to run applications requiring high levels of inter-node communications at scale on AWS through its custom-built operating system (OS) bypass hardware interface is incorrect because this is a capability of Elastic Fabric Adapter and not API Gateway.
References:
https://aws.amazon.com/api-gateway/
https://aws.amazon.com/api-gateway/features/
Check out this Amazon API Gateway Cheat Sheet:
https://tutorialsdojo.com/amazon-api-gateway/
Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide:
https://tutorialsdojo.com/aws-certified-solutions-architect-associate/", "references": "" }, { "question": ": An online events registration system is hosted in AWS and uses ECS to host its front-end tier and an RDS configured with Multi-AZ for its database tier. What are the events that will make Amazon RDS automatically perform a failover to the standby replica? (Select TWO.)", "options": [ "A.
Loss of availability in primary Availability Zone", "B. Storage failure on primary", "C. Compute unit failure on secondary DB instance", "D. Storage failure on secondary DB instance" ], "correct": "", "explanation": "Explanation
Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Amazon RDS uses several different technologies to provide failover support. Multi-AZ deployments for Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon's failover technology. SQL Server DB instances use SQL Server Database Mirroring (DBM).
In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption.
Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments so that you can resume database operations as quickly as possible without administrative intervention.
The high-availability feature is not a scaling solution for read-only scenarios; you cannot use a standby replica to serve read traffic. To service read-only traffic, you should use a Read Replica.
Amazon RDS automatically performs a failover in the event of any of the following:
- Loss of availability in primary Availability Zone.
- Loss of network connectivity to primary.
- Compute unit failure on primary.
- Storage failure on primary.
Hence, the correct answers are:
- Loss of availability in primary Availability Zone
- Storage failure on primary
The following options are incorrect because all these scenarios do not affect the primary database. Automatic failover only occurs if the primary database is the one that is affected.
- Storage failure on secondary DB instance
- In the event of Read Replica failure
- Compute unit failure on secondary DB instance
References:
https://aws.amazon.com/rds/details/multi-az/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
Check out this Amazon RDS Cheat Sheet:
https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/", "references": "" }, { "question": ": A company has multiple VPCs with IPv6 enabled for its suite of web applications.
The Solutions Architect tried to deploy a new Amazon EC2 instance but she received an error saying that there is no IP address available on the subnet. How should the Solutions Architect resolve this problem?", "options": [ "A. Set up a new IPv6-only subnet with a large CIDR range. Associate the new subnet with the", "B. Set up a new IPv4 subnet with a larger CIDR range. Associate the new subnet with the VPC", "C. Disable the IPv4 support in the VPC and use the available IPv6 addresses.", "D. Ensure that the VPC has IPv6 CIDRs only. Remove any IPv4 CIDRs associated with the" ], "correct": "B. Set up a new IPv4 subnet with a larger CIDR range. Associate the new subnet with the VPC", "explanation": "Explanation
Amazon Virtual Private Cloud (VPC) is a service that lets you launch AWS resources in a logically isolated virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 for most resources in your virtual private cloud, helping to ensure secure and easy access to resources and applications.
A subnet is a range of IP addresses in your VPC. You can launch AWS resources into a specified subnet. When you create a VPC, you must specify a range of IPv4 addresses for the VPC in the form of a CIDR block. Each subnet must reside entirely within one Availability Zone and cannot span zones. You can also optionally assign an IPv6 CIDR block to your VPC, and assign IPv6 CIDR blocks to your subnets.
If you have an existing VPC that supports IPv4 only and resources in your subnet that are configured to use IPv4 only, you can enable IPv6 support for your VPC and resources. Your VPC can operate in dual-stack mode -- your resources can communicate over IPv4, or IPv6, or both. IPv4 and IPv6 communication are independent of each other. You cannot disable IPv4 support for your VPC and subnets since this is the default IP addressing system for Amazon VPC and Amazon EC2.
By default, a new EC2 instance uses an IPv4 addressing protocol. To fix the problem in the scenario, you need to create a new IPv4 subnet and deploy the EC2 instance in the new subnet.
Hence, the correct answer is: Set up a new IPv4 subnet with a larger CIDR range. Associate the new subnet with the VPC and then launch the instance.
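A hedged sketch of that fix in Python (boto3) is shown below; the VPC ID, CIDR block, Availability Zone, AMI ID, and instance type are placeholders, not values from the scenario:

import boto3

ec2 = boto3.client("ec2")

# Add a new IPv4 subnet with a larger CIDR range to the existing VPC (IDs are placeholders).
subnet = ec2.create_subnet(
    VpcId="vpc-0abc1234def567890",
    CidrBlock="10.0.32.0/20",
    AvailabilityZone="us-east-1a",
)

# Launch the EC2 instance into the new subnet.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
)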
The option that says: Set up a new IPv6-only subnet with a large CIDR range. Associate the new subnet with the VPC then launch the instance is incorrect because you need to add an IPv4 subnet first before you can create an IPv6 subnet.
The option that says: Ensure that the VPC has IPv6 CIDRs only. Remove any IPv4 CIDRs associated with the VPC is incorrect because you can't have a VPC with IPv6 CIDRs only. The default IP addressing system in VPC is IPv4. You can only change your VPC to dual-stack mode where your resources can communicate over IPv4, or IPv6, or both, but not exclusively with IPv6 only.
The option that says: Disable the IPv4 support in the VPC and use the available IPv6 addresses is incorrect because you cannot disable the IPv4 support for your VPC and subnets since this is the default IP addressing system.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html
https://aws.amazon.com/vpc/faqs/
Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": An insurance company plans to implement a message filtering feature in their web application. To implement this solution, they need to create separate Amazon SQS queues for each type of quote request. The entire message processing should not exceed 24 hours. As the Solutions Architect of the company, which of the following should you do to meet the above requirement?", "options": [ "A. Create multiple Amazon SNS topics and configure the Amazon SQS queues to subscribe to", "B. Create a data stream in Amazon Kinesis Data Streams. Use the Amazon Kinesis Client", "C. Create one Amazon SNS topic and configure the Amazon SQS queues to subscribe to the", "D. Create one Amazon SNS topic and configure the Amazon SQS queues to subscribe to the" ], "correct": "D. Create one Amazon SNS topic and configure the Amazon SQS queues to subscribe to the", "explanation": "Explanation
Amazon SNS is a fully managed pub/sub messaging service. With Amazon SNS, you can use topics to simultaneously distribute messages to multiple subscribing endpoints such as Amazon SQS queues, AWS Lambda functions, HTTP endpoints, email addresses, and mobile devices (SMS, Push).
Amazon SQS is a message queue service used by distributed applications to exchange messages through a polling model. It can be used to decouple sending and receiving components without requiring each component to be concurrently available.
A fanout scenario occurs when a message published to an SNS topic is replicated and pushed to multiple endpoints, such as Amazon SQS queues, HTTP(S) endpoints, and Lambda functions. This allows for parallel asynchronous processing.
For example, you can develop an application that publishes a message to an SNS topic whenever an order is placed for a product. Then, two or more SQS queues that are subscribed to the SNS topic receive identical notifications for the new order. An Amazon Elastic Compute Cloud (Amazon EC2) server instance attached to one of the SQS queues can handle the processing or fulfillment of the order. And you can attach another Amazon EC2 server instance to a data warehouse for analysis of all orders received.
By default, an Amazon SNS topic subscriber receives every message published to the topic. You can use Amazon SNS message filtering to assign a filter policy to the topic subscription, and the subscriber will only receive a message that they are interested in. Using Amazon SNS and Amazon SQS together, messages can be delivered to applications that require immediate notification of an event. This method is known as fanout to Amazon SQS queues.
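A hedged sketch of this fanout-with-filtering setup in Python (boto3) follows; the topic ARN, queue ARN, and the quote_type message attribute are assumptions chosen for illustration, not names from the scenario:

import boto3
import json

sns = boto3.client("sns")

topic_arn = "arn:aws:sns:us-east-1:123456789012:quote-requests"          # placeholder
auto_queue_arn = "arn:aws:sqs:us-east-1:123456789012:auto-quote-queue"   # placeholder

# Subscribe one SQS queue per quote type and attach a filter policy so the queue
# only receives messages whose 'quote_type' attribute matches.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint=auto_queue_arn,
    Attributes={"FilterPolicy": json.dumps({"quote_type": ["auto"]})},
    ReturnSubscriptionArn=True,
)

# Publishers set the attribute that the filter policy matches on.
sns.publish(
    TopicArn=topic_arn,
    Message=json.dumps({"customer_id": "C-1001"}),
    MessageAttributes={"quote_type": {"DataType": "String", "StringValue": "auto"}},
)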
The option that says: Create one Amazon SNS topic and configure the Amazon SQS queues to subscribe to the SNS topic. Publish the same messages to all SQS queues. Filter the messages in each queue based on the quote request type is incorrect because this option will distribute the same messages to all SQS queues instead of only to their designated queues. You need to fan out the messages to multiple SQS queues using a filter policy in the Amazon SNS subscriptions to allow parallel asynchronous processing. By doing so, the entire message processing will not exceed 24 hours.

The option that says: Create multiple Amazon SNS topics and configure the Amazon SQS queues to subscribe to the SNS topics. Publish the message to the designated SQS queue based on the quote request type is incorrect because to implement the solution asked in the scenario, you only need to use one Amazon SNS topic. To publish it to the designated SQS queue, you must set a filter policy that allows you to fan out the messages. If you didn't set a filter policy in Amazon SNS, the subscribers would receive all the messages published to the SNS topic. Thus, using multiple SNS topics is not an appropriate solution for this scenario.

The option that says: Create a data stream in Amazon Kinesis Data Streams. Use the Amazon Kinesis Client Library to deliver all the records to the designated SQS queues based on the quote request type is incorrect because Amazon KDS is not a message filtering service. You should use Amazon SNS and SQS to distribute the messages to the designated queues.

References: https://aws.amazon.com/getting-started/hands-on/filter-messages-published-to-topics/ https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html

Check out these Amazon SNS and SQS Cheat Sheets: https://tutorialsdojo.com/amazon-sns/ https://tutorialsdojo.com/amazon-sqs/

Amazon SNS Overview: https://www.youtube.com/watch?v=ft5R45lEUJ8", "references": "" }, { "question": ": A music publishing company is building a multitier web application that requires a key-value store which will save the document models. Each model is composed of band ID, album ID, song ID, composer ID, lyrics, and other data. The web tier will be hosted in an Amazon ECS cluster with AWS Fargate launch type.

Which of the following is the MOST suitable setup for the database tier?", "options": [ "A. Launch an Amazon Aurora Serverless database.", "B. Launch an Amazon RDS database with Read Replicas.", "C. Launch a DynamoDB table.", "D. Use Amazon WorkDocs to store the document models." ], "correct": "C. Launch a DynamoDB table.", "explanation": "Explanation

Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.

Hence, the correct answer is: Launch a DynamoDB table.
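To make the key-value/document model concrete, here is a small boto3 sketch that writes and reads one document model as a DynamoDB item. It assumes a table named "Songs" already exists with "song_id" as the partition key; the table and attribute names are illustrative only.

    import boto3

    # Assumes a DynamoDB table named "Songs" with "song_id" as its partition key.
    table = boto3.resource("dynamodb").Table("Songs")

    # Store one document model as a single item.
    table.put_item(
        Item={
            "song_id": "song-001",
            "band_id": "band-123",
            "album_id": "album-456",
            "composer_id": "composer-789",
            "lyrics": "Sample lyrics text...",
        }
    )

    # Retrieve it back by key with single-digit millisecond latency.
    response = table.get_item(Key={"song_id": "song-001"})
    print(response.get("Item"))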
The option that says: Launch an Amazon RDS database with Read Replicas is incorrect because this is a relational database, which is not suitable to be used as a key-value store. A better option is to use DynamoDB as it supports both document and key-value store models.

The option that says: Use Amazon WorkDocs to store the document models is incorrect because Amazon WorkDocs simply enables you to share content, provide rich feedback, and collaboratively edit documents. It is not a key-value store like DynamoDB.

The option that says: Launch an Amazon Aurora Serverless database is incorrect because this type of database is not suitable to be used as a key-value store. Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora where the database will automatically start up, shut down, and scale capacity up or down based on your application's needs. It enables you to run your database in the cloud without managing any database instances. It's a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads, not a key-value store.

References: https://aws.amazon.com/dynamodb/ https://aws.amazon.com/nosql/key-value/

Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/

Amazon DynamoDB Overview: https://www.youtube.com/watch?v=3ZOyUNIeorU", "references": "" }, { "question": ": An application is hosted in AWS Fargate and uses an RDS database in a Multi-AZ Deployments configuration with several Read Replicas. A Solutions Architect was instructed to ensure that all of their database credentials, API keys, and other secrets are encrypted and rotated on a regular basis to improve data security. The application should also use the latest version of the encrypted credentials when connecting to the RDS database.

Which of the following is the MOST appropriate solution to secure the credentials?", "options": [ "A. Store the database credentials, API keys, and other secrets to Systems Manager Parameter Store each with a SecureString data type. The credentials are automatically rotated by default.", "B. Store the database credentials, API keys, and other secrets in AWS KMS.", "C. Store the database credentials, API keys, and other secrets to AWS ACM.", "D. Use AWS Secrets Manager to store and encrypt the database credentials, API keys, and other secrets. Enable automatic rotation for all of the credentials." ], "correct": "D. Use AWS Secrets Manager to store and encrypt the database credentials, API keys, and other secrets. Enable automatic rotation for all of the credentials.", "explanation": "Explanation

AWS Secrets Manager is an AWS service that makes it easier for you to manage secrets. Secrets can be database credentials, passwords, third-party API keys, and even arbitrary text. You can store and control access to these secrets centrally by using the Secrets Manager console, the Secrets Manager command line interface (CLI), or the Secrets Manager API and SDKs.

In the past, when you created a custom application that retrieves information from a database, you typically had to embed the credentials (the secret) for accessing the database directly in the application. When it came time to rotate the credentials, you had to do much more than just create new credentials. You had to invest time to update the application to use the new credentials. Then you had to distribute the updated application. If you had multiple applications that shared credentials and you missed updating one of them, the application would break. Because of this risk, many customers have chosen not to regularly rotate their credentials, which effectively substitutes one risk for another.

Secrets Manager enables you to replace hardcoded credentials in your code (including passwords) with an API call to Secrets Manager to retrieve the secret programmatically. This helps ensure that the secret can't be compromised by someone examining your code, because the secret simply isn't there. Also, you can configure Secrets Manager to automatically rotate the secret for you according to a schedule that you specify. This enables you to replace long-term secrets with short-term ones, which helps to significantly reduce the risk of compromise.
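A minimal sketch of that API call with boto3 might look like the following; the secret name and the JSON keys inside the secret are assumptions made for illustration.

    import json
    import boto3

    secrets = boto3.client("secretsmanager")

    # "prod/app/rds-credentials" is a hypothetical secret name.
    secret_value = secrets.get_secret_value(SecretId="prod/app/rds-credentials")
    credentials = json.loads(secret_value["SecretString"])

    db_host = credentials["host"]
    db_user = credentials["username"]
    db_password = credentials["password"]
    # The database connection is built from these values at runtime, so the
    # latest rotated credentials are picked up on each retrieval.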
Hence, the most appropriate solution for this scenario is: Use AWS Secrets Manager to store and encrypt the database credentials, API keys, and other secrets. Enable automatic rotation for all of the credentials.
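If a custom rotation schedule is needed, enabling it could look roughly like this; the secret name and rotation Lambda ARN are hypothetical, and the rotation function must implement the Secrets Manager rotation steps for the target database.

    import boto3

    secrets = boto3.client("secretsmanager")

    # Turn on automatic rotation every 30 days using a rotation Lambda function.
    secrets.rotate_secret(
        SecretId="prod/app/rds-credentials",
        RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rds-rotation",
        RotationRules={"AutomaticallyAfterDays": 30},
    )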
The option that says: Store the database credentials, API keys, and other secrets to Systems Manager Parameter Store each with a SecureString data type. The credentials are automatically rotated by default is incorrect because Systems Manager Parameter Store doesn't rotate its parameters by default.

The option that says: Store the database credentials, API keys, and other secrets to AWS ACM is incorrect because ACM is just a managed service that helps you easily and securely manage the lifecycle of the SSL/TLS certificates used by your application. It is not a suitable service to store database or any other confidential credentials.

The option that says: Store the database credentials, API keys, and other secrets in AWS KMS is incorrect because KMS only makes it easy for you to create and manage encryption keys and control the use of encryption across a wide range of AWS services. It is primarily used for encryption and not for hosting your credentials.

References: https://aws.amazon.com/secrets-manager/ https://aws.amazon.com/blogs/security/how-to-securely-provide-database-credentials-to-lambda-functions-by-using-aws-secrets-manager/

Check out these AWS Secrets Manager and Systems Manager Cheat Sheets: https://tutorialsdojo.com/aws-secrets-manager/ https://tutorialsdojo.com/aws-systems-manager/

AWS Security Services Overview - Secrets Manager, ACM, Macie: https://www.youtube.com/watch?v=ogVamzF2Dzk", "references": "" }, { "question": ": An advertising company is currently working on a proof of concept project that automatically provides SEO analytics for its clients. Your company has a VPC in AWS that operates in dual-stack mode in which IPv4 and IPv6 communication is allowed. You deployed the application to an Auto Scaling group of EC2 instances with an Application Load Balancer in front that evenly distributes the incoming traffic. You are ready to go live, but you need to point your domain name (tutorialsdojo.com) to the Application Load Balancer.

In Route 53, which record types will you use to point the DNS name of the Application Load Balancer? (Select TWO.)", "options": [ "A. Alias with a type \"A\" record set", "B. Non-Alias with a type \"A\" record set", "C. Alias with a type \"AAAA\" record set", "D. Alias with a type \"CNAME\" record set" ], "correct": "", "explanation": "Explanation

The correct answers are: Alias with a type \"AAAA\" record set and Alias with a type \"A\" record set.

To route domain traffic to an ELB load balancer, use Amazon Route 53 to create an alias record that points to your load balancer. An alias record is a Route 53 extension to DNS. It's similar to a CNAME record, but you can create an alias record both for the root domain, such as tutorialsdojo.com, and for subdomains, such as portal.tutorialsdojo.com. (You can create CNAME records only for subdomains.) To enable IPv6 resolution, you would need to create a second resource record, tutorialsdojo.com ALIAS AAAA -> myelb.us-west-2.elb.amazonaws.com, assuming your Elastic Load Balancer has IPv6 support.

Non-Alias with a type \"A\" record set is incorrect because you only use a Non-Alias type \"A\" record set for IP addresses.

Alias with a type \"CNAME\" record set is incorrect because you can't create a CNAME record at the zone apex. For example, if you register the DNS name tutorialsdojo.com, the zone apex is tutorialsdojo.com.

Alias with a type of \"MX\" record set is incorrect because an MX record is primarily used for mail servers. It includes a priority number and a domain name, for example: 10 mailserver.tutorialsdojo.com.", "references": "https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/" }, { "question": ": A Solutions Architect is working for an online hotel booking firm with terabytes of customer data coming from the websites and applications. There is an annual corporate meeting where the Architect needs to present the booking behavior and acquire new insights from the customers' data. The Architect is looking for a service to perform super-fast analytics on massive data sets in near real-time.

Which of the following services gives the Architect the ability to store huge amounts of data and perform quick and flexible queries on it?", "options": [ "A. Amazon DynamoDB", "B. Amazon RDS", "C. Amazon Redshift", "D. Amazon ElastiCache" ], "correct": "C. Amazon Redshift", "explanation": "Explanation

Amazon Redshift is a fast, scalable data warehouse that makes it simple and cost-effective to analyze all your data across your data warehouse and data lake. Redshift delivers ten times faster performance than other data warehouses by using machine learning, massively parallel query execution, and columnar storage on high-performance disk.

You can use Redshift to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It also allows you to run complex analytic queries against terabytes to petabytes of structured and semi-structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution.

Hence, the correct answer is: Amazon Redshift.
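As an illustration of the kind of SQL analytics Redshift is built for, the sketch below runs a query through the Redshift Data API with boto3; the cluster, database, user, and table names are hypothetical.

    import time
    import boto3

    redshift_data = boto3.client("redshift-data")

    # Submit an analytical query against a hypothetical cluster and table.
    response = redshift_data.execute_statement(
        ClusterIdentifier="booking-analytics-cluster",
        Database="analytics",
        DbUser="analyst",
        Sql="""
            SELECT booking_month, COUNT(*) AS bookings
            FROM hotel_bookings
            GROUP BY booking_month
            ORDER BY booking_month;
        """,
    )

    # The Data API is asynchronous: poll until the statement completes,
    # then fetch the result rows.
    statement_id = response["Id"]
    while True:
        status = redshift_data.describe_statement(Id=statement_id)["Status"]
        if status in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(1)

    if status == "FINISHED":
        rows = redshift_data.get_statement_result(Id=statement_id)["Records"]
        print(rows)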
Amazon DynamoDB is incorrect. DynamoDB is a NoSQL database which is based on key-value pairs used for fast processing of small data that dynamically grows and changes. But if you need to scan large amounts of data (i.e., a lot of keys all in one query), the performance will not be optimal.

Amazon ElastiCache is incorrect because this is used to increase the performance, speed, and redundancy with which applications can retrieve data by providing an in-memory database caching system, and not for database analytical processes.

Amazon RDS is incorrect because this is mainly used for Online Transaction Processing (OLTP) applications and not for Online Analytics Processing (OLAP).

References: https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html https://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html

Amazon Redshift Overview: https://youtu.be/jlLERNzhHOg

Check out this Amazon Redshift Cheat Sheet: https://tutorialsdojo.com/amazon-redshift/", "references": "" }, { "question": ": One of your EC2 instances is reporting an unhealthy system status check. The operations team is looking for an easier way to monitor and repair these instances instead of fixing them manually. How will you automate the monitoring and repair of the system status check failure in an AWS environment?", "options": [ "A. Write a python script that queries the EC2 API for each instance status check", "B. Write a shell script that periodically shuts down and starts instances based on certain stats.", "C. Buy and implement a third party monitoring tool.", "D. Create CloudWatch alarms that stop and start the instance based on status check alarms." ], "correct": "D. Create CloudWatch alarms that stop and start the instance based on status check alarms.", "explanation": "Explanation

Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot, or recover your EC2 instances. You can use the stop or terminate actions to help you save money when you no longer need an instance to be running. You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs.
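A hedged boto3 sketch of such an alarm is shown below; it recovers the instance when the system status check fails, and the instance ID and region are placeholders rather than values from the scenario.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Alarm on the system status check and trigger the EC2 recover action.
    # The "ec2:recover" action ARN is region-scoped; the instance ID is hypothetical.
    cloudwatch.put_metric_alarm(
        AlarmName="recover-unhealthy-instance",
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed_System",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=2,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
    )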
Writing a python script that queries the EC2 API for each instance status check, writing a shell script that periodically shuts down and starts instances based on certain stats, and buying and implementing a third party monitoring tool are all incorrect because it is unnecessary to go through such lengths when CloudWatch Alarms already provide this feature for you, offered at a low cost.", "references": "https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/" }, { "question": ": A Solutions Architect needs to set up a bastion host in Amazon VPC. It should only be accessed from the corporate data center via SSH. What is the best way to achieve this?", "options": [ "A. Create a small EC2 instance with a security group which only allows access on port 22 via the IP address of the corporate data center. Use a private key (.pem) file to connect to the bastion host.", "B. Create a large EC2 instance with a security group which only allows access on port 22 using your own pre-configured password.", "C. Create a small EC2 instance with a security group which only allows access on port 22 using your own pre-configured password.", "D. Create a large EC2 instance with a security group which only allows access on port 22 via the IP address of the corporate data center. Use a private key (.pem) file to connect to the bastion host." ], "correct": "A. Create a small EC2 instance with a security group which only allows access on port 22 via the IP address of the corporate data center. Use a private key (.pem) file to connect to the bastion host.", "explanation": "Explanation

The best way to implement a bastion host is to create a small EC2 instance with a security group that only allows access from a particular IP address for maximum security. This will block any SSH brute force attacks on your bastion host. It is also recommended to use a small instance rather than a large one because this host will only act as a jump server to connect to other instances in your VPC and nothing else. Therefore, there is no point in allocating a large instance simply because it doesn't need that much computing power to process SSH (port 22) or RDP (port 3389) connections. It is possible to use SSH with an ordinary user ID and a pre-configured password as credentials, but it is more secure to use public key pairs for SSH authentication.

Hence, the right answer for this scenario is the option that says: Create a small EC2 instance with a security group which only allows access on port 22 via the IP address of the corporate data center. Use a private key (.pem) file to connect to the bastion host.

Creating a large EC2 instance with a security group which only allows access on port 22 using your own pre-configured password and creating a small EC2 instance with a security group which only allows access on port 22 using your own pre-configured password are incorrect. Even though you have your own pre-configured password, the SSH connection can still be accessed by anyone over the Internet, which poses a security vulnerability.

The option that says: Create a large EC2 instance with a security group which only allows access on port 22 via the IP address of the corporate data center. Use a private key (.pem) file to connect to the bastion host is incorrect because you don't need a large instance for a bastion host as it does not require many CPU resources.

References: https://docs.aws.amazon.com/quickstart/latest/linux-bastion/architecture.html https://aws.amazon.com/blogs/security/how-to-record-ssh-sessions-established-through-a-bastion-host/

Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A company has a cryptocurrency exchange portal that is hosted in an Auto Scaling group of EC2 instances behind an Application Load Balancer and is deployed across multiple AWS regions. The users can be found all around the globe, but the majority are from Japan and Sweden. Because of the compliance requirements in these two locations, you want the Japanese users to connect to the servers in the ap-northeast-1 Asia Pacific (Tokyo) region, while the Swedish users should be connected to the servers in the eu-west-1 EU (Ireland) region.

Which of the following services would allow you to easily fulfill this requirement?", "options": [ "A. Use Route 53 Weighted Routing policy.", "B. Use Route 53 Geolocation Routing policy.", "C. Set up a new CloudFront web distribution with the geo-restriction feature enabled.", "D. Set up an Application Load Balancer that will automatically route the traffic to the proper AWS region." ], "correct": "B. Use Route 53 Geolocation Routing policy.", "explanation": "Explanation

Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from.
For example, you might want all queries from Europe to be routed to an ELB load bal ancer in the Frankfurt region. When you use geolocation routing, you can localize your content and present some or all of your websit e in the language of your users. You can also use geoloc ation routing to restrict distribution of content t o only the locations in which you have distribution rights . Another possible use is for balancing load across endpoints in a predictable, easy-to-manage way, so that each user location is consistently routed to the same en dpoint. Setting up an Application Load Balancers that will automatically route the traffic to the proper AWS region is incorrect because Elastic Load Balancers distribute traffic among EC2 instances across multi ple Availability Zones but not across AWS regions. Setting up a new CloudFront web distribution with t he geo-restriction feature enabled is incorrect because the CloudFront geo-restriction feature is p rimarily used to prevent users in specific geograph ic locations from accessing content that you're distri buting through a CloudFront web distribution. It do es not let you choose the resources that serve your traffic ba sed on the geographic location of your users, unlik e the Geolocation routing policy in Route 53. Using Route 53 Weighted Routing policy is incorrect because this is not a suitable solution to meet th e requirements of this scenario. It just lets you ass ociate multiple resources with a single domain name (tutorialsdojo.com) or subdomain name (forums.tutor ialsdojo.com) and choose how much traffic is routed to each resource. You have to use a Geolocation routin g policy instead. References: https://docs.aws.amazon.com/Route53/latest/Develope rGuide/routing-policy.html https://aws.amazon.com/premiumsupport/knowledge-cen ter/geolocation-routing-policy Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/ Latency Routing vs Geoproximity Routing vs Geolocat ion Routing: https://tutorialsdojo.com/latency-routing-vs-geopro ximity-routing-vs-geolocation-routing/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/", "references": "" }, { "question": ": An Intelligence Agency developed a missile tracking application that is hosted on both development and production AWS accounts. The Intelligence agency's junior developer only has access to the development account. She has received security clearance to acc ess the agency's production account but the access is only temporary and only write access to EC2 and S3 is al lowed. Which of the following allows you to issue short-li ved access tokens that act as temporary security credentials to allow access to your AWS resources?", "options": [ "A. All of the given options are correct.", "B. Use AWS STS", "C. Use AWS SSO", "D. Use AWS Cognito to issue JSON Web Tokens (JWT)" ], "correct": "B. Use AWS STS", "explanation": "Explanation AWS Security Token Service (AWS STS) is the service that you can use to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM use rs can use. In this diagram, IAM user Alice in the Dev account (the role-assuming account) needs to access the Pro d account (the role-owning account). Here's how it wo rks: Alice in the Dev account assumes an IAM role (Write Access) in the Prod account by calling AssumeRole. 
STS returns a set of temporary security credentials . Alice uses the temporary security credentials to ac cess services and resources in the Prod account. Al ice could, for example, make calls to Amazon S3 and Ama zon EC2, which are granted by the WriteAccess role. Using AWS Cognito to issue JSON Web Tokens (JWT) is incorrect because the Amazon Cognito service is primarily used for user authentication a nd not for providing access to your AWS resources. A JSON Web Token (JWT) is meant to be used for user authen tication and session management. Using AWS SSO is incorrect. Although the AWS SSO se rvice uses STS, it does not issue short-lived credentials by itself. AWS Single Sign-On (SSO) is a cloud SSO service that makes it easy to centrally manage SSO access to multiple AWS accounts and business ap plications. The option that says All of the above is incorrect as only STS has the ability to provide temporary se curity credentials. Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id _credentials_temp.html AWS Identity Services Overview: https://www.youtube.com/watch?v=AIdUw0i8rr0 Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", "references": "" }, { "question": ": A digital media company shares static content to it s premium users around the world and also to their partners who syndicate their media files. The compa ny is looking for ways to reduce its server costs a nd securely deliver their data to their customers glob ally with low latency. Which combination of services should be used to pro vide the MOST suitable and cost-effective architecture? (Select TWO.)", "options": [ "A. Amazon S3", "B. AWS Global Accelerator", "C. AWS Lambda", "D. Amazon CloudFront" ], "correct": "", "explanation": "Explanation Amazon CloudFront is a fast content delivery networ k (CDN) service that securely delivers data, videos , applications, and APIs to customers globally with l ow latency, high transfer speeds, all within a deve loper- friendly environment. CloudFront is integrated with AWS both physical lo cations that are directly connected to the AWS glob al infrastructure, as well as other AWS services. Clou dFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code cl oser to customers' users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2 or Elastic Load Balancing, you don't pay for any data transferred between thes e services and CloudFront. Amazon S3 is object storage built to store and retr ieve any amount of data from anywhere on the Intern et. It's a simple storage service that offers an extremely dur able, highly available, and infinitely scalable dat a storage infrastructure at very low costs. AWS Global Accelerator and Amazon CloudFront are se parate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (su ch as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. 
Global Accelerator is a good f it for non-HTTP use cases, such as gaming (UDP), IoT (MQTT ), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both ser vices integrate with AWS Shield for DDoS protection. Hence, the correct options are Amazon CloudFront an d Amazon S3. AWS Fargate is incorrect because this service is ju st a serverless compute engine for containers that work with both Amazon Elastic Container Service (ECS) and Ama zon Elastic Kubernetes Service (EKS). Although this service is more cost-effective than i ts server-based counterpart, Amazon S3 still costs way less than Fargate, especially for storing static content . AWS Lambda is incorrect because this simply lets yo u run your code serverless, without provisioning or managing servers. Although this is also a cost-effe ctive service since you have to pay only for the co mpute time you consume, you can't use this to store static con tent or as a Content Delivery Network (CDN). A bett er combination is Amazon CloudFront and Amazon S3. AWS Global Accelerator is incorrect because this se rvice is more suitable for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Moreover, there is no direct way that yo u can integrate AWS Global Accelerator with Amazon S3. It 's more suitable to use Amazon CloudFront instead in this scenario. References: https://aws.amazon.com/premiumsupport/knowledge-cen ter/cloudfront-serve-static-website/ https://aws.amazon.com/blogs/networking-and-content -delivery/amazon-s3-amazon-cloudfront-a-match- made-in-the-cloud/ https://aws.amazon.com/global-accelerator/faqs/", "references": "" }, { "question": ": A Solutions Architect is building a cloud infrastru cture where EC2 instances require access to various AWS services such as S3 and Redshift. The Architect wil l also need to provide access to system administrators so they can deploy and test their ch anges. Which configuration should be used to ensure that t he access to the resources is secured and not compromised? (Select TWO.)", "options": [ "A. Store the AWS Access Keys in ACM.", "B. Store the AWS Access Keys in the EC2 instance.", "C. Enable Multi-Factor Authentication.", "D. Assign an IAM role to the Amazon EC2 instance." ], "correct": "", "explanation": "Explanation In this scenario, the correct answers are: - Enable Multi-Factor Authentication - Assign an IAM role to the Amazon EC2 instance Always remember that you should associate IAM roles to EC2 instances and not an IAM user, for the purpose of accessing other AWS services. IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the ap plications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make AP I requests using IAM roles. AWS Multi-Factor Authentication (MFA) is a simple b est practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password ( the first factor--what they know), as well as for a n authentication code from their AWS MFA device (the second factor--what they have). 
Taken together, these multiple factors provide increased security f or your AWS account settings and resources. You can enable MFA for your AWS account and for individual IAM use rs you have created under your account. MFA can also be used to control access to AWS servi ce APIs. Storing the AWS Access Keys in the EC2 instance is incorrect. This is not recommended by AWS as it can be compromised. Instead of storing access keys on an EC2 instance for use by applications that run on the instance and make AWS API requests, you can use an IAM role to provide temporary access keys for these applications. Assigning an IAM user for each Amazon EC2 Instance is incorrect because there is no need to create an IAM user for this scenario since IAM roles already provide greater flexibility and easier management. Storing the AWS Access Keys in ACM is incorrect bec ause ACM is just a service that lets you easily provision, manage, and deploy public and private SS L/TLS certificates for use with AWS services and yo ur internal connected resources. It is not used as a s ecure storage for your access keys. References: https://aws.amazon.com/iam/details/mfa/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /iam-roles-for-amazon-ec2.html Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/", "references": "" }, { "question": ": A company plans to migrate all of their application s to AWS. The Solutions Architect suggested to stor e all the data to EBS volumes. The Chief Technical Office r is worried that EBS volumes are not appropriate f or the existing workloads due to compliance requiremen ts, downtime scenarios, and IOPS performance. Which of the following are valid points in proving that EBS is the best service to use for migration? (Select TWO.)", "options": [ "A. EBS volumes can be attached to any EC2 Instanc e in any Availability Zone.", "B. When you create an EBS volume in an Availabili ty Zone, it is automatically replicated on a", "C. An EBS volume is off-instance storage that can persist independently from the life of an", "D. EBS volumes support live configuration changes while in production which means that you" ], "correct": "", "explanation": "Explanation An Amazon EBS volume is a durable, block-level stor age device that you can attach to a single EC2 instance. You can use EBS volumes as primary storag e for data that requires frequent updates, such as the system drive for an instance or storage for a datab ase application. You can also use them for throughp ut- intensive applications that perform continuous disk scans. EBS volumes persist independently from the running life of an EC2 instance. Here is a list of important information about EBS V olumes: - When you create an EBS volume in an Availability Zone, it is automatically replicated within that zo ne to prevent data loss due to a failure of any single ha rdware component. - An EBS volume can only be attached to one EC2 ins tance at a time. - After you create a volume, you can attach it to a ny EC2 instance in the same Availability Zone - An EBS volume is off-instance storage that can pe rsist independently from the life of an instance. Y ou can specify not to terminate the EBS volume when yo u terminate the EC2 instance during instance creati on. - EBS volumes support live configuration changes wh ile in production which means that you can modify the volume type, volume size, and IOPS capacity wit hout service interruptions. 
- Amazon EBS encryption uses 256-bit Advanced Encry ption Standard algorithms (AES-256) - EBS Volumes offer 99.999% SLA. The option that says: When you create an EBS volume in an Availability Zone, it is automatically replicated on a separate AWS region to prevent data loss due to a failure of any single hardware component is incorrect because when you create an E BS volume in an Availability Zone, it is automatically replicated within that zone only, and not on a separate AWS region, to prevent data loss due to a failure of any single hardware component. The option that says: EBS volumes can be attached t o any EC2 Instance in any Availability Zone is incorrect as EBS volumes can only be attached to an EC2 instance in the same Availability Zone. The option that says: Amazon EBS provides the abili ty to create snapshots (backups) of any EBS volume and write a copy of the data in the volume t o Amazon RDS, where it is stored redundantly in multiple Availability Zones is almost correct. But instead of storing the volume to Amazon RDS, the EBS Volume snapshots are actually sent to Amazon S3. References: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ EBSVolumes.html https://aws.amazon.com/ebs/features/ Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/ Here is a short video tutorial on EBS: https://youtu.be/ljYH5lHQdxo", "references": "" }, { "question": ": A company needs to assess and audit all the configu rations in their AWS account. It must enforce stric t compliance by tracking all configuration changes ma de to any of its Amazon S3 buckets. Publicly accessible S3 buckets should also be identified aut omatically to avoid data breaches. Which of the following options will meet this requi rement?", "options": [ "A. Use AWS CloudTrail and review the event history of your AWS account. B. Use AWS Trusted Advisor to analyze your AWS envi ronment.", "C. Use AWS IAM to generate a credential report.", "D. Use AWS Config to set up a rule in your AWS ac count.", "A. SQS", "B. SWF", "C. SES", "D. Lambda function" ], "correct": "D. Use AWS Config to set up a rule in your AWS ac count.", "explanation": "Explanation The Amazon S3 notification feature enables you to r eceive notifications when certain events happen in your bucket. To enable notifications, you must firs t add a notification configuration identifying the events you want Amazon S3 to publish, and the destinations whe re you want Amazon S3 to send the event notifications. Amazon S3 supports the following destinations where it can publish events: Amazon Simple Notification Service (Amazon SNS) topic - A web service that coo rdinates and manages the delivery or sending of messages to subscribing endpoints or clients. Amazon Simple Queue Service (Amazon SQS) queue - Of fers reliable and scalable hosted queues for storing messages as they travel between computer. AWS Lambda - AWS Lambda is a compute service where you can upload your code and the service can run the code on your behalf using the AWS infrastru cture. You package up and upload your custom code t o AWS Lambda when you create a Lambda function Kinesis is incorrect because this is used to collec t, process, and analyze real-time, streaming data s o you can get timely insights and react quickly to new inform ation, and not used for event notifications. You ha ve to use SNS, SQS or Lambda. 
SES is incorrect because this is mainly used for se nding emails designed to help digital marketers and application developers send marketing, notification , and transactional emails, and not for sending eve nt notifications from S3. You have to use SNS, SQS or Lambda. SWF is incorrect because this is mainly used to bui ld applications that use Amazon's cloud to coordina te work across distributed components and not used as a way to trigger event notifications from S3. You have t o use SNS, SQS or Lambda. Here's what you need to do in order to start using this new feature with your application: Create the queue, topic, or Lambda function (which I'll call the target for brevity) if necessary. Grant S3 permission to publish to the target or inv oke the Lambda function. For SNS or SQS, you do thi s by applying an appropriate policy to the topic or the queue. For Lambda, you must create and supply an IA M role, then associate it with the Lambda function. Arrange for your application to be invoked in respo nse to activity on the target. As you will see in a moment, you have several options here. Set the bucket's Notification Configuration to poin t to the target. Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/Not ificationHowTo.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", "references": "" }, { "question": ": A company is building an internal application that serves as a repository for images uploaded by a cou ple of users. Whenever a user uploads an image, it would b e sent to Kinesis Data Streams for processing befor e it is stored in an S3 bucket. If the upload was successfu l, the application will return a prompt informing t he user that the operation was successful. The entire processing typically takes about 5 minutes to finis h. Which of the following options will allow you to as ynchronously process the request to the application from upload request to Kinesis, S3, and return a re ply in the most cost-effective manner?", "options": [ "A. Replace the Kinesis Data Streams with an Amazo n SQS queue. Create a Lambda function", "B. Use a combination of SQS to queue the requests and then asynchronously process them", "C. Use a combination of Lambda and Step Functions to orchestrate service components and", "D. Use a combination of SNS to buffer the request s and then asynchronously process them" ], "correct": "A. Replace the Kinesis Data Streams with an Amazo n SQS queue. Create a Lambda function", "explanation": "Explanation AWS Lambda supports the synchronous and asynchronou s invocation of a Lambda function. You can control the invocation type only when you invoke a Lambda function. When you use an AWS service as a trigger, the invocation type is predetermined for e ach service. You have no control over the invocatio n type that these event sources use when they invoke your Lambda function. Since processing only takes 5 minutes, Lambda is also a cost-effective choice. You can use an AWS Lambda function to process messa ges in an Amazon Simple Queue Service (Amazon SQS) queue. Lambda event source mappings support st andard queues and first-in, first-out (FIFO) queues . With Amazon SQS, you can offload tasks from one com ponent of your application by sending them to a queue and processing them asynchronously. 
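As a rough sketch of this pattern, the Lambda handler below consumes SQS messages delivered through an SQS event source mapping and writes each image to S3. The bucket name and the message payload format are assumptions for illustration, not part of the original scenario.

    import json
    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket; the function is wired to the queue through an SQS
    # event source mapping, so Lambda polls the queue and invokes this handler
    # with batches of messages.
    BUCKET = "image-repository-bucket"

    def handler(event, context):
        for record in event["Records"]:          # one record per SQS message
            body = json.loads(record["body"])
            s3.put_object(
                Bucket=BUCKET,
                Key=body["image_key"],
                Body=bytes.fromhex(body["image_hex"]),  # assumed payload format
            )
        return {"processed": len(event["Records"])}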
Kinesis Data Streams is a real-time data streaming service that requires the provisioning of shards. A mazon SQS is a cheaper option because you only pay for wh at you use. Since there is no requirement for real- time processing in the scenario given, replacing Kinesis Data Streams with Amazon SQS would save more costs . Hence, the correct answer is: Replace the Kinesis s tream with an Amazon SQS queue. Create a Lambda function that will asynchronously process th e requests. Using a combination of Lambda and Step Functions to orchestrate service components and asynchronously process the requests is incorrect. T he AWS Step Functions service lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Although thi s can be a valid solution, it is not cost-effective since the application does not have a lot of components to orchestrate. Lambda functions can effectively meet the requirements in this scenario without using Ste p Functions. This service is not as cost-effective as Lambda. Using a combination of SQS to queue the requests an d then asynchronously processing them using On-Demand EC2 Instances and Using a combination of SNS to buffer the requests and then asynchronously processing them using On-Demand EC2 Instances are both incorrect as using On- Demand EC2 instances is not cost-effective. It is b etter to use a Lambda function instead. References: https://docs.aws.amazon.com/lambda/latest/dg/welcom e.html https://docs.aws.amazon.com/lambda/latest/dg/lambda -invocation.html https://aws.amazon.com/blogs/compute/new-aws-lambda -controls-for-stream-processing-and- asynchronous-invocations/ AWS Lambda Overview - Serverless Computing in AWS: https://www.youtube.com/watch?v=bPVX1zHwAnY Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", "references": "" }, { "question": ": A media company hosts large volumes of archive data that are about 250 TB in size on their internal servers. They have decided to move these data to S3 because of its durability and redundancy. The company currently has a 100 Mbps dedicated line con necting their head office to the Internet. Which of the following is the FASTEST and the MOST cost-effective way to import all these data to Amazon S3?", "options": [ "A. Upload it directly to S3", "B. Use AWS Snowmobile to transfer the data over t o S3.", "C. Establish an AWS Direct Connect connection the n transfer the data over to S3.", "D. Order multiple AWS Snowball devices to upload the files to Amazon S3." ], "correct": "D. Order multiple AWS Snowball devices to upload the files to Amazon S3.", "explanation": "Explanation AWS Snowball is a petabyte-scale data transport sol ution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Usin g Snowball addresses common challenges with large- scale data transfers including high network costs, long transfer times, and security concerns. Transfe rring data with Snowball is simple, fast, secure, and can be a s little as one-fifth the cost of high-speed Intern et. Snowball is a strong choice for data transfer if yo u need to more securely and quickly transfer teraby tes to many petabytes of data to AWS. 
Snowball can also be the right choice if you don't want to make expensi ve upgrades to your network infrastructure, if you fre quently experience large backlogs of data, if you'r e located in a physically isolated environment, or if you're in an area where high-speed Internet connections are n ot available or cost-prohibitive. As a rule of thumb, if it takes more than one week to upload your data to AWS using the spare capacity of your existing Internet connection, then you should consi der using Snowball. For example, if you have a 100 Mb connection that you can solely dedicate to transfer ring your data and need to transfer 100 TB of data, it takes more than 100 days to complete data transfer over t hat connection. You can make the same transfer by u sing multiple Snowballs in about a week. Hence, ordering multiple AWS Snowball devices to up load the files to Amazon S3 is the correct answer. Uploading it directly to S3 is incorrect since this would take too long to finish due to the slow Inte rnet connection of the company. Establishing an AWS Direct Connect connection then transferring the data over to S3 is incorrect since provisioning a line for Direct Connect would take t oo much time and might not give you the fastest dat a transfer solution. In addition, the scenario didn't warrant an establishment of a dedicated connection from you r on- premises data center to AWS. The primary goal is to just do a one-time migration of data to AWS which can be accomplished by using AWS Snowball devices. Using AWS Snowmobile to transfer the data over to S 3 is incorrect because Snowmobile is more suitable if you need to move extremely large amounts of data to AWS or need to transfer up to 100PB of data. Th is will be transported on a 45-foot long ruggedized shippin g container, pulled by a semi-trailer truck. Take n ote that you only need to migrate 250 TB of data, hence, thi s is not the most suitable and cost-effective solut ion. References: https://aws.amazon.com/snowball/ https://aws.amazon.com/snowball/faqs/ S3 Transfer Acceleration vs Direct Connect vs VPN v s Snowball vs Snowmobile: https://tutorialsdojo.com/s3-transfer-acceleration- vs-direct-connect-vs-vpn-vs-snowball-vs-snowmobile/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/", "references": "" }, { "question": ": A company is working with a government agency to im prove traffic planning and maintenance of roadways to prevent accidents. The proposed solution is to m anage the traffic infrastructure in real-time, aler t traffic engineers and emergency response teams when problem s are detected, and automatically change traffic signals to get emergency personnel to accident scen es faster by using sensors and smart devices. Which AWS service will allow the developers of the agency to connect the smart devices to the cloud- based applications?", "options": [ "A. AWS Elastic Beanstalk", "B. AWS CloudFormation", "C. Amazon Elastic Container Service", "D. AWS IoT Core" ], "correct": "D. AWS IoT Core", "explanation": "Explanation AWS IoT Core is a managed cloud service that lets c onnected devices easily and securely interact with cloud applications and other devices. AWS IoT Core provides secure communication and data processing across different kinds of connected devices and loc ations so you can easily build IoT applications. AWS IoT Core allows you to connect multiple devices to the cloud and to other devices without requirin g you to deploy or manage any servers. 
You can also filter, transform, and act upon device data on the fly base d on the rules you define. With AWS IoT Core, your applicati ons can keep track of and communicate with all of your devices, all the time, even when t hey aren't connected. Hence, the correct answer is: AWS IoT Core. AWS CloudFormation is incorrect because this is mai nly used for creating and managing the architecture and not for handling connected devices. You have to use AWS IoT Core instead. AWS Elastic Beanstalk is incorrect because this is just an easy-to-use service for deploying and scali ng web applications and services developed with Java, .NET , PHP, Node.js, Python, and other programming languages. Elastic Beanstalk can't be used to conne ct smart devices to cloud-based applications. Amazon Elastic Container Service is incorrect becau se this is mainly used for creating and managing docker instances and not for handling devices. References: https://aws.amazon.com/iot-core/ https://aws.amazon.com/iot/", "references": "" }, { "question": ": A commercial bank has a forex trading application. They created an Auto Scaling group of EC2 instances that allow the bank to cope with the current traffi c and achieve cost-efficiency. They want the Auto S caling group to behave in such a way that it will follow a predefined set of parameters before it scales down the number of EC2 instances, which protects the system from unintended slowdown or unavailability. Which of the following statements are true regardin g the cooldown period? (Select TWO.)", "options": [ "A. Its default value is 300 seconds.", "B. It ensures that the Auto Scaling group does no t launch or terminate additional EC2", "C. It ensures that the Auto Scaling group launche s or terminates additional EC2 instances", "D. Its default value is 600 seconds." ], "correct": "", "explanation": "Explanation In Auto Scaling, the following statements are corre ct regarding the cooldown period: It ensures that the Auto Scaling group does not lau nch or terminate additional EC2 instances before th e previous scaling activity takes effect. Its default value is 300 seconds. It is a configurable setting for your Auto Scaling group. The following options are incorrect: - It ensures that before the Auto Scaling group sca les out, the EC2 instances have ample time to cooldown. - It ensures that the Auto Scaling group launches o r terminates additional EC2 instances without any downtime. - Its default value is 600 seconds. These statements are inaccurate and don't depict wh at the word \"cooldown\" actually means for Auto Scaling. The cooldown period is a configurable sett ing for your Auto Scaling group that helps to ensur e that it doesn't launch or terminate additional instances be fore the previous scaling activity takes effect. Af ter the Auto Scaling group dynamically scales using a simple sca ling policy, it waits for the cooldown period to co mplete before resuming scaling activities. The figure below demonstrates the scaling cooldown:Reference: http://docs.aws.amazon.com/autoscaling/latest/userg uide/as-instance-termination.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", "references": "" }, { "question": ": An organization needs to control the access for sev eral S3 buckets. They plan to use a gateway endpoin t to allow access to trusted buckets. 
Which of the following could help you achieve this requirement?", "options": [ "A. Generate an endpoint policy for trusted S3 buc kets.", "B. Generate a bucket policy for trusted VPCs.", "C. Generate an endpoint policy for trusted VPCs.", "D. Generate a bucket policy for trusted S3 bucket s." ], "correct": "A. Generate an endpoint policy for trusted S3 buc kets.", "explanation": "Explanation A VPC endpoint enables you to privately connect you r VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink withou t requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Insta nces in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. When you create a VPC endpoint, you can attach an e ndpoint policy that controls access to the service to which you are connecting. You can modify the endpoi nt policy attached to your endpoint and add or remove the route tables used by the endpoint. An en dpoint policy does not override or replace IAM user policies or service-specific policies (such as S3 bucket pol icies). It is a separate policy for controlling acc ess from the endpoint to the specified service. We can use a bucket policy or an endpoint policy to allow the traffic to trusted S3 buckets. The optio ns that have 'trusted S3 buckets' key phrases will be the p ossible answer in this scenario. It would take you a lot of time to configure a bucket policy for each S3 bucket ins tead of using a single endpoint policy. Therefore, you should use an endpoint policy to control the traffic to th e trusted Amazon S3 buckets. Hence, the correct answer is: Generate an endpoint policy for trusted S3 buckets. The option that says: Generate a bucket policy for trusted S3 buckets is incorrect. Although this is a valid solution, it takes a lot of time to set up a bucket policy for each and every S3 bucket. This can simp ly be accomplished by creating an S3 endpoint policy. The option that says: Generate a bucket policy for trusted VPCs is incorrect because you are generatin g a policy for trusted VPCs. Remember that the scenario only requires you to allow the traffic for trusted S3 buckets, and not to the VPCs. The option that says: Generate an endpoint policy f or trusted VPCs is incorrect because it only allows access to trusted VPCs, and not to trusted Amazon S3 buckets References: https://docs.aws.amazon.com/vpc/latest/userguide/vp c-endpoints-s3.html https://aws.amazon.com/premiumsupport/knowledge-cen ter/connect-s3-vpc-endpoint/ Amazon VPC Overview: https://www.youtube.com/watch?v=oIDHKeNxvQQ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": "A company has an enterprise web application hosted on Amazon ECS Docker containers that use an Amazon FSx for Lustre filesystem for its high-perfo rmance computing workloads. A warm standby environment is running in another AWS region for di saster recovery. A Solutions Architect was assigned to design a system that will automatically route the l ive traffic to the disaster recovery (DR) environme nt only in the event that the primary application stack exp eriences an outage. What should the Architect do to satisfy this requir ement?", "options": [ "A. Set up a CloudWatch Events rule to monitor the primary Route 53 DNS endpoint and", "B. 
Set up a Weighted routing policy configuration in Route 53 by adding health checks on both", "C. Set up a failover routing policy configuration in Route 53 by adding a health check on the", "D. Set up a CloudWatch Alarm to monitor the primary Route 53 DNS endpoint and create a" ], "correct": "C. Set up a failover routing policy configuration in Route 53 by adding a health check on the", "explanation": "Explanation Use an active-passive failover configuration when y ou want a primary resource or group of resources to be available majority of the time and you want a secon dary resource or group of resources to be on standb y in case all the primary resources become unavailable. When responding to queries, Route 53 includes only the healthy primary resources. If all the primary resou rces are unhealthy, Route 53 begins to include only the healthy secondary resources in response to DNS quer ies. To create an active-passive failover configuration with one primary record and one secondary record, y ou just create the records and specify Failover for the rou ting policy. When the primary resource is healthy, Route 53 responds to DNS queries using the primary record. W hen the primary resource is unhealthy, Route 53 responds to DNS queries using the secondar y record. You can configure a health check that monitors an e ndpoint that you specify either by IP address or by domain name. At regular intervals that you specify, Route 53 submits automated requests over the Internet to your application, server, or other resource to verify th at it's reachable, available, and functional. Optio nally, you can configure the health check to make requests similar to those that your users make, such as requesting a web page from a specific URL. When Route 53 checks the health of an endpoint, it sends an HTTP, HTTPS, or TCP request to the IP address and port that you specified when you create d the health check. For a health check to succeed, your router and firewall rules must allow inbound traffi c from the IP addresses that the Route 53 health ch eckers use. Hence, the correct answer is: Set up a failover rou ting policy configuration in Route 53 by adding a health check on the primary service endpoint. Confi gure Route 53 to direct the DNS queries to the secondary record when the primary resource is unhea lthy. Configure the network access control list and the route table to allow Route 53 to send reque sts to the endpoints specified in the health checks . Enable the Evaluate Target Health option by setting it to Yes. The option that says: Set up a Weighted routing pol icy configuration in Route 53 by adding health checks on both the primary stack and the DR environ ment. Configure the network access control list and the route table to allow Route 53 to send reque sts to the endpoints specified in the health checks . Enable the Evaluate Target Health option by setting it to Yes is incorrect because Weighted routing simply lets you associate multiple resources with a single domain name (tutorialsdojo.com) or subdomai n name (blog.tutorialsdojo.com) and choose how much traffi c is routed to each resource. This can be useful fo r a variety of purposes, including load balancing and t esting new versions of software, but not for a fail over configuration. Remember that the scenario says that the solution should automatically route the live t raffic to the disaster recovery (DR) environment only in the event that the primary application stack experi ences an outage. 
The Weighted routing configuration described above would incorrectly distribute the traffic across both the primary and DR environments. The option that says: Set up a CloudWatch Alarm to monitor the primary Route 53 DNS endpoint and create a custom Lambda function. Execute the ChangeResourceRecordSets API call using the function to initiate the failover to the secondary DNS record is incorrect because setting up a CloudWatch Alarm and using the Route 53 API is not applicable nor useful at all in this scenario. Remember that a CloudWatch Alarm is primarily used for monitoring CloudWatch metrics. You have to use a Failover routing policy instead. The option that says: Set up a CloudWatch Events rule to monitor the primary Route 53 DNS endpoint and create a custom Lambda function. Execute the ChangeResourceRecordSets API call using the function to initiate the failover to the secondary DNS record is incorrect because the Amazon CloudWatch Events service is commonly used to deliver a near real-time stream of system events that describe changes in some Amazon Web Services (AWS) resources. There is no direct way for CloudWatch Events to monitor the status of your Route 53 endpoints. You have to configure a health check and a failover configuration in Route 53 instead to satisfy the requirement in this scenario. References: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-types.html https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/health-checks-types.html https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-router-firewall-rules.html Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/", "references": "" }, { "question": ": A Solutions Architect is working for a company that uses Chef configuration management in their data center. She needs to leverage their existing Chef recipes in AWS. Which of the following services should she use?", "options": [ "A. AWS CloudFormation", "B. AWS OpsWorks", "C. Amazon Simple Workflow Service", "D. AWS Elastic Beanstalk" ], "correct": "B. AWS OpsWorks", "explanation": "Explanation AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. OpsWorks has three offerings - AWS OpsWorks for Chef Automate, AWS OpsWorks for Puppet Enterprise, and AWS OpsWorks Stacks. Amazon Simple Workflow Service is incorrect because AWS SWF is a fully-managed state tracker and task coordinator in the Cloud. It does not let you leverage Chef recipes. AWS Elastic Beanstalk is incorrect because this handles an application's deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. It does not let you leverage Chef recipes just like Amazon SWF. AWS CloudFormation is incorrect because this is a service that lets you create a collection of related AWS resources and provision them in a predictable fashion using infrastructure as code.
It does not let you leverage Chef recipes, just like Amazon SWF and AWS Elastic Beanstalk.", "references": "https://aws.amazon.com/opsworks/ Check out this AWS OpsWorks Cheat Sheet: https://tutorialsdojo.com/aws-opsworks/ Elastic Beanstalk vs CloudFormation vs OpsWorks vs CodeDeploy: https://tutorialsdojo.com/elastic-beanstalk-vs-cloudformation-vs-opsworks-vs-codedeploy/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/" }, { "question": ": An organization is currently using a tape backup solution to store its application data on-premises. They plan to use a cloud storage service to preserve the backup data for up to 10 years that may be accessed about once or twice a year. Which of the following is the most cost-effective option to implement this solution?", "options": [ "A. Use AWS Storage Gateway to backup the data directly to Amazon S3 Glacier Deep Archive.", "B. Order an AWS Snowball Edge appliance to import the backup directly to Amazon S3", "C. Use AWS Storage Gateway to backup the data directly to Amazon S3 Glacier.", "D. Use Amazon S3 to store the backup data and add a lifecycle rule to transition the current" ], "correct": "A. Use AWS Storage Gateway to backup the data directly to Amazon S3 Glacier Deep Archive.", "explanation": "Explanation Tape Gateway enables you to replace using physical tapes on-premises with virtual tapes in AWS without changing existing backup workflows. Tape Gateway supports all leading backup applications and caches virtual tapes on-premises for low-latency data access. Tape Gateway encrypts data between the gateway and AWS for secure data transfer and compresses data and transitions virtual tapes between Amazon S3 and Amazon S3 Glacier, or Amazon S3 Glacier Deep Archive, to minimize storage costs. The scenario requires you to back up your application data to a cloud storage service for long-term retention of data that will be retained for 10 years. Since it uses a tape backup solution, an option that uses AWS Storage Gateway must be the possible answer. Tape Gateway can move your virtual tapes to the Amazon S3 Glacier or Amazon S3 Glacier Deep Archive storage class, enabling you to further reduce the monthly cost of storing long-term data in the cloud by up to 75%. Hence, the correct answer is: Use AWS Storage Gateway to backup the data directly to Amazon S3 Glacier Deep Archive. The option that says: Use AWS Storage Gateway to backup the data directly to Amazon S3 Glacier is incorrect. Although this is a valid solution, moving to S3 Glacier is more expensive than directly backing it up to Glacier Deep Archive. The option that says: Order an AWS Snowball Edge appliance to import the backup directly to Amazon S3 Glacier is incorrect because Snowball Edge can't directly integrate backups to S3 Glacier. Moreover, you have to use the Amazon S3 Glacier Deep Archive storage class as it is more cost-effective than the regular Glacier class. The option that says: Use Amazon S3 to store the backup data and add a lifecycle rule to transition the current version to Amazon S3 Glacier is incorrect. Although this is a possible solution, it is difficult to directly integrate a tape backup solution with S3 without using Storage Gateway.
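For reference, a minimal sketch of the Tape Gateway approach using the AWS SDK for Python; the gateway ARN and tape barcode are hypothetical placeholders, and the DEEP_ARCHIVE pool places archived virtual tapes in S3 Glacier Deep Archive.

```python
import boto3

storagegateway = boto3.client("storagegateway")

# Create a virtual tape on an existing Tape Gateway. Tapes placed in the
# DEEP_ARCHIVE pool are archived to Amazon S3 Glacier Deep Archive.
storagegateway.create_tape_with_barcode(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE",  # hypothetical
    TapeSizeInBytes=100 * 1024**3,   # 100 GiB virtual tape
    TapeBarcode="BACKUP01",
    PoolId="DEEP_ARCHIVE",
)
```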
References: https://aws.amazon.com/storagegateway/faqs/ https://aws.amazon.com/s3/storage-classes/ AWS Storage Gateway Overview: https://www.youtube.com/watch?v=pNb7xOBJjHE Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/", "references": "" }, { "question": ": Both historical records and frequently accessed data are stored on an on-premises storage system. The amount of current data is growing at an exponential rate. As the storage's capacity is nearing its limit, the company's Solutions Architect has decided to move the historical records to AWS to free up space for the active data. Which of the following architectures deliver the best solution in terms of cost and operational management?", "options": [ "A. Use AWS Storage Gateway to move the historical records from on-premises to AWS.", "B. Use AWS Storage Gateway to move the historical records from on-premises to AWS.", "C. Use AWS DataSync to move the historical records from on-premises to AWS. Choose", "D. Use AWS DataSync to move the historical records from on-premises to AWS. Choose" ], "correct": "D. Use AWS DataSync to move the historical records from on-premises to AWS. Choose", "explanation": "Explanation AWS DataSync makes it simple and fast to move large amounts of data online between on-premises storage and Amazon S3, Amazon Elastic File System (Amazon EFS), or Amazon FSx for Windows File Server. Manual tasks related to data transfers can slow down migrations and burden IT operations. DataSync eliminates or automatically handles many of these tasks, including scripting copy jobs, scheduling, and monitoring transfers, validating data, and optimizing network utilization. The DataSync software agent connects to your Network File System (NFS), Server Message Block (SMB) storage, and your self-managed object storage, so you don't have to modify your applications. DataSync can transfer hundreds of terabytes and millions of files at speeds up to 10 times faster than open-source tools, over the Internet or AWS Direct Connect links. You can use DataSync to migrate active data sets or archives to AWS, transfer data to the cloud for timely analysis and processing, or replicate data to AWS for business continuity. Getting started with DataSync is easy: deploy the DataSync agent, connect it to your file system, select your AWS storage resources, and start moving data between them. You pay only for the data you move. Since the problem is mainly about moving historical records from on-premises to AWS, using AWS DataSync is a more suitable solution. You can use DataSync to move cold data from expensive on-premises storage systems directly to durable and secure long-term storage, such as Amazon S3 Glacier or Amazon S3 Glacier Deep Archive. Hence, the correct answer is the option that says: Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data. The following options are both incorrect: - Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data. - Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days.
Although you can copy data from on-premises to AWS with Storage Gateway, it is not suitable for transferring large sets of data to AWS. Storage Gateway is mainly used in providing low-latency access to data by caching frequently accessed data on-premises while storing archive data securely and durably in Amazon cloud storage services. Storage Gateway optimizes data transfer to AWS by sending only changed data and compressing data. The option that says: Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Standard to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days is incorrect because, with AWS DataSync, you can transfer data from on-premises directly to Amazon S3 Glacier Deep Archive. You don't have to configure the S3 lifecycle policy and wait for 30 days to move the data to Glacier Deep Archive. References: https://aws.amazon.com/datasync/faqs/ https://aws.amazon.com/storagegateway/faqs/ Check out these AWS DataSync and Storage Gateway Cheat Sheets: https://tutorialsdojo.com/aws-datasync/ https://tutorialsdojo.com/aws-storage-gateway/ AWS Storage Gateway vs DataSync: https://www.youtube.com/watch?v=tmfe1rO-AUs", "references": "" }, { "question": ": A company is running a multi-tier web application farm in a virtual private cloud (VPC) that is not connected to their corporate network. They are connecting to the VPC over the Internet to manage the fleet of Amazon EC2 instances running in both the public and private subnets. The Solutions Architect has added a bastion host with Microsoft Remote Desktop Protocol (RDP) access to the application instance security groups, but the company wants to further limit administrative access to all of the instances in the VPC. Which of the following bastion host deployment options will meet this requirement?", "options": [ "A. Deploy a Windows Bastion host on the corporate network that has RDP access to all EC2", "B. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow", "C. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow", "D. Deploy a Windows Bastion host with an Elastic IP address in the private subnet, and" ], "correct": "C. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow", "explanation": "Explanation The correct answer is to deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow RDP access to the bastion only from the corporate IP addresses. A bastion host is a special purpose computer on a network specifically designed and configured to withstand attacks. If you have a bastion host in AWS, it is basically just an EC2 instance. It should be in a public subnet with either a public or Elastic IP address with sufficient RDP or SSH access defined in the security group. Users log on to the bastion host via SSH or RDP and then use that session to manage other hosts in the private subnets. Deploying a Windows Bastion host on the corporate network that has RDP access to all EC2 instances in the VPC is incorrect since you do not deploy the Bastion host to your corporate network. It should be in the public subnet of a VPC.
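For reference, a minimal sketch of restricting the bastion's RDP access to the corporate range, using the AWS SDK for Python; the security group ID and CIDR block are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow RDP (TCP 3389) to the bastion host only from the corporate public range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # hypothetical bastion security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3389,
        "ToPort": 3389,
        "IpRanges": [{"CidrIp": "198.51.100.0/24", "Description": "Corporate offices"}],
    }],
)
```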
Deploying a Windows Bastion host with an Elastic IP address in the private subnet, and restricting RDP access to the bastion from only the corporate public IP addresses is incorrect since it should be deployed in a public subnet, not a private subnet. Deploying a Windows Bastion host with an Elastic IP address in the public subnet and allowing SSH access to the bastion from anywhere is incorrect. Since it is a Windows bastion, you should allow RDP access and not SSH, as SSH is mainly used for Linux-based systems.", "references": "https://docs.aws.amazon.com/quickstart/latest/linux-bastion/architecture.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/" }, { "question": ": A company is building a transcription service in which a fleet of EC2 worker instances processes an uploaded audio file and generates a text file as an output. They must store both of these frequently accessed files in the same durable storage until the text file is retrieved by the uploader. Due to an expected surge in demand, they have to ensure that the storage is scalable and can be retrieved within minutes. Which storage option in AWS can they use in this situation, which is both cost-efficient and scalable?", "options": [ "A. A single Amazon S3 bucket", "B. Amazon S3 Glacier Deep Archive", "C. Multiple Amazon EBS volume with snapshots", "D. Multiple instance stores" ], "correct": "A. A single Amazon S3 bucket", "explanation": "Explanation Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. It provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9's) of durability, and stores data for millions of applications for companies all around the world. In this scenario, the requirement is to have cost-efficient and scalable storage. Among the given options, the best option is to use Amazon S3. It's a simple storage service that offers a highly-scalable, reliable, and low-latency data storage infrastructure at very low costs. Hence, the correct answer is: A single Amazon S3 bucket. The option that says: Multiple Amazon EBS volume with snapshots is incorrect because Amazon S3 is more cost-efficient than EBS volumes. The option that says: Multiple instance stores is incorrect. Just like the option above, you must use Amazon S3 since it is more scalable and cost-efficient than instance store volumes. The option that says: Amazon S3 Glacier Deep Archive is incorrect because this is mainly used for data archives with data retrieval times that can take more than 12 hours. Hence, it is not suitable for the transcription service where the data are stored and frequently accessed. References: https://aws.amazon.com/s3/pricing/ https://docs.aws.amazon.com/AmazonS3/latest/gsg/GetStartedWithS3.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A company has a static corporate website hosted in a standard S3 bucket and a new web domain name that was registered using Route 53. You are instructed by your manager to integrate these two services in order to successfully launch their corporate website. What are the prerequisites when routing traffic using Amazon Route 53 to a website that is hosted in an Amazon S3 Bucket?
(Select TWO.)", "options": [ "A. The S3 bucket must be in the same region as the hosted zone", "B. The S3 bucket name must be the same as the domain name", "C. A registered domain name", "D. The record set must be of type \"MX\"" ], "correct": "B. The S3 bucket name must be the same as the domain name, C. A registered domain name", "explanation": "Explanation Here are the prerequisites for routing traffic to a website that is hosted in an Amazon S3 Bucket: - An S3 bucket that is configured to host a static website. The bucket must have the same name as your domain or subdomain. For example, if you want to use the subdomain portal.tutorialsdojo.com, the name of the bucket must be portal.tutorialsdojo.com. - A registered domain name. You can use Route 53 as your domain registrar, or you can use a different registrar. - Route 53 as the DNS service for the domain. If you register your domain name by using Route 53, we automatically configure Route 53 as the DNS service for the domain. The option that says: The record set must be of type \"MX\" is incorrect since an MX record specifies the mail server responsible for accepting email messages on behalf of a domain name. This is not what is being asked by the question. The option that says: The S3 bucket must be in the same region as the hosted zone is incorrect. There is no constraint that the S3 bucket must be in the same region as the hosted zone in order for the Route 53 service to route traffic into it. The option that says: The Cross-Origin Resource Sharing (CORS) option should be enabled in the S3 bucket is incorrect because you only need to enable Cross-Origin Resource Sharing (CORS) when your client web application on one domain interacts with the resources in a different domain.", "references": "https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/RoutingToS3Bucket.html Amazon Route 53 Overview: https://www.youtube.com/watch?v=Su308t19ubY Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/" }, { "question": ": A company plans to conduct a network security audit. The web application is hosted on an Auto Scaling group of EC2 instances with an Application Load Balancer in front to evenly distribute the incoming traffic. A Solutions Architect has been tasked to enhance the security posture of the company's cloud infrastructure and minimize the impact of DDoS attacks on its resources. Which of the following is the most effective solution that should be implemented?", "options": [ "A. Configure Amazon CloudFront distribution and set a Network Load Balancer as the origin.", "B. Configure Amazon CloudFront distribution and set a Network Load Balancer as the origin.", "C. Configure Amazon CloudFront distribution and set an Application Load Balancer as the", "D. Configure Amazon CloudFront distribution and set Application Load Balancer as the" ], "correct": "D. Configure Amazon CloudFront distribution and set Application Load Balancer as the", "explanation": "Explanation AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define.
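For reference, a minimal sketch of the rate-based web ACL that the correct option describes, using the AWS SDK for Python; the ACL name, request limit, and load balancer ARN are hypothetical placeholders.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Web ACL with a single rate-based rule: block any client IP that exceeds
# 2,000 requests in a 5-minute window.
acl = wafv2.create_web_acl(
    Name="rate-limit-acl",
    Scope="REGIONAL",                      # REGIONAL scope is used for ALBs
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "throttle-abusers",
        "Priority": 0,
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "throttle-abusers",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "rate-limit-acl",
    },
)["Summary"]

# Attach the web ACL to the Application Load Balancer (hypothetical ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/demo/abc123",
)
```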
You can deploy AWS WAF on Amazon CloudFront as part of your CDN solution, the Application Load Balancer that fronts your web servers or origin servers running on EC2, or Amazon API Gateway for your APIs. To detect and mitigate DDoS attacks, you can use AWS WAF in addition to AWS Shield. AWS WAF is a web application firewall that helps detect and mitigate web application layer DDoS attacks by inspecting traffic inline. Application layer DDoS attacks use well-formed but malicious requests to evade mitigation and consume application resources. You can define custom security rules that contain a set of conditions, rules, and actions to block attacking traffic. After you define web ACLs, you can apply them to CloudFront distributions, and web ACLs are evaluated in the priority order you specified when you configured them. By using AWS WAF, you can configure web access control lists (Web ACLs) on your CloudFront distributions or Application Load Balancers to filter and block requests based on request signatures. Each Web ACL consists of rules that you can configure to string match or regex match one or more request attributes, such as the URI, query-string, HTTP method, or header key. In addition, by using AWS WAF's rate-based rules, you can automatically block the IP addresses of bad actors when requests matching a rule exceed a threshold that you define. Requests from offending client IP addresses will receive 403 Forbidden error responses and will remain blocked until request rates drop below the threshold. This is useful for mitigating HTTP flood attacks that are disguised as regular web traffic. It is recommended that you add web ACLs with rate-based rules as part of your AWS Shield Advanced protection. These rules can alert you to sudden spikes in traffic that might indicate a potential DDoS event. A rate-based rule counts the requests that arrive from any individual address in any five-minute period. If the number of requests exceeds the limit that you define, the rule can trigger an action such as sending you a notification. Hence, the correct answer is: Configure Amazon CloudFront distribution and set Application Load Balancer as the origin. Create a rate-based web ACL rule using AWS WAF and associate it with Amazon CloudFront. The option that says: Configure Amazon CloudFront distribution and set a Network Load Balancer as the origin. Use VPC Flow Logs to monitor abnormal traffic patterns. Set up a custom AWS Lambda function that processes the flow logs and invokes Amazon SNS for notification is incorrect because this option only allows you to monitor the traffic that is reaching your instance. You can't use VPC Flow Logs to mitigate DDoS attacks. The option that says: Configure Amazon CloudFront distribution and set an Application Load Balancer as the origin. Create a security group rule and deny all the suspicious addresses. Use Amazon SNS for notification is incorrect. To deny suspicious addresses, you must manually insert the IP addresses of these hosts. This is a manual task which is not a sustainable solution. Take note that attackers generate large volumes of packets or requests to overwhelm the target system. Using a security group in this scenario won't help you mitigate DDoS attacks. The option that says: Configure Amazon CloudFront distribution and set a Network Load Balancer as the origin. Use Amazon GuardDuty to block suspicious hosts based on its security findings.
Set up a custom AWS Lambda function that processes the security logs and invokes Amazon SNS for notification is incorrect because Amazon GuardDuty is just a threat detection service. You should use AWS WAF and create your own AWS WAF rate-based rules for mitigating HTTP flood attacks that are disguised as regular web traffic. References: https://docs.aws.amazon.com/waf/latest/developerguide/ddos-overview.html https://docs.aws.amazon.com/waf/latest/developerguide/ddos-get-started-rate-based-rules.html https://d0.awsstatic.com/whitepapers/Security/DDoS_White_Paper.pdf Check out this AWS WAF Cheat Sheet: https://tutorialsdojo.com/aws-waf/ AWS Security Services Overview - WAF, Shield, CloudHSM, KMS: https://www.youtube.com/watch?v=-1S-RdeAmMo", "references": "" }, { "question": ": A company runs a messaging application in the ap-northeast-1 and ap-southeast-2 region. A Solutions Architect needs to create a routing policy wherein a larger portion of traffic from the Philippines and North India will be routed to the resource in the ap-northeast-1 region. Which Route 53 routing policy should the Solutions Architect use?", "options": [ "A. Weighted Routing", "B. Geoproximity Routing", "C. Latency Routing", "D. Geolocation Routing" ], "correct": "B. Geoproximity Routing", "explanation": "Explanation Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. You can use Route 53 to perform three main functions in any combination: domain registration, DNS routing, and health checking. After you create a hosted zone for your domain, such as example.com, you create records to tell the Domain Name System (DNS) how you want traffic to be routed for that domain. For example, you might create records that cause DNS to do the following: Route Internet traffic for example.com to the IP address of a host in your data center. Route email for that domain (jose.rizal@tutorialsdojo.com) to a mail server (mail.tutorialsdojo.com). Route traffic for a subdomain called operations.manila.tutorialsdojo.com to the IP address of a different host. Each record includes the name of a domain or a subdomain, a record type (for example, a record with a type of MX routes email), and other information applicable to the record type (for MX records, the hostname of one or more mail servers and a priority for each server). Route 53 has different routing policies that you can choose from. Below are some of the policies: Latency Routing lets Amazon Route 53 serve user requests from the AWS Region that provides the lowest latency. It does not, however, guarantee that users in the same geographic region will be served from the same location. Geoproximity Routing lets Amazon Route 53 route traffic to your resources based on the geographic location of your users and your resources. You can also optionally choose to route more traffic or less to a given resource by specifying a value, known as a bias. A bias expands or shrinks the size of the geographic region from which traffic is routed to a resource. Geolocation Routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from. Weighted Routing lets you associate multiple resources with a single domain name (tutorialsdojo.com) or subdomain name (subdomain.tutorialsdojo.com) and choose how much traffic is routed to each resource.
In this scenario, the problem requires a routing policy that will let Route 53 route traffic to the resource in the Tokyo region from a larger portion of the Philippines and North India. You need to use Geoproximity Routing and specify a bias to control the size of the geographic region from which traffic is routed to your resource. The sample image above uses a bias of -40 in the Tokyo region and a bias of 1 in the Sydney Region. Setting up the bias configuration in this manner would cause Route 53 to route traffic coming from the middle and northern part of the Philippines, as well as the northern part of India, to the resource in the Tokyo Region. Hence, the correct answer is: Geoproximity Routing. Geolocation Routing is incorrect because you cannot control the coverage size from which traffic is routed to your instance in Geolocation Routing. It just lets you choose the instances that will serve traffic based on the location of your users. Latency Routing is incorrect because it is mainly used for improving performance by letting Route 53 serve user requests from the AWS Region that provides the lowest latency. Weighted Routing is incorrect because it is used for routing traffic to multiple resources in proportions that you specify. This can be useful for load balancing and testing new versions of software. References: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-geoproximity https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/rrsets-working-with.html Latency Routing vs Geoproximity Routing vs Geolocation Routing: https://tutorialsdojo.com/latency-routing-vs-geoproximity-routing-vs-geolocation-routing/", "references": "" }, { "question": ": All objects uploaded to an Amazon S3 bucket must be encrypted for security compliance. The bucket will use server-side encryption with Amazon S3-Managed encryption keys (SSE-S3) to encrypt data using the 256-bit Advanced Encryption Standard (AES-256) block cipher. Which of the following request headers must be used?", "options": [ "A. x-amz-server-side-encryption-customer-key", "B. x-amz-server-side-encryption", "C. x-amz-server-side-encryption-customer-algorithm", "D. x-amz-server-side-encryption-customer-key-MD5", "A. AWS Snowball Edge", "B. AWS Snowmobile", "C. AWS Direct Connect", "D. Amazon S3 Multipart Upload" ], "correct": "A. AWS Snowball Edge", "explanation": "Explanation AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities. Snowball Edge can undertake local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud. Each Snowball Edge device can transport data at speeds faster than the internet. This transport is done by shipping the data in the appliances through a regional carrier. The appliances are rugged shipping containers, complete with E Ink shipping labels. The AWS Snowball Edge device differs from the standard Snowball because it can bring the power of the AWS Cloud to your on-premises location, with local storage and compute functionality. Snowball Edge devices have three options for device configurations: storage optimized, compute optimized, and with GPU. Hence, the correct answer is: AWS Snowball Edge. AWS Snowmobile is incorrect because this is an Exabyte-scale data transfer service used to move extremely large amounts of data to AWS. It is not suitable for transferring a small amount of data, like 80 TB in this scenario.
You can transfer up to 100PB per Snowmobile, a 45-foot long ruggedized shipping container, pulled by a semi-trailer truck. A more cost-effective solution here is to order a Snowball Edge device instead. AWS Direct Connect is incorrect because it is primarily used to establish a dedicated network connection from your premises network to AWS. This is not suitable for one-time data transfer tasks, like what is depicted in the scenario. Amazon S3 Multipart Upload is incorrect because this feature simply enables you to upload large objects in multiple parts. It still uses the same Internet connection of the company, which means that the transfer will still take time due to its current bandwidth allocation. References: https://docs.aws.amazon.com/snowball/latest/ug/whatissnowball.html https://docs.aws.amazon.com/snowball/latest/ug/device-differences.html Check out this AWS Snowball Edge Cheat Sheet: https://tutorialsdojo.com/aws-snowball-edge/ AWS Snow Family Overview: https://youtu.be/9Ar-51Ip53Q", "references": "" }, { "question": ": One member of your DevOps team consulted you about a connectivity problem in one of your Amazon EC2 instances. The application architecture is initially set up with four EC2 instances, each with an EIP address that all belong to a public non-default subnet. You launched another instance to handle the increasing workload of your application. The EC2 instances also belong to the same security group. Everything works well as expected except for one of the EC2 instances which is not able to send nor receive traffic over the Internet. Which of the following is the MOST likely reason for this issue?", "options": [ "A. The EC2 instance is running in an Availability Zone that is not connected to an Internet", "B. The EC2 instance does not have a public IP address associated with it.", "C. The EC2 instance does not have a private IP address associated with it.", "D. The route table is not properly configured to allow traffic to and from the Internet through" ], "correct": "B. The EC2 instance does not have a public IP address associated with it.", "explanation": "Explanation IP addresses enable resources in your VPC to communicate with each other, and with resources over the Internet. Amazon EC2 and Amazon VPC support the IPv4 and IPv6 addressing protocols. By default, Amazon EC2 and Amazon VPC use the IPv4 addressing protocol. When you create a VPC, you must assign it an IPv4 CIDR block (a range of private IPv4 addresses). Private IPv4 addresses are not reachable over the Internet. To connect to your instance over the Internet, or to enable communication between your instances and other AWS services that have public endpoints, you can assign a globally-unique public IPv4 address to your instance. You can optionally associate an IPv6 CIDR block with your VPC and subnets, and assign IPv6 addresses from that block to the resources in your VPC. IPv6 addresses are public and reachable over the Internet. All subnets have a modifiable attribute that determines whether a network interface created in that subnet is assigned a public IPv4 address and, if applicable, an IPv6 address. This includes the primary network interface (eth0) that's created for an instance when you launch an instance in that subnet. Regardless of the subnet attribute, you can still override this setting for a specific instance during launch. By default, nondefault subnets have the IPv4 public addressing attribute set to false, and default subnets have this attribute set to true.
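For illustration, a minimal sketch of the two common fixes using the AWS SDK for Python; the subnet and instance IDs are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Fix 1: make the non-default subnet auto-assign public IPv4 addresses to
# instances launched in it from now on.
ec2.modify_subnet_attribute(
    SubnetId="subnet-0123456789abcdef0",     # hypothetical public subnet
    MapPublicIpOnLaunch={"Value": True},
)

# Fix 2: give the already-running instance an Elastic IP address.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",        # hypothetical affected instance
    AllocationId=allocation["AllocationId"],
)
```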
An exception is a nondefault subnet created by the Amazon EC2 launch instance wizard -- the wizard sets the attribute to true. You can modify this attribute using the Amazon VPC console. In this scenario, there are 5 EC2 instances that belong to the same security group and that should be able to connect to the Internet. The main route table is properly configured but there is a problem connecting to one instance. Since the other four instances are working fine, we can assume that the security group and the route table are correctly configured. One possible reason for this issue is that the problematic instance does not have a public or an Elastic IP address. Take note as well that the four original EC2 instances all belong to a public non-default subnet, which means that a new EC2 instance will not have a public IP address by default since the IPv4 public addressing attribute is initially set to false. Hence, the correct answer is the option that says: The EC2 instance does not have a public IP address associated with it. The option that says: The route table is not properly configured to allow traffic to and from the Internet through the Internet gateway is incorrect because the other four instances, which are associated with the same route table and security group, do not have any issues. The option that says: The EC2 instance is running in an Availability Zone that is not connected to an Internet gateway is incorrect because there is no relationship between the Availability Zone and the Internet Gateway (IGW) that may have caused the issue. References: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#vpc-ip-addressing-subnet Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A start-up company that offers an intuitive financial data analytics service has consulted you about their AWS architecture. They have a fleet of Amazon EC2 worker instances that process financial data and then output reports which are used by their clients. You must store the generated report files in durable storage. The number of files to be stored can grow over time as the start-up company is expanding rapidly overseas and hence, they also need a way to distribute the reports faster to clients located across the globe. Which of the following is a cost-efficient and scalable storage option that you should use for this scenario?", "options": [ "A. Use Amazon S3 as the data storage and CloudFront as the CDN.", "B. Use Amazon Redshift as the data storage and CloudFront as the CDN.", "C. Use Amazon Glacier as the data storage and ElastiCache as the CDN.", "D. Use multiple EC2 instance stores for data storage and ElastiCache as the CDN." ], "correct": "A. Use Amazon S3 as the data storage and CloudFront as the CDN.", "explanation": "Explanation A Content Delivery Network (CDN) is a critical component of nearly any modern web application. It used to be that a CDN merely improved the delivery of content by replicating commonly requested files (static content) across a globally distributed set of caching servers. However, CDNs have become much more useful over time. For caching, a CDN will reduce the load on an application origin and improve the experience of the requestor by delivering a local copy of the content from a nearby cache edge, or Point of Presence (PoP).
The application origin is off the hook for opening the connection and delivering the content directly as the CDN takes care of the heavy lifting. The end result is that the application origins don't need to scale to meet demands for static content. Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS, both the physical locations that are directly connected to the AWS global infrastructure and other AWS services. Amazon S3 offers a highly durable, scalable, and secure destination for backing up and archiving your critical data. This is the correct option as the start-up company is looking for durable storage to store the generated report files. In addition, ElastiCache is only used for caching and not specifically as a global Content Delivery Network (CDN). Using Amazon Redshift as the data storage and CloudFront as the CDN is incorrect as Amazon Redshift is usually used as a Data Warehouse. Using Amazon S3 Glacier as the data storage and ElastiCache as the CDN is incorrect as Amazon S3 Glacier is usually used for data archives. Using multiple EC2 instance stores for data storage and ElastiCache as the CDN is incorrect as data stored in an instance store is not durable. References: https://aws.amazon.com/s3/ https://aws.amazon.com/caching/cdn/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate/", "references": "" }, { "question": ": A company launched a website that accepts high-quality photos and turns them into a downloadable video montage. The website offers a free and a premium account that guarantees faster processing. All requests by both free and premium members go through a single SQS queue and are then processed by a group of EC2 instances that generate the videos. The company needs to ensure that the premium users who paid for the service have higher priority than the free members. How should the company re-design its architecture to address this requirement?", "options": [ "A. Use Amazon S3 to store and process the photos and then generate the video montage", "B. Create an SQS queue for free members and another one for premium members. Configure", "C. For the requests made by premium members, set a higher priority in the SQS queue so it", "D. Use Amazon Kinesis to process the photos and generate the video montage in real-time." ], "correct": "B. Create an SQS queue for free members and another one for premium members. Configure", "explanation": "Explanation Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. In this scenario, it is best to create two separate SQS queues, one for each member type; the polling approach is sketched below.
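A minimal sketch of the worker's polling loop, using the AWS SDK for Python; the queue URLs are hypothetical placeholders.

```python
import boto3

sqs = boto3.client("sqs")

PREMIUM_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/premium-jobs"  # hypothetical
FREE_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/free-jobs"        # hypothetical

def next_job():
    """Drain the premium queue first; fall back to the free queue when it is empty."""
    for queue_url in (PREMIUM_QUEUE, FREE_QUEUE):
        response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=2)
        messages = response.get("Messages", [])
        if messages:
            return queue_url, messages[0]
    return None, None

queue_url, message = next_job()
if message:
    # ... generate the video montage, then remove the message from its queue
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```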
The SQS queue for the premium members can be polled first by the EC2 instances and, once it is empty, the messages from the free members can be processed next. Hence, the correct answer is: Create an SQS queue for free members and another one for premium members. Configure your EC2 instances to consume messages from the premium queue first and if it is empty, poll from the free members' SQS queue. The option that says: For the requests made by premium members, set a higher priority in the SQS queue so it will be processed first compared to the requests made by free members is incorrect as you cannot set a priority for individual items in an SQS queue. The option that says: Using Amazon Kinesis to process the photos and generate the video montage in real time is incorrect as Amazon Kinesis is used to process streaming data and it is not applicable in this scenario. The option that says: Using Amazon S3 to store and process the photos and then generating the video montage afterwards is incorrect as Amazon S3 is used for durable storage and not for processing data.", "references": "https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-best-practices.html Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/" }, { "question": ": A company has developed public APIs hosted in Amazon EC2 instances behind an Elastic Load Balancer. The APIs will be used by various clients from their respective on-premises data centers. A Solutions Architect received a report that the web service clients can only access trusted IP addresses whitelisted on their firewalls. What should you do to accomplish the above requirement?", "options": [ "A. Associate an Elastic IP address to an Application Load Balancer.", "B. Associate an Elastic IP address to a Network Load Balancer.", "C. Create an Alias Record in Route 53 which maps to the DNS name of the load balancer.", "D. Create a CloudFront distribution whose origin points to the private IP addresses of your" ], "correct": "B. Associate an Elastic IP address to a Network Load Balancer.", "explanation": "Explanation A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the default rule's target group. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. Based on the given scenario, web service clients can only access trusted IP addresses. To resolve this requirement, you can use the Bring Your Own IP (BYOIP) feature to use the trusted IPs as Elastic IP addresses (EIP) on a Network Load Balancer (NLB). This way, there's no need to re-establish the whitelists with new IP addresses. Hence, the correct answer is: Associate an Elastic IP address to a Network Load Balancer. The option that says: Associate an Elastic IP address to an Application Load Balancer is incorrect because you can't assign an Elastic IP address to an Application Load Balancer. The alternative method you can do is assign an Elastic IP address to a Network Load Balancer in front of the Application Load Balancer. The option that says: Create a CloudFront distribution whose origin points to the private IP addresses of your web servers is incorrect because web service clients can only access trusted IP addresses.
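For reference, a minimal sketch of attaching a static address to a Network Load Balancer at creation time, using the AWS SDK for Python; the load balancer name and subnet ID are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# Allocate (or bring your own) static address, then map it to the subnet
# when creating the Network Load Balancer.
eip = ec2.allocate_address(Domain="vpc")

elbv2.create_load_balancer(
    Name="public-api-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[{
        "SubnetId": "subnet-0123456789abcdef0",   # hypothetical public subnet
        "AllocationId": eip["AllocationId"],
    }],
)
```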
The fastest way to resolve this requirement is to attach an Elastic IP address to a Network Load Balancer. The option that says: Create an Alias Record in Route 53 which maps to the DNS name of the load balancer is incorrect. This approach still won't allow them to access the application because their firewalls only permit the trusted IP addresses that were whitelisted. References: https://aws.amazon.com/premiumsupport/knowledge-center/elb-attach-elastic-ip-to-public-nlb/ https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/ https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html Check out this AWS Elastic Load Balancing Cheat Sheet: https://tutorialsdojo.com/aws-elastic-load-balancing-elb/", "references": "" }, { "question": ": An accounting application uses an RDS database configured with Multi-AZ deployments to improve availability. What would happen to RDS if the primary database instance fails?", "options": [ "A. A new database instance is created in the standby Availability Zone.", "B. The canonical name record (CNAME) is switched from the primary to standby instance.", "C. The IP address of the primary DB instance is switched to the standby DB instance.", "D. The primary database instance will reboot." ], "correct": "B. The canonical name record (CNAME) is switched from the primary to standby instance.", "explanation": "Explanation In Amazon RDS, failover is automatically handled so that you can resume database operations as quickly as possible without administrative intervention in the event that your primary database instance goes down. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary. The option that says: The IP address of the primary DB instance is switched to the standby DB instance is incorrect since IP addresses are per subnet, and subnets cannot span multiple AZs. The option that says: The primary database instance will reboot is incorrect since, in the event of a failure, there is no healthy primary database to reboot; RDS fails over to the standby instead. The option that says: A new database instance is created in the standby Availability Zone is incorrect since with Multi-AZ enabled, you already have a standby database in another AZ. References: https://aws.amazon.com/rds/details/multi-az/ https://aws.amazon.com/rds/faqs/ Amazon RDS Overview: https://www.youtube.com/watch?v=aZmpLl8K1UU Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/", "references": "" }, { "question": ": A solutions architect is designing a cost-efficient, highly available storage solution for company data. One of the requirements is to ensure that the previous state of a file is preserved and retrievable if a modified version of it is uploaded. Also, to meet regulatory compliance, data over 3 years must be retained in an archive and will only be accessible once a year. How should the solutions architect build the solution?", "options": [ "A. Create an S3 Standard bucket with object-level versioning enabled and configure a lifecycle", "B. Create an S3 Standard bucket and enable S3 Object Lock in governance mode.", "C. Create an S3 Standard bucket with S3 Object Lock in compliance mode enabled then", "D. Create a One-Zone-IA bucket with object-level versioning enabled and configure a lifecycle" ], "correct": "A.
Create an S3 Standard bucket with object-level versioning enabled and configure a lifecycle", "explanation": "Explanation Versioning in Amazon S3 is a means of keeping multiple variants of an object in the same bucket. You can use the S3 Versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets. With versioning, you can recover more easily from both unintended user actions and application failures. After versioning is enabled for a bucket, if Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of those objects. Hence, the correct answer is: Create an S3 Standard bucket with object-level versioning enabled and configure a lifecycle rule that transfers files to Amazon S3 Glacier Deep Archive after 3 years. The S3 Object Lock feature allows you to store objects using a write-once-read-many (WORM) model. It is meant to prevent object versions from being deleted or overwritten for a fixed amount of time or indefinitely, which is not what the scenario calls for. In the scenario, changes to objects are allowed, but their previous versions should be preserved and remain retrievable, which is what versioning provides. Therefore, the following options are incorrect: - Create an S3 Standard bucket and enable S3 Object Lock in governance mode. - Create an S3 Standard bucket with S3 Object Lock in compliance mode enabled then configure a lifecycle rule that transfers files to Amazon S3 Glacier Deep Archive after 3 years. The option that says: Create a One-Zone-IA bucket with object-level versioning enabled and configure a lifecycle rule that transfers files to Amazon S3 Glacier Deep Archive after 3 years is incorrect. One-Zone-IA is not highly available as it only relies on one Availability Zone for storing data. References: https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html https://aws.amazon.com/blogs/aws/new-amazon-s3-storage-class-glacier-deep-archive/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": For data privacy, a healthcare company has been asked to comply with the Health Insurance Portability and Accountability Act (HIPAA). The company stores all its backups on an Amazon S3 bucket. It is required that data stored on the S3 bucket must be encrypted. What is the best option to do this? (Select TWO.)", "options": [ "A. Before sending the data to Amazon S3 over HTTPS, encrypt the data locally first using your", "B. Enable Server-Side Encryption on an S3 bucket to make use of AES-128 encryption.", "C. Store the data in encrypted EBS snapshots.", "D. Store the data on EBS volumes with encryption enabled instead of using Amazon S3." ], "correct": "", "explanation": "Explanation Server-side encryption is about data encryption at rest--that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects. For example, if you share your objects using a pre-signed URL, that URL works the same way for both encrypted and unencrypted objects.
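For reference, a minimal sketch of an SSE-S3 upload using the AWS SDK for Python; the bucket name, key, and file path are hypothetical placeholders. Under the hood, boto3 sends this setting as the x-amz-server-side-encryption: AES256 request header.

```python
import boto3

s3 = boto3.client("s3")

# Server-side encryption with Amazon S3-managed keys (SSE-S3, AES-256).
with open("backup.tar.gz", "rb") as data:           # hypothetical local file
    s3.put_object(
        Bucket="healthcare-backups-tdojo",           # hypothetical bucket name
        Key="backups/2023-01-01.tar.gz",
        Body=data,
        ServerSideEncryption="AES256",
    )
```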
You have three mutually exclusive options depending on how you choose to manage the encryption keys: Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) Use Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS) Use Server-Side Encryption with Customer-Provided Keys (SSE-C) The options that say: Before sending the data to Amazon S3 over HTTPS, encrypt the data locally first using your own encryption keys and Enable Server-Side Encryption on an S3 bucket to make use of AES-256 encryption are correct because these options use client-side encryption and Amazon S3-Managed Keys (SSE-S3) respectively. Client-side encryption is the act of encrypting data before sending it to Amazon S3, while SSE-S3 uses AES-256 encryption. Storing the data on EBS volumes with encryption enabled instead of using Amazon S3 and storing the data in encrypted EBS snapshots are incorrect because both options use EBS encryption and not S3. Enabling Server-Side Encryption on an S3 bucket to make use of AES-128 encryption is incorrect as S3 doesn't provide AES-128 encryption, only AES-256. References: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate/", "references": "" }, { "question": ": A company is planning to launch an application which requires a data warehouse that will be used for their infrequently accessed data. You need to use an EBS volume that can handle large, sequential I/O operations. Which of the following is the most cost-effective storage type that you should use to meet the requirement?", "options": [ "A. Cold HDD (sc1)", "B. Throughput Optimized HDD (st1)", "C. Provisioned IOPS SSD (io1)", "D. EBS General Purpose SSD (gp2)" ], "correct": "A. Cold HDD (sc1)", "explanation": "Explanation Cold HDD volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. With a lower throughput limit than Throughput Optimized HDD, this is a good fit for large, sequential cold-data workloads. If you require infrequent access to your data and are looking to save costs, Cold HDD provides inexpensive block storage. Take note that bootable Cold HDD volumes are not supported. Cold HDD provides the lowest cost HDD volume and is designed for less frequently accessed workloads. Hence, Cold HDD (sc1) is the correct answer. In the exam, always consider the difference between SSD and HDD. This will allow you to easily eliminate specific EBS types in the options which are not SSD or not HDD, depending on whether the question asks for a storage type which has small, random I/O operations or large, sequential I/O operations.
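For reference, a minimal sketch of provisioning a Cold HDD volume with the AWS SDK for Python; the Availability Zone and size are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Cold HDD (sc1) volume for infrequently accessed, large sequential workloads.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # hypothetical AZ
    Size=500,                        # size in GiB
    VolumeType="sc1",
)
print(volume["VolumeId"])
```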
EBS General Purpose SSD (gp2) is incorrect because a General Purpose SSD volume costs more and it is mainly used for a wide variety of workloads. It is recommended to be used as system boot volumes, virtual desktops, low-latency interactive apps, and many more. Provisioned IOPS SSD (io1) is incorrect because this costs more than Cold HDD and is thus not cost-effective for this scenario. It provides the highest performance SSD volume for mission-critical low-latency or high-throughput workloads, which is not needed in the scenario. Throughput Optimized HDD (st1) is incorrect because this is primarily used for frequently accessed, throughput-intensive workloads. In this scenario, Cold HDD perfectly fits the requirement as it is used for their infrequently accessed data and provides the lowest cost, unlike Throughput Optimized HDD. References: https://aws.amazon.com/ebs/details/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", "references": "" }, { "question": ": A company is receiving semi-structured and structured data from different sources every day. The Solutions Architect plans to use big data processing frameworks to analyze vast amounts of data and access it using various business intelligence tools and standard SQL queries. Which of the following provides the MOST high-performing solution that fulfills this requirement?", "options": [ "A. Use Amazon Kinesis Data Analytics and store the processed data in Amazon DynamoDB.", "B. Use AWS Glue and store the processed data in Amazon S3.", "C. Create an Amazon EC2 instance and store the processed data in Amazon EBS.", "D. Create an Amazon EMR cluster and store the processed data in Amazon Redshift." ], "correct": "D. Create an Amazon EMR cluster and store the processed data in Amazon Redshift.", "explanation": "Explanation Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases. Amazon Redshift is the most widely used cloud data warehouse. It makes it fast, simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured and semi-structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution.
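For illustration only, a minimal sketch of launching an EMR cluster with the AWS SDK for Python; the release label, instance types, and default IAM role names are assumptions, and the transformed output would subsequently be loaded into Amazon Redshift for SQL and BI access.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Small EMR cluster with Spark and Hive for the ETL step; the processed
# data would then be loaded into Amazon Redshift.
emr.run_job_flow(
    Name="analytics-etl",
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",   # assumed default roles
    ServiceRole="EMR_DefaultRole",
)
```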
The key phrases in the scenario are \"big data processing frameworks\" and \"various business intelligence tools and standard SQL queries\" to analyze the data. To leverage big data processing frameworks, you need to use Amazon EMR. The cluster will perform data transformations (ETL) and load the processed data into Amazon Redshift for analytics and business intelligence applications.
Hence, the correct answer is: Create an Amazon EMR cluster and store the processed data in Amazon Redshift.
The option that says: Use AWS Glue and store the processed data in Amazon S3 is incorrect because AWS Glue is just a serverless ETL service that crawls your data, builds a data catalog, performs data preparation, data transformation, and data ingestion. It won't allow you to utilize different big data frameworks effectively, unlike Amazon EMR. In addition, the S3 Select feature in Amazon S3 can only run simple SQL queries against a subset of data from a specific S3 object. To perform queries in the S3 bucket, you need to use Amazon Athena.
The option that says: Use Amazon Kinesis Data Analytics and store the processed data in Amazon DynamoDB is incorrect because Amazon DynamoDB doesn't fully support the use of standard SQL and Business Intelligence (BI) tools, unlike Amazon Redshift. It also doesn't allow you to run complex analytic queries against terabytes to petabytes of structured and semi-structured data.
The option that says: Create an Amazon EC2 instance and store the processed data in Amazon EBS is incorrect because a single EBS-backed EC2 instance is quite limited in its computing capability. Moreover, it also entails administrative overhead since you have to manually install and maintain the big data frameworks on the EC2 instance yourself. The most suitable solution to leverage big data frameworks is to use EMR clusters.
References:
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-what-is-emr.html
https://docs.aws.amazon.com/redshift/latest/dg/loading-data-from-emr.html
Check out this Amazon EMR Cheat Sheet: https://tutorialsdojo.com/amazon-emr/", "references": "" }, { "question": ": A company has a dynamic web app written in MEAN stack that is going to be launched in the next month. There is a probability that the traffic will be quite high in the first couple of weeks. In the event of a load failure, how can you set up DNS failover to a static website?", "options": [ "A. Add more servers in case the application fails.", "B. Duplicate the exact application architecture in another region and configure DNS weight-", "C. Enable failover to an application hosted in an on-premises data center.", "D. Use Route 53 with the failover option to a static S3 website bucket or CloudFront" ], "correct": "D. Use Route 53 with the failover option to a static S3 website bucket or CloudFront", "explanation": "Explanation
For this scenario, using Route 53 with the failover option to a static S3 website bucket or CloudFront distribution is correct. You can create a Route 53 record with the failover routing policy that points to a static S3 website bucket or a CloudFront distribution as the backup endpoint.
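A minimal boto3 sketch of the failover (secondary) record is shown below; the hosted zone ID, record name, and alias target are placeholders and would come from your own setup, and a matching PRIMARY record with a health check would point at the main application:

import boto3

route53 = boto3.client('route53')

# Secondary failover record pointing to the static S3 website (or CloudFront) backup.
route53.change_resource_record_sets(
    HostedZoneId='Z_EXAMPLE_ZONE_ID',              # placeholder hosted zone ID
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'www.example.com',
                'Type': 'A',
                'SetIdentifier': 'secondary',
                'Failover': 'SECONDARY',
                'AliasTarget': {
                    'HostedZoneId': 'Z_TARGET_ZONE_ID',   # placeholder: zone ID of the S3 website/CloudFront endpoint
                    'DNSName': 'example-bucket.s3-website-us-east-1.amazonaws.com',
                    'EvaluateTargetHealth': False
                }
            }
        }]
    }
)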
Duplicating the exact application architecture in another region and configuring DNS weight-based routing is incorrect because running a duplicate system is not a cost-effective solution. Remember that you are trying to build a failover mechanism for your web app, not a distributed setup.
Enabling failover to an application hosted in an on-premises data center is incorrect. Although you can set up failover to your on-premises data center, you are not maximizing the AWS environment, such as using Route 53 failover.
Adding more servers in case the application fails is incorrect because this is not the best way to handle a failover event. If you add more servers only when the application fails, then there would be a period of downtime in which your application is unavailable. Since there are no running servers during that period, your application will be unavailable until your new server is up and running.", "references": "https://aws.amazon.com/premiumsupport/knowledge-center/fail-over-s3-r53/ http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/" }, { "question": ": A company is running a custom application in an Auto Scaling group of Amazon EC2 instances. Several instances are failing due to insufficient swap space. The Solutions Architect has been instructed to troubleshoot the issue and effectively monitor the available swap space of each EC2 instance. Which of the following options fulfills this requirement?", "options": [ "A. Install the CloudWatch agent on each instance and monitor the SwapUtilization metric.", "B. Create a new trail in AWS CloudTrail and configure Amazon CloudWatch Logs to monitor", "C. Create a CloudWatch dashboard and monitor the SwapUsed metric.", "D. Enable detailed monitoring on each instance and monitor the SwapUtilization metric." ], "correct": "A. Install the CloudWatch agent on each instance and monitor the SwapUtilization metric.", "explanation": "Explanation
Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, and set alarms. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health.
The main requirement in the scenario is to monitor the SwapUtilization metric. Take note that you can't use the default metrics of CloudWatch to monitor the SwapUtilization metric. To monitor custom metrics, you must install the CloudWatch agent on the EC2 instance. After installing the CloudWatch agent, you can collect system metrics and log files of an EC2 instance.
Hence, the correct answer is: Install the CloudWatch agent on each instance and monitor the SwapUtilization metric.
The option that says: Enable detailed monitoring on each instance and monitor the SwapUtilization metric is incorrect because you can't monitor the SwapUtilization metric by just enabling the detailed monitoring option; you must install the CloudWatch agent on the instance.
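Once the agent is publishing swap metrics, an alarm can be created on that custom metric. The following is only a hedged sketch: the 'CWAgent' namespace and 'swap_used_percent' metric name assume the agent's default swap configuration, and the instance ID and threshold are placeholders; adjust them to whatever your agent configuration actually emits:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm on the swap metric published by the CloudWatch agent (names assumed, see above).
cloudwatch.put_metric_alarm(
    AlarmName='high-swap-utilization',
    Namespace='CWAgent',
    MetricName='swap_used_percent',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],  # placeholder instance ID
    Statistic='Average',
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator='GreaterThanThreshold'
)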
The option that says: Create a CloudWatch dashboard and monitor the SwapUsed metric is incorrect because you must install the CloudWatch agent first to add the custom metric to the dashboard.
The option that says: Create a new trail in AWS CloudTrail and configure Amazon CloudWatch Logs to monitor your trail logs is incorrect because CloudTrail won't help you monitor custom metrics. CloudTrail is specifically used for monitoring API activities in an AWS account.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html
https://aws.amazon.com/cloudwatch/faqs/
Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/
Amazon CloudWatch Overview: https://www.youtube.com/watch?v=q0DmxfyGkeU", "references": "" }, { "question": ": A start-up company has an EC2 instance that is hosting a web application. The volume of users is expected to grow in the coming months and hence, you need to add more elasticity and scalability in your AWS architecture to cope with the demand. Which of the following options can satisfy the above requirement for the given scenario? (Select TWO.)", "options": [ "A. Set up an AWS WAF behind your EC2 Instance.", "B. Set up an S3 Cache in front of the EC2 instance.", "C. Set up two EC2 instances deployed using Launch Templates and integrated with AWS Glue.", "D. Set up two EC2 instances and use Route 53 to route traffic based on a Weighted Routing" ], "correct": "", "explanation": "Explanation
Using an Elastic Load Balancer is an ideal solution for adding elasticity to your application. Alternatively, you can also create a policy in Route 53, such as a Weighted routing policy, to evenly distribute the traffic to two or more EC2 instances. Hence, setting up two EC2 instances and putting them behind an Elastic Load Balancer (ELB), and setting up two EC2 instances and using Route 53 to route traffic based on a Weighted Routing Policy, are the correct answers.
Setting up an S3 Cache in front of the EC2 instance is incorrect because doing so does not provide elasticity and scalability to your EC2 instances.
Setting up an AWS WAF behind your EC2 Instance is incorrect because AWS WAF is a web application firewall that helps protect your web applications from common web exploits. This service is more about providing security to your applications.
Setting up two EC2 instances deployed using Launch Templates and integrated with AWS Glue is incorrect because AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. It does not provide scalability or elasticity to your instances.
References:
https://aws.amazon.com/elasticloadbalancing
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html
Check out this AWS Elastic Load Balancing (ELB) Cheat Sheet: https://tutorialsdojo.com/aws-elastic-load-balancing-elb/
Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/", "references": "" }, { "question": ": A company plans to migrate its suite of containerized applications running on-premises to a container service in AWS. The solution must be cloud-agnostic and use an open-source platform that can automatically manage containerized workloads and services. It should also use the same configuration and tools across various production environments.
What should the Solution Architect do to properly migrate and satisfy the given requirement?", "options": [ "A. Migrate the application to Amazon Container Registry (ECR) with Amazon EC2 instance", "B. Migrate the application to Amazon Elastic Kubernetes Service with EKS worker nodes.", "C. Migrate the application to Amazon Elastic Container Service with ECS tasks that use the", "D. Migrate the application to Amazon Elastic Container Service with ECS tasks that use the" ], "correct": "B. Migrate the application to Amazon Elastic Kubernetes Service with EKS worker nodes.", "explanation": "Explanation
Amazon EKS provisions and scales the Kubernetes control plane, including the API servers and backend persistence layer, across multiple AWS Availability Zones for high availability and fault tolerance. Amazon EKS automatically detects and replaces unhealthy control plane nodes and provides patching for the control plane. Amazon EKS is integrated with many AWS services to provide scalability and security for your applications. These services include Elastic Load Balancing for load distribution, IAM for authentication, Amazon VPC for isolation, and AWS CloudTrail for logging.
To migrate the application to a container service, you can use Amazon ECS or Amazon EKS. But the key points in this scenario are \"cloud-agnostic\" and \"open-source platform\". Take note that Amazon ECS is an AWS proprietary container service. This means that it is not an open-source platform. Amazon EKS is a portable, extensible, and open-source platform for managing containerized workloads and services. Kubernetes is considered cloud-agnostic because it allows you to move your containers to other cloud service providers.
Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use all of the existing plugins and tools from the Kubernetes community. Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether running in on-premises data centers or public clouds. This means that you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification required.
Hence, the correct answer is: Migrate the application to Amazon Elastic Kubernetes Service with EKS worker nodes.
The option that says: Migrate the application to Amazon Container Registry (ECR) with Amazon EC2 instance worker nodes is incorrect because Amazon ECR is just a fully-managed Docker container registry. Also, this option is not an open-source platform that can manage containerized workloads and services.
The option that says: Migrate the application to Amazon Elastic Container Service with ECS tasks that use the AWS Fargate launch type is incorrect because it is stated in the scenario that you have to migrate the application suite to an open-source platform. AWS Fargate is just a serverless compute engine for containers. It is not cloud-agnostic since you cannot use the same configuration and tools if you moved it to another cloud service provider such as Microsoft Azure or Google Cloud Platform (GCP).
The option that says: Migrate the application to Amazon Elastic Container Service with ECS tasks that use the Amazon EC2 launch type is incorrect because Amazon ECS is an AWS proprietary managed container orchestration service. You should use Amazon EKS since Kubernetes is an open-source platform and is considered cloud-agnostic. With Kubernetes, you can use the same configuration and tools that you're currently using in AWS even if you move your containers to another cloud service provider.
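As a rough, hedged sketch of what the first migration step could look like with boto3 (the cluster name, IAM role ARN, and subnet IDs are placeholders; in practice tools such as eksctl or infrastructure-as-code are more common), the control plane is created first and worker nodes are added afterwards:

import boto3

eks = boto3.client('eks')

# Create the managed Kubernetes control plane; EKS worker nodes / managed
# node groups are attached in a later step.
eks.create_cluster(
    name='containerized-apps-cluster',                         # placeholder name
    roleArn='arn:aws:iam::123456789012:role/eksClusterRole',   # placeholder IAM role ARN
    resourcesVpcConfig={
        'subnetIds': ['subnet-aaaa1111', 'subnet-bbbb2222']    # placeholder subnet IDs
    }
)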
References:
https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html
https://aws.amazon.com/eks/faqs/
Check out our library of AWS Cheat Sheets: https://tutorialsdojo.com/links-to-all-aws-cheat-sheets/", "references": "" }, { "question": ": A company recently adopted a hybrid architecture that integrates its on-premises data center to AWS cloud. You are assigned to configure the VPC and implement the required IAM users, IAM roles, IAM groups, and IAM policies. In this scenario, what is the best practice when creating IAM policies?", "options": [ "A. Determine what users need to do and then craft policies for them that let the users perform", "B. Grant all permissions to any EC2 user.", "C. Use the principle of least privilege which means granting only the permissions required to", "D. Use the principle of least privilege which means granting only the least number of people" ], "correct": "C. Use the principle of least privilege which means granting only the permissions required to", "explanation": "Explanation
One of the best practices in AWS IAM is to grant least privilege. When you create IAM policies, follow the standard security advice of granting least privilege--that is, granting only the permissions required to perform a task. Determine what users need to do and then craft policies for them that let the users perform only those tasks.
Therefore, using the principle of least privilege, which means granting only the permissions required to perform a task, is the correct answer.
Start with a minimum set of permissions and grant additional permissions as necessary. Defining the right set of permissions requires some understanding of the user's objectives. Determine what is required for the specific task, what actions a particular service supports, and what permissions are required in order to perform those actions.
Granting all permissions to any EC2 user is incorrect since you don't want your users to gain access to everything and perform unnecessary actions. Doing so is not a good security practice.
Using the principle of least privilege which means granting only the least number of people with full root access is incorrect because this is not the correct definition of the principle of least privilege.
Determining what users need to do and then crafting policies for them that let the users perform those tasks including additional administrative operations is incorrect since there are some users to whom you should not give administrative access. You should follow the principle of least privilege when providing permissions and access to your resources.", "references": "https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#use-groups-for-permissions Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-management-iam/ Service Control Policies (SCP) vs IAM Policies: https://tutorialsdojo.com/service-control-policies-scp-vs-iam-policies/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/ Exam C" }, { "question": ": A company hosted a web application on a Linux Amazon EC2 instance in the public subnet that uses a default network ACL.
The instance uses a default security group and has an attached Elastic IP address. The network ACL has been configured to block all traffic to the instance. The Solutions Architect must allow incoming traffic on port 443 to access the application from any source. Which combination of steps will accomplish this requirement? (Select TWO.)", "options": [ "A. In the Network ACL, update the rule to allow inbound TCP connection on port 443 from", "B. In the Security Group, add a new rule to allow TCP connection on port 443 from source", "C. In the Security Group, create a new rule to allow TCP connection on port 443 to destination", "D. In the Network ACL, update the rule to allow outbound TCP connection on port 32768 -", "A. It enables you to establish a private and dedicated network connection between your", "B. It provides a cost-effective, hybrid connection from your VPC to your on-premises data", "C. It allows you to connect your AWS cloud resources to your on-premises data center using", "D. It provides a networking connection between two VPCs which enables you to route traffic" ], "correct": "C. It allows you to connect your AWS cloud resources to your on-premises data center using", "explanation": "Explanation
Amazon VPC offers you the flexibility to fully manage both sides of your Amazon VPC connectivity by creating a VPN connection between your remote network and a software VPN appliance running in your Amazon VPC network. This option is recommended if you must manage both ends of the VPN connection either for compliance purposes or for leveraging gateway devices that are not currently supported by Amazon VPC's VPN solution.
You can connect your Amazon VPC to remote networks and users using the following VPN connectivity options:
AWS Site-to-Site VPN - creates an IPsec VPN connection between your VPC and your remote network. On the AWS side of the Site-to-Site VPN connection, a virtual private gateway or transit gateway provides two VPN endpoints (tunnels) for automatic failover.
AWS Client VPN - a managed client-based VPN service that provides secure TLS VPN connections between your AWS resources and on-premises networks.
AWS VPN CloudHub - capable of wiring multiple AWS Site-to-Site VPN connections together on a virtual private gateway. This is useful if you want to enable communication between different remote networks that use a Site-to-Site VPN connection.
Third-party software VPN appliance - You can create a VPN connection to your remote network by using an Amazon EC2 instance in your VPC that's running a third-party software VPN appliance.
With a VPN connection, you can connect to an Amazon VPC in the cloud the same way you connect to your branches while establishing secure and private sessions with IP Security (IPsec) or Transport Layer Security (TLS) tunnels.
Hence, the correct answer is the option that says: It allows you to connect your AWS cloud resources to your on-premises data center using secure and private sessions with IP Security (IPSec) or Transport Layer Security (TLS) tunnels, since one of the main advantages of having a VPN connection is that you will be able to connect your Amazon VPC to other remote networks securely.
The option that says: It provides a cost-effective, hybrid connection from your VPC to your on-premises data centers which bypasses the public Internet is incorrect.
Although it is true that a VPN provides a cost-effective, hybrid connection from your VPC to your on-premises data centers, it certainly does not bypass the public Internet. A VPN connection actually goes through the public Internet, unlike an AWS Direct Connect connection, which has a direct and dedicated connection to your on-premises network.
The option that says: It provides a networking connection between two VPCs which enables you to route traffic between them using private IPv4 addresses or IPv6 addresses is incorrect because this actually describes VPC Peering and not a VPN connection.
The option that says: It enables you to establish a private and dedicated network connection between your network and your VPC is incorrect because this is the advantage of an AWS Direct Connect connection and not a VPN.
References:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpn-connections.html
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/software-vpn-network-to-amazon.html
Amazon VPC Overview: https://www.youtube.com/watch?v=oIDHKeNxvQQ
Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/
Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate/", "references": "" }, { "question": ": A company has an e-commerce application that saves the transaction logs to an S3 bucket. You are instructed by the CTO to configure the application to keep the transaction logs for one month for troubleshooting purposes, and then afterward, purge the logs. What should you do to accomplish this requirement?", "options": [ "A. Configure the lifecycle configuration rules on the Amazon S3 bucket to purge the", "B. Add a new bucket policy on the Amazon S3 bucket.", "C. Create a new IAM policy for the Amazon S3 bucket that automatically deletes the logs after", "D. Enable CORS on the Amazon S3 bucket which will enable the automatic monthly deletion of data" ], "correct": "A. Configure the lifecycle configuration rules on the Amazon S3 bucket to purge the", "explanation": "Explanation
In this scenario, the best way to accomplish the requirement is to simply configure the lifecycle configuration rules on the Amazon S3 bucket to purge the transaction logs after a month.
Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:
Transition actions - in which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
Expiration actions - in which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
Hence, the correct answer is: Configure the lifecycle configuration rules on the Amazon S3 bucket to purge the transaction logs after a month.
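To make the accepted approach concrete, here is a minimal boto3 sketch that expires the transaction logs 30 days after creation; the bucket name and prefix are placeholders, not values from the scenario:

import boto3

s3 = boto3.client('s3')

# Lifecycle rule: expire (delete) objects under the given prefix 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket='example-transaction-logs-bucket',          # placeholder bucket name
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'purge-transaction-logs-after-one-month',
            'Filter': {'Prefix': 'transaction-logs/'},  # placeholder prefix
            'Status': 'Enabled',
            'Expiration': {'Days': 30}
        }]
    }
)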
The option that says: Add a new bucket policy on the Amazon S3 bucket is incorrect as it does not provide a solution to any of your needs in this scenario. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it.
The option that says: Create a new IAM policy for the Amazon S3 bucket that automatically deletes the logs after a month is incorrect because IAM policies are primarily used to specify what actions are allowed or denied on your S3 buckets. You cannot configure an IAM policy to automatically purge logs for you in any way.
The option that says: Enable CORS on the Amazon S3 bucket which will enable the automatic monthly deletion of data is incorrect. CORS allows client web applications that are loaded in one domain to interact with resources in a different domain.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
https://docs.amazonaws.cn/en_us/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/
Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate/", "references": "" }, { "question": ": A Solutions Architect is working for a large insurance firm. To maintain compliance with HIPAA laws, all data that is backed up or stored on Amazon S3 needs to be encrypted at rest. In this scenario, what is the best method of encryption for the data, assuming S3 is being used for storing financial-related data? (Select TWO.)", "options": [ "A. Store the data in encrypted EBS snapshots", "B. Encrypt the data using your own encryption keys then copy the data to Amazon S3 over", "C. Enable SSE on an S3 bucket to make use of AES-256 encryption", "D. Store the data on EBS volumes with encryption enabled instead of using Amazon S3" ], "correct": "", "explanation": "Explanation
Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption. You have the following options for protecting data at rest in Amazon S3:
Use Server-Side Encryption - you request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects.
Use Client-Side Encryption - you can encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
Hence, the following options are the correct answers:
- Enable SSE on an S3 bucket to make use of AES-256 encryption
- Encrypt the data using your own encryption keys then copy the data to Amazon S3 over HTTPS endpoints. This refers to using Server-Side Encryption with Customer-Provided Keys (SSE-C).
Storing the data in encrypted EBS snapshots and storing the data on EBS volumes with encryption enabled instead of using Amazon S3 are both incorrect because these options are for protecting your data in your EBS volumes. Note that an S3 bucket does not use EBS volumes to store your data.
Using AWS Shield to protect your data at rest is incorrect because AWS Shield is mainly used to protect your entire VPC against DDoS attacks.
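As a brief, hedged illustration of the two accepted approaches (bucket names, keys, and the customer key below are placeholders): an SSE-S3 upload simply requests AES-256 server-side encryption, while SSE-C supplies your own 256-bit key with each request.

import boto3

s3 = boto3.client('s3')

# SSE-S3: Amazon S3 encrypts the object at rest with AES-256 using S3-managed keys.
s3.put_object(
    Bucket='example-hipaa-bucket',        # placeholder bucket
    Key='statements/statement-01.pdf',    # placeholder key
    Body=b'...',
    ServerSideEncryption='AES256'
)

# SSE-C: you provide (and manage) the encryption key yourself on every request.
customer_key = b'0' * 32                  # placeholder 32-byte key
s3.put_object(
    Bucket='example-hipaa-bucket',
    Key='statements/statement-02.pdf',
    Body=b'...',
    SSECustomerAlgorithm='AES256',
    SSECustomerKey=customer_key
)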
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A Solutions Architect working for a startup is designing a High Performance Computing (HPC) application which is publicly accessible for their customers. The startup founders want to mitigate distributed denial-of-service (DDoS) attacks on their application. Which of the following options are not suitable to be implemented in this scenario? (Select TWO.)", "options": [ "A. Use Dedicated EC2 instances to ensure that each instance has the maximum performance possible.", "B. Add multiple Elastic Fabric Adapters (EFA) to each EC2 instance to increase the network", "C. Use an Application Load Balancer with Auto Scaling groups for your EC2 instances.", "D. Use AWS Shield and AWS WAF." ], "correct": "", "explanation": "Explanation
Take note that the question asks about the mitigation techniques that are NOT suitable to prevent a Distributed Denial of Service (DDoS) attack.
A Denial of Service (DoS) attack is an attack that can make your website or application unavailable to end users. To achieve this, attackers use a variety of techniques that consume network or other resources, disrupting access for legitimate end users.
To protect your system from a DDoS attack, you can do the following:
- Use an Amazon CloudFront service for distributing both static and dynamic content.
- Use an Application Load Balancer with Auto Scaling groups for your EC2 instances, then restrict direct Internet traffic to your Amazon RDS database by deploying it to a private subnet.
- Set up alerts in Amazon CloudWatch to look for high Network In and CPU utilization metrics.
Services that are available within AWS Regions, like Elastic Load Balancing and Amazon Elastic Compute Cloud (EC2), allow you to build Distributed Denial of Service resiliency and scale to handle unexpected volumes of traffic within a given region. Services that are available in AWS edge locations, like Amazon CloudFront, AWS WAF, Amazon Route 53, and Amazon API Gateway, allow you to take advantage of a global network of edge locations that can provide your application with greater fault tolerance and increased scale for managing larger volumes of traffic.
In addition, you can also use AWS Shield and AWS WAF to fortify your cloud network. AWS Shield is a managed DDoS protection service that is available in two tiers: Standard and Advanced. AWS Shield Standard applies always-on detection and inline mitigation techniques, such as deterministic packet filtering and priority-based traffic shaping, to minimize application downtime and latency.
AWS WAF is a web application firewall that helps protect web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. You can use AWS WAF to define customizable web security rules that control which traffic accesses your web applications. If you use AWS Shield Advanced, you can use AWS WAF at no extra cost for those protected resources and can engage the DRT to create WAF rules.
Using Dedicated EC2 instances to ensure that each instance has the maximum performance possible is not a viable mitigation technique because Dedicated EC2 instances are just an instance billing option. Although it may ensure that each instance gives the maximum performance, that by itself is not enough to mitigate a DDoS attack.
Adding multiple Elastic Fabric Adapters (EFA) to each EC2 instance to increase the network bandwidth is also not a viable option as this is mainly done for performance improvement, and not for DDoS attack mitigation. Moreover, you can attach only one EFA per EC2 instance. An Elastic Fabric Adapter (EFA) is a network device that you can attach to your Amazon EC2 instance to accelerate High-Performance Computing (HPC) and machine learning applications.
The following options are valid mitigation techniques that can be used to prevent DDoS:
- Use an Amazon CloudFront service for distributing both static and dynamic content.
- Use an Application Load Balancer with Auto Scaling groups for your EC2 instances. Prevent direct Internet traffic to your Amazon RDS database by deploying it to a new private subnet.
- Use AWS Shield and AWS WAF.
References:
https://aws.amazon.com/answers/networking/aws-ddos-attack-mitigation/
https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf
Best practices on DDoS Attack Mitigation: https://youtu.be/HnoZS5jj7pk/", "references": "" }, { "question": ": An application needs to retrieve a subset of data from a large CSV file stored in an Amazon S3 bucket by using simple SQL expressions. The queries are made within Amazon S3 and must only return the needed data. Which of the following actions should be taken?", "options": [ "A. Perform an S3 Select operation based on the bucket's name and object's metadata.", "B. Perform an S3 Select operation based on the bucket's name and object tags.", "C. Perform an S3 Select operation based on the bucket's name.", "D. Perform an S3 Select operation based on the bucket's name and object's key." ], "correct": "D. Perform an S3 Select operation based on the bucket's name and object's key.", "explanation": "Explanation
S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. By using S3 Select to retrieve only the data needed by your application, you can achieve drastic performance increases.
Amazon S3 is composed of buckets, object keys, object metadata, object tags, and many other components, as shown below:
An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts.
An Amazon S3 object key refers to the key name, which uniquely identifies the object in the bucket.
An Amazon S3 object metadata is a name-value pair that provides information about the object.
An Amazon S3 object tag is a key-value pair used for object tagging to categorize storage.
You can perform S3 Select to query only the necessary data inside the CSV files based on the bucket's name and the object's key. The following snippet shows how it is done using boto3 (AWS SDK for Python):
import boto3

client = boto3.client('s3')
resp = client.select_object_content(
    Bucket='tdojo-bucket',                           # Bucket name.
    Key='s3-select/tutorialsdojofile.csv',           # Object key.
    ExpressionType='SQL',
    Expression="select \"Sample\" from s3object s where s.\"tutorialsdojofile\" in ('A', 'B')",
    InputSerialization={'CSV': {'FileHeaderInfo': 'Use'}},   # describes the CSV input format
    OutputSerialization={'CSV': {}}                          # format of the returned records
)

Hence, the correct answer is the option that says: Perform an S3 Select operation based on the bucket's name and object's key.
The option that says: Perform an S3 Select operation based on the bucket's name and object's metadata is incorrect because metadata is not needed when querying subsets of data in an object using S3 Select.
The option that says: Perform an S3 Select operation based on the bucket's name and object tags is incorrect because object tags just provide additional information to your object. This is not needed when querying with S3 Select, although it can be useful for S3 Batch Operations. You can categorize objects based on tag values to provide S3 Batch Operations with a list of objects to operate on.
The option that says: Perform an S3 Select operation based on the bucket's name is incorrect because you need both the bucket's name and the object key to successfully perform an S3 Select operation.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-glacier-select-sql-reference-select.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html
Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A startup has resources deployed on the AWS Cloud. It is now going through a set of scheduled audits by an external auditing firm for compliance. Which of the following services available in AWS can be utilized to help ensure the right information is present for auditing purposes?", "options": [ "A. Amazon CloudWatch", "B. Amazon EC2", "C. AWS CloudTrail", "D. Amazon VPC" ], "correct": "C. AWS CloudTrail", "explanation": "Explanation
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.
CloudTrail provides visibility into user activity by recording actions taken on your account. CloudTrail records important information about each action, including who made the request, the services used, the actions performed, parameters for the actions, and the response elements returned by the AWS service. This information helps you to track changes made to your AWS resources and troubleshoot operational issues. CloudTrail makes it easier to ensure compliance with internal policies and regulatory standards.
Hence, the correct answer is: AWS CloudTrail.
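For reference, and purely as a hedged sketch with placeholder names, a trail that delivers account activity to S3 can be created and the recent event history queried with boto3:

import boto3

cloudtrail = boto3.client('cloudtrail')

# Create a multi-Region trail that records account activity into an S3 bucket, then start logging.
cloudtrail.create_trail(
    Name='audit-trail',                            # placeholder trail name
    S3BucketName='example-audit-logs-bucket',      # placeholder bucket (needs a CloudTrail bucket policy)
    IsMultiRegionTrail=True
)
cloudtrail.start_logging(Name='audit-trail')

# The recent event history can also be queried directly, e.g. console sign-ins.
events = cloudtrail.lookup_events(
    LookupAttributes=[{'AttributeKey': 'EventName', 'AttributeValue': 'ConsoleLogin'}],
    MaxResults=10
)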
Amazon VPC is incorrect because a VPC is a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. It does not provide the auditing information that was asked for in this scenario.
Amazon EC2 is incorrect because EC2 is a service that provides secure, resizable compute capacity in the cloud and does not provide the needed information in this scenario, just like the option above.
Amazon CloudWatch is incorrect because this is a monitoring tool for your AWS resources. Like the above options, it does not provide the needed information to satisfy the requirement in the scenario.", "references": "https://aws.amazon.com/cloudtrail/ Check out this AWS CloudTrail Cheat Sheet: https://tutorialsdojo.com/aws-cloudtrail/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate/" }, { "question": ": A Solutions Architect is designing a highly available environment for an application. She plans to host the application on EC2 instances within an Auto Scaling Group. One of the conditions requires data stored on root EBS volumes to be preserved if an instance terminates. What should be done to satisfy the requirement?", "options": [ "A. Enable the Termination Protection option for all EC2 instances.", "B. Set the value of DeleteOnTermination attribute of the EBS volumes to False.", "C. Configure ASG to suspend the health check process for each EC2 instance.", "D. Use AWS DataSync to replicate root volume data to Amazon S3." ], "correct": "B. Set the value of DeleteOnTermination attribute of the EBS volumes to False.", "explanation": "Explanation
By default, Amazon EBS root device volumes are automatically deleted when the instance terminates. However, by default, any additional EBS volumes that you attach at launch, or any EBS volumes that you attach to an existing instance, persist even after the instance terminates. This behavior is controlled by the volume's DeleteOnTermination attribute, which you can modify.
To preserve the root volume when an instance terminates, change the DeleteOnTermination attribute for the root volume to False. This EBS attribute can be changed through the AWS Console upon launching the instance or through a CLI/API command.
Hence, the correct answer is the option that says: Set the value of DeleteOnTermination attribute of the EBS volumes to False.
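A minimal, hedged sketch of both ways to set the flag with boto3 follows; the AMI ID, instance ID, and device name are placeholders, and the root device name depends on the AMI:

import boto3

ec2 = boto3.client('ec2')

# Option 1: keep the root volume at launch time.
ec2.run_instances(
    ImageId='ami-0123456789abcdef0',      # placeholder AMI
    InstanceType='t3.micro',
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[{
        'DeviceName': '/dev/xvda',         # root device name of the AMI (placeholder)
        'Ebs': {'DeleteOnTermination': False}
    }]
)

# Option 2: flip the attribute on an instance that is already running.
ec2.modify_instance_attribute(
    InstanceId='i-0123456789abcdef0',      # placeholder instance ID
    BlockDeviceMappings=[{
        'DeviceName': '/dev/xvda',
        'Ebs': {'DeleteOnTermination': False}
    }]
)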
The option that says: Use AWS DataSync to replicate root volume data to Amazon S3 is incorrect because AWS DataSync does not work with Amazon EBS volumes. DataSync can copy data between Network File System (NFS) shares, Server Message Block (SMB) shares, self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, and Amazon FSx for Windows File Server file systems.
The option that says: Configure ASG to suspend the health check process for each EC2 instance is incorrect because suspending the health check process will prevent the ASG from replacing unhealthy EC2 instances. This can cause availability issues to the application.
The option that says: Enable the Termination Protection option for all EC2 instances is incorrect. Termination Protection will just prevent your instance from being accidentally terminated using the Amazon EC2 console.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/deleteontermination-ebs/
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html
Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", "references": "" }, { "question": ": A large telecommunications company needs to run analytics against all combined log files from the Application Load Balancer as part of the regulatory requirements. Which AWS services can be used together to collect logs and then easily perform log analysis?", "options": [ "A. Amazon S3 for storing the ELB log files and an EC2 instance for analyzing the log files", "B. Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files.", "C. Amazon DynamoDB for storing and EC2 for analyzing the logs.", "D. Amazon EC2 with EBS volumes for storing and analyzing the log files." ], "correct": "B. Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files.", "explanation": "Explanation
In this scenario, it is best to use a combination of Amazon S3 and Amazon EMR: Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files. Access logging in the ELB is stored in Amazon S3, which means that the following are valid options:
- Amazon S3 for storing the ELB log files and an EC2 instance for analyzing the log files using a custom-built application.
- Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files.
However, log analysis can be automatically provided by Amazon EMR, which is more economical than building a custom log analysis application and hosting it in EC2. Hence, the option that says: Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files is the best answer between the two.
Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files. You can disable access logging at any time.
Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. It securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in Amazon EMR, and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB.
The option that says: Amazon DynamoDB for storing and EC2 for analyzing the logs is incorrect because DynamoDB is a NoSQL database solution of AWS. It would be inefficient to store logs in DynamoDB while using EC2 to analyze them.
The option that says: Amazon EC2 with EBS volumes for storing and analyzing the log files is incorrect because using EC2 with EBS would be costly, and EBS might not provide the most durable storage for your logs, unlike S3.
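Enabling access logs on an Application Load Balancer is a single attribute change. The following hedged sketch uses a placeholder load balancer ARN and bucket name, and the bucket would also need a policy that allows ELB log delivery:

import boto3

elbv2 = boto3.client('elbv2')

# Turn on access logging for the ALB and point it at the S3 bucket.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/example/abc123',
    Attributes=[
        {'Key': 'access_logs.s3.enabled', 'Value': 'true'},
        {'Key': 'access_logs.s3.bucket', 'Value': 'example-elb-logs-bucket'},  # placeholder bucket
        {'Key': 'access_logs.s3.prefix', 'Value': 'alb-logs'}
    ]
)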
The option that says: Amazon S3 for storing the ELB log files and an EC2 instance for analyzing the log files using a custom-built application is incorrect because using EC2 to analyze logs would be inefficient and expensive since you will have to program the analyzer yourself.
References:
https://aws.amazon.com/emr/
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html
Check out this Amazon EMR Cheat Sheet: https://tutorialsdojo.com/amazon-emr/
Check out this AWS Elastic Load Balancing (ELB) Cheat Sheet: https://tutorialsdojo.com/aws-elastic-load-balancing-elb/", "references": "" }, { "question": ": A company deployed a high-performance computing (HPC) cluster that spans multiple EC2 instances across multiple Availability Zones and processes various wind simulation models. Currently, the Solutions Architect is experiencing a slowdown in their applications and upon further investigation, it was discovered that it was due to latency issues. Which is the MOST suitable solution that the Solutions Architect should implement to provide the low-latency network performance necessary for tightly-coupled node-to-node communication of the HPC cluster?", "options": [ "A. Set up AWS Direct Connect connections across multiple Availability Zones for increased", "B. Set up a spread placement group across multiple Availability Zones in multiple AWS", "C. Set up a cluster placement group within a single Availability Zone in the same AWS Region.", "D. Use EC2 Dedicated Instances." ], "correct": "C. Set up a cluster placement group within a single Availability Zone in the same AWS Region.", "explanation": "Explanation
When you launch a new EC2 instance, the EC2 service attempts to place the instance in such a way that all of your instances are spread out across underlying hardware to minimize correlated failures. You can use placement groups to influence the placement of a group of interdependent instances to meet the needs of your workload. Depending on the type of workload, you can create a placement group using one of the following placement strategies:
Cluster - packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications.
Partition - spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions. This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka.
Spread - strictly places a small group of instances across distinct underlying hardware to reduce correlated failures.
Cluster placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. They are also recommended when the majority of the network traffic is between the instances in the group. To provide the lowest latency and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking.
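A hedged sketch of this setup with boto3 is shown below; the AMI, instance type, and instance count are placeholders, and the instance type should support enhanced networking:

import boto3

ec2 = boto3.client('ec2')

# Create a cluster placement group and launch the tightly-coupled HPC nodes into it.
ec2.create_placement_group(GroupName='hpc-cluster-pg', Strategy='cluster')

ec2.run_instances(
    ImageId='ami-0123456789abcdef0',      # placeholder AMI
    InstanceType='c5n.18xlarge',          # example instance type with enhanced networking
    MinCount=4,
    MaxCount=4,
    Placement={'GroupName': 'hpc-cluster-pg'}
)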
Partition placement groups can be used to deploy large distributed and replicated workloads, such as HDFS, HBase, and Cassandra, across distinct racks. When you launch instances into a partition placement group, Amazon EC2 tries to distribute the instances evenly across the number of partitions that you specify. You can also launch instances into a specific partition to have more control over where the instances are placed.
Spread placement groups are recommended for applications that have a small number of critical instances that should be kept separate from each other. Launching instances in a spread placement group reduces the risk of simultaneous failures that might occur when instances share the same racks. Spread placement groups provide access to distinct racks, and are therefore suitable for mixing instance types or launching instances over time. A spread placement group can span multiple Availability Zones in the same Region. You can have a maximum of seven running instances per Availability Zone per group.
Hence, the correct answer is: Set up a cluster placement group within a single Availability Zone in the same AWS Region.
The option that says: Set up a spread placement group across multiple Availability Zones in multiple AWS Regions is incorrect because although using a placement group is valid for this particular scenario, you can only set up a placement group within a single AWS Region. A spread placement group can span multiple Availability Zones in the same Region.
The option that says: Set up AWS Direct Connect connections across multiple Availability Zones for increased bandwidth throughput and more consistent network experience is incorrect because this is primarily used for hybrid architectures. It bypasses the public Internet and establishes a secure, dedicated connection from your on-premises data center into AWS, and is not used for achieving low latency within your AWS network.
The option that says: Use EC2 Dedicated Instances is incorrect because these are EC2 instances that run in a VPC on hardware that is dedicated to a single customer and are physically isolated at the host hardware level from instances that belong to other AWS accounts. It is not used for reducing latency.
References:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
https://aws.amazon.com/hpc/
Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/", "references": "" }, { "question": ": An investment bank is working with an IT team to handle the launch of the new digital wallet system. The applications will run on multiple EBS-backed EC2 instances which will store the logs, transactions, and billing statements of the user in an S3 bucket. Due to tight security and compliance requirements, the IT team is exploring options on how to safely store sensitive data on the EBS volumes and S3. Which of the below options should be carried out when storing sensitive data on AWS? (Select TWO.)", "options": [ "A. Create an EBS Snapshot", "B. Enable Amazon S3 Server-Side or use Client-Side Encryption", "C. Enable EBS Encryption",
"D. Migrate the EC2 instances from the public to private subnet." ], "correct": "", "explanation": "Explanation
Enabling EBS Encryption and enabling Amazon S3 Server-Side or Client-Side Encryption are correct. Amazon EBS encryption offers a simple encryption solution for your EBS volumes without the need to build, maintain, and secure your own key management infrastructure.
In Amazon S3, data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption. You have the following options to protect data at rest in Amazon S3:
Use Server-Side Encryption - you request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects.
Use Client-Side Encryption - you can encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
Creating an EBS Snapshot is incorrect because this is a backup solution for EBS. It does not provide security for the data inside EBS volumes when executed.
Migrating the EC2 instances from the public to private subnet is incorrect because the data you want to secure are those in EBS volumes and S3 buckets. Moving your EC2 instance to a private subnet involves a different security practice, which does not achieve what you want in this scenario.
Using AWS Shield and WAF is incorrect because these protect you from common security threats to your web applications. However, what you are trying to achieve is securing and encrypting your data inside EBS and S3.
References:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", "references": "" }, { "question": ": A Solutions Architect is working for a large IT consulting firm. One of the clients is launching a file sharing web application in AWS which requires a durable storage service for hosting their static contents such as PDFs, Word Documents, high-resolution images, and many others. Which type of storage service should the Architect use to meet this requirement?", "options": [ "A. Amazon RDS instance", "B. Amazon EBS volume", "C. Amazon EC2 instance store", "D. Amazon S3" ], "correct": "D. Amazon S3", "explanation": "Explanation
Amazon S3 is storage for the Internet. It's a simple storage service that offers software developers a durable, highly-scalable, reliable, and low-latency data storage infrastructure at very low costs. Amazon S3 provides customers with a highly durable storage infrastructure. Versioning offers an additional level of protection by providing a means of recovery when customers accidentally overwrite or delete objects.
Remember that the scenario requires durable storage for static content. These two keywords are actually referring to S3, since it is highly durable and suitable for storing static content. Hence, Amazon S3 is the correct answer.
Amazon EBS volume is incorrect because this is not as durable compared with S3. In addition, it is best to store the static contents in S3 rather than EBS.
Amazon EC2 instance store is incorrect because it is definitely not suitable - the data it holds will be wiped out immediately once the EC2 instance is restarted.
Amazon RDS instance is incorrect because an RDS instance is just a database and not suitable for storing static content. By default, RDS is not durable, unless you launch it in a Multi-AZ deployment configuration.", "references": "https://aws.amazon.com/s3/faqs/ https://d1.awsstatic.com/whitepapers/Storage/AWS%20Storage%20Services%20Whitepaper-v9.pdf#page=24 Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/" }, { "question": ": An on-premises server is using an SMB network file share to store application data. The application produces around 50 MB of data per day but it only needs to access some of it for daily processes. To save on storage costs, the company plans to copy all the application data to AWS, however, they want to retain the ability to retrieve data with the same low-latency access as the local file share. The company does not have the capacity to develop the needed tool for this operation. Which AWS service should the company use?", "options": [ "A. AWS Storage Gateway", "B. Amazon FSx for Windows File Server", "C. AWS Virtual Private Network (VPN)", "D. AWS Snowball Edge" ], "correct": "A. AWS Storage Gateway", "explanation": "Explanation
AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. These include moving backups to the cloud, using on-premises file shares backed by cloud storage, and providing low-latency access to data in AWS for on-premises applications.
Specifically for this scenario, you can use Amazon FSx File Gateway to support the SMB file share for the on-premises application. It also meets the requirement for low-latency access. Amazon FSx File Gateway helps accelerate your file-based storage migration to the cloud to enable faster performance, improved data protection, and reduced cost.
Hence, the correct answer is: AWS Storage Gateway.
AWS Virtual Private Network (VPN) is incorrect because this service is mainly used for establishing encrypted connections from an on-premises network to AWS.
Amazon FSx for Windows File Server is incorrect. This won't provide low-latency access since all the files are stored on AWS, which means that they will be accessed via the internet. AWS Storage Gateway supports local caching without any development overhead, making it suitable for low-latency applications.
AWS Snowball Edge is incorrect. A Snowball Edge is a type of Snowball device with on-board storage and compute power that can do local processing in addition to transferring data between your local environment and the AWS Cloud. It's just a data migration tool and not a storage service.
References: https://aws.amazon.com/storagegateway/ https://docs.aws.amazon.com/storagegateway/latest/userguide/CreatingAnSMBFileShare.html AWS Storage Gateway Overview: https://www.youtube.com/watch?v=pNb7xOBJjHE Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/", "references": "" }, { "question": ": A company is setting up a cloud architecture for an international money transfer service to be deployed in AWS which will have thousands of users around the globe. The service should be available 24/7 to avoid any business disruption and should be resilient enough to handle the outage of an entire AWS region. To meet this requirement, the Solutions Architect has deployed their AWS resources to multiple AWS Regions. He needs to use Route 53 and configure it to set all of the resources to be available all the time as much as possible. When a resource becomes unavailable, Route 53 should detect that it's unhealthy and stop including it when responding to queries. Which of the following is the most fault-tolerant routing configuration that the Solutions Architect should use in this scenario?", "options": [ "A. Configure an Active-Active Failover with One Primary and One Secondary Resource.", "B. Configure an Active-Passive Failover with Multiple Primary and Secondary Resources.", "C. Configure an Active-Passive Failover with Weighted Records.", "D. Configure an Active-Active Failover with Weighted routing policy." ], "correct": "D. Configure an Active-Active Failover with Weighted routing policy.", "explanation": "Explanation You can use Route 53 health checking to configure active-active and active-passive failover configurations. You configure active-active failover using any routing policy (or combination of routing policies) other than failover, and you configure active-passive failover using the failover routing policy.

Active-Active Failover: Use this failover configuration when you want all of your resources to be available the majority of the time. When a resource becomes unavailable, Route 53 can detect that it's unhealthy and stop including it when responding to queries. In active-active failover, all the records that have the same name, the same type (such as A or AAAA), and the same routing policy (such as weighted or latency) are active unless Route 53 considers them unhealthy. Route 53 can respond to a DNS query using any healthy record.

Active-Passive Failover: Use an active-passive failover configuration when you want a primary resource or group of resources to be available the majority of the time and you want a secondary resource or group of resources to be on standby in case all the primary resources become unavailable. When responding to queries, Route 53 includes only the healthy primary resources. If all the primary resources are unhealthy, Route 53 begins to include only the healthy secondary resources in response to DNS queries.

Configuring an Active-Passive Failover with Weighted Records and configuring an Active-Passive Failover with Multiple Primary and Secondary Resources are incorrect because an Active-Passive
Failover is mainly used when you want a primary resource or group of resources to be available most of the time and you want a secondary resource or group of resources to be on standby in case all the primary resources become unavailable. In this scenario, all of your resources should be available all the time as much as possible, which is why you have to use an Active-Active Failover instead.

Configuring an Active-Active Failover with One Primary and One Secondary Resource is incorrect because you cannot set up an Active-Active Failover with one primary and one secondary resource. Remember that an Active-Active Failover uses all available resources all the time, without a primary or a secondary resource.

References: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-types.html https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring.html Amazon Route 53 Overview: https://www.youtube.com/watch?v=Su308t19ubY Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/", "references": "" }, { "question": ": A company has a global online trading platform in which users from all over the world regularly upload terabytes of transactional data to a centralized S3 bucket. What AWS feature should you use in your present system to improve throughput and ensure consistently fast data transfer to the Amazon S3 bucket, regardless of your user's location?", "options": [ "A. Use CloudFront Origin Access Identity", "B. Amazon S3 Transfer Acceleration", "C. FTP", "D. AWS Direct Connect" ], "correct": "B. Amazon S3 Transfer Acceleration", "explanation": "Explanation Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. Transfer Acceleration leverages Amazon CloudFront's globally distributed AWS Edge Locations. As data arrives at an AWS Edge Location, it is routed to your Amazon S3 bucket over an optimized network path.

FTP is incorrect because the File Transfer Protocol does not guarantee fast throughput and consistent, fast data transfer.

AWS Direct Connect is incorrect because you have users all around the world and not just in your on-premises data center. Direct Connect would be too costly and is definitely not suitable for this purpose.

Using CloudFront Origin Access Identity is incorrect because this is a feature which ensures that only CloudFront can serve S3 content. It does not increase throughput or ensure fast delivery of content to your customers.", "references": "http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ S3 Transfer Acceleration vs Direct Connect vs VPN vs Snowball vs Snowmobile: https://tutorialsdojo.com/s3-transfer-acceleration-vs-direct-connect-vs-vpn-vs-snowball-vs-snowmobile/ Comparison of AWS Services Cheat Sheets:
https://tutorialsdojo.com/comparison-of-aws-services/" }, { "question": ": In Amazon EC2, you can manage your instances from the moment you launch them up to their termination. You can flexibly control your computing costs by changing the EC2 instance state. Which of the following statements is true regarding EC2 billing? (Select TWO.)", "options": [ "A. You will be billed when your Reserved instance is in terminated state.", "B. You will be billed when your Spot instance is preparing to stop with a stopping state.", "C. You will not be billed for any instance usage while an instance is not in the running state.", "D. You will be billed when your On-Demand instance is in pending state." ], "correct": "", "explanation": "Explanation By working with Amazon EC2 to manage your instances from the moment you launch them through their termination, you ensure that your customers have the best possible experience with the applications or sites that you host on your instances. The following illustration represents the transitions between instance states. Notice that you can't stop and start an instance store-backed instance.

Below are the valid EC2 lifecycle instance states:

pending - The instance is preparing to enter the running state. An instance enters the pending state when it launches for the first time, or when it is restarted after being in the stopped state.
running - The instance is running and ready for use.
stopping - The instance is preparing to be stopped. Take note that you will not be billed if it is preparing to stop; however, you will still be billed if it is just preparing to hibernate.
stopped - The instance is shut down and cannot be used. The instance can be restarted at any time.
shutting-down - The instance is preparing to be terminated.
terminated - The instance has been permanently deleted and cannot be restarted. Take note that Reserved Instances that applied to terminated instances are still billed until the end of their term according to their payment option.

The option that says: You will be billed when your On-Demand instance is preparing to hibernate with a stopping state is correct because when the instance state is stopping, you will not be billed if it is preparing to stop; however, you will still be billed if it is just preparing to hibernate.

The option that says: You will be billed when your Reserved instance is in terminated state is correct because Reserved Instances that applied to terminated instances are still billed until the end of their term according to their payment option. I actually raised a pull request to the Amazon team about the billing conditions for Reserved Instances, which has been approved and reflected in the official AWS documentation: https://github.com/awsdocs/amazon-ec2-user-guide/pull/45

The option that says: You will be billed when your On-Demand instance is in pending state is incorrect because you will not be billed if your instance is in the pending state.

The option that says: You will be billed when your Spot instance is preparing to stop with a stopping state is incorrect because you will not be billed if your instance is preparing to stop with a stopping state.
The option that says: You will not be billed for any instance usage while an instance is not in the running state is incorrect because the statement is not entirely true. You can still be billed if your instance is preparing to hibernate with a stopping state.

References: https://github.com/awsdocs/amazon-ec2-user-guide/pull/45 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-lifecycle.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/", "references": "" }, { "question": ": A Solutions Architect for a global news company is configuring a fleet of EC2 instances in a subnet that is currently in a VPC with an Internet gateway attached. All of these EC2 instances can be accessed from the Internet. The architect launches another subnet and deploys an EC2 instance in it; however, the architect is not able to access the EC2 instance from the Internet. What could be the possible reasons for this issue? (Select TWO.)", "options": [ "A. The route table is not configured properly to send traffic from the EC2 instance to the", "B. The Amazon EC2 instance does not have a public IP address associated with it.", "C. The Amazon EC2 instance is not a member of the same Auto Scaling group.", "D. The Amazon EC2 instance does not have an attached Elastic Fabric Adapter (EFA)." ], "correct": "", "explanation": "Explanation Your VPC has an implicit router, and you use route tables to control where network traffic is directed. Each subnet in your VPC must be associated with a route table, which controls the routing for the subnet (subnet route table). You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table. A subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same subnet route table. You can optionally associate a route table with an internet gateway or a virtual private gateway (gateway route table). This enables you to specify routing rules for inbound traffic that enters your VPC through the gateway.

Be sure that the subnet route table also has a route entry to the internet gateway. If this entry doesn't exist, the instance is in a private subnet and is inaccessible from the internet.

In cases where your EC2 instance cannot be accessed from the Internet (or vice versa), you usually have to check two things:

- Does it have an EIP or public IP address?
- Is the route table properly configured?
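Both checks can be automated; below is a hedged boto3 sketch that inspects a placeholder instance for a public IP and looks for a 0.0.0.0/0 route to an internet gateway in its subnet's route table (the main-route-table fallback is noted but omitted for brevity).

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder instance ID

# Check 1: does the instance have a public (or Elastic) IP address?
inst = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"][0]["Instances"][0]
print("Public IP:", inst.get("PublicIpAddress", "none"))

# Check 2: does the subnet's route table send 0.0.0.0/0 to an internet gateway?
# (If the subnet has no explicit association, the VPC's main route table applies;
# that fallback lookup is omitted here.)
rts = ec2.describe_route_tables(
    Filters=[{"Name": "association.subnet-id", "Values": [inst["SubnetId"]]}]
)["RouteTables"]
routes = rts[0]["Routes"] if rts else []
has_igw_route = any(
    r.get("DestinationCidrBlock") == "0.0.0.0/0" and r.get("GatewayId", "").startswith("igw-")
    for r in routes
)
print("Default route to an internet gateway:", has_igw_route)
```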
Below are the correct answers:

- The Amazon EC2 instance does not have a public IP address associated with it.
- The route table is not configured properly to send traffic from the EC2 instance to the Internet through the Internet gateway.

The option that says: The Amazon EC2 instance is not a member of the same Auto Scaling group is incorrect since Auto Scaling groups do not affect the Internet connectivity of EC2 instances.

The option that says: The Amazon EC2 instance doesn't have an attached Elastic Fabric Adapter (EFA) is incorrect because an Elastic Fabric Adapter is just a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications. EFA enables you to achieve the application performance of an on-premises HPC cluster, with the scalability, flexibility, and elasticity provided by AWS. However, this component is not required in order for your EC2 instance to access the public Internet.

The option that says: The route table is not configured properly to send traffic from the EC2 instance to the Internet through the customer gateway (CGW) is incorrect since a CGW is used when you are setting up a VPN. The correct gateway should be an Internet gateway.

References: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A company has clients all across the globe that access product files stored in several S3 buckets, which are behind each of their own CloudFront web distributions. They currently want to deliver their content to a specific client, and they need to make sure that only that client can access the data. Currently, all of their clients can access their S3 buckets directly using an S3 URL or through their CloudFront distribution. The Solutions Architect must serve the private content via CloudFront only, to secure the distribution of files. Which combination of actions should the Architect implement to meet the above requirements? (Select TWO.)", "options": [ "A. Use S3 pre-signed URLs to ensure that only their client can access the files. Remove", "B. Use AWS App Mesh to ensure that only their client can access the files.", "C. Restrict access to files in the origin by creating an origin access identity (OAI) and give it", "D. Use AWS Cloud Map to ensure that only their client can access the files." ], "correct": "", "explanation": "Explanation Many companies that distribute content over the Internet want to restrict access to documents, business data, media streams, or content that is intended for selected users, for example, users who have paid a fee. To securely serve this private content by using CloudFront, you can do the following:

- Require that your users access your private content by using special CloudFront signed URLs or signed cookies.
- Require that your users access your Amazon S3 content by using CloudFront URLs, not Amazon S3 URLs. Requiring CloudFront URLs isn't necessary, but it is recommended to prevent users from bypassing the restrictions that you specify in signed URLs or signed cookies. You can do this by setting up an origin access identity (OAI) for your Amazon S3 bucket. You can also configure custom headers for a private HTTP server or an Amazon S3 bucket configured as a website endpoint.

All objects and buckets are private by default. Pre-signed URLs are useful if you want your user/customer to be able to upload a specific object to your bucket without requiring them to have AWS security credentials or permissions.
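For example, here is a hedged boto3 (Python) sketch of generating a time-limited pre-signed URL that lets a client upload one specific object without holding AWS credentials; the bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Generate a URL that allows an HTTP PUT of exactly this object for the
# next 15 minutes, signed with the caller's own credentials.
upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "example-private-bucket", "Key": "uploads/client-file.zip"},
    ExpiresIn=900,  # seconds
)

# Hand this URL to the client; anyone holding it can upload until it expires.
print(upload_url)
```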
You can likewise generate a pre-signed URL programmatically using the AWS SDK for Java or the AWS SDK for .NET. If you are using Microsoft Visual Studio, you can also use AWS Explorer to generate a pre-signed object URL without writing any code. Anyone who receives a valid pre-signed URL can then programmatically upload an object.

Hence, the correct answers are:

- Restrict access to files in the origin by creating an origin access identity (OAI) and give it permission to read the files in the bucket.
- Require the users to access the private content by using special CloudFront signed URLs or signed cookies.

The option that says: Use AWS App Mesh to ensure that only their client can access the files is incorrect because AWS App Mesh is just a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure.

The option that says: Use AWS Cloud Map to ensure that only their client can access the files is incorrect because AWS Cloud Map is simply a cloud resource discovery service that enables you to name your application resources with custom names and automatically update the locations of your dynamically changing resources.

The option that says: Use S3 pre-signed URLs to ensure that only their client can access the files. Remove permission to use Amazon S3 URLs to read the files for anyone else is incorrect. Although this could be a valid solution, it doesn't satisfy the requirement to serve the private content via CloudFront only to secure the distribution of files. A better solution is to set up an origin access identity (OAI) and then use signed URLs or signed cookies in your CloudFront web distribution.

References: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html Check out this Amazon CloudFront cheat sheet: https://tutorialsdojo.com/amazon-cloudfront/ S3 Pre-signed URLs vs CloudFront Signed URLs vs Origin Access Identity (OAI): https://tutorialsdojo.com/s3-pre-signed-urls-vs-cloudfront-signed-urls-vs-origin-access-identity-oai/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/", "references": "" }, { "question": ": A company plans to use a durable storage service to store on-premises database backups to the AWS cloud. To move their backup data, they need to use a service that can store and retrieve objects through standard file storage protocols for quick recovery. Which of the following options will meet this requirement?", "options": [ "A. Use Amazon EBS volumes to store all the backup data and attach it to an Amazon EC2", "B. Use AWS Snowball Edge to directly backup the data in Amazon S3 Glacier.", "C. Use the AWS Storage Gateway file gateway to store all the backup data in Amazon S3.", "D. Use the AWS Storage Gateway volume gateway to store the backup data and directly access" ], "correct": "C. Use the AWS Storage Gateway file gateway to store all the backup data in Amazon S3.", "explanation": "Explanation
File Gateway presents a file-based interface to Amazon S3, which appears as a network file share. It enables you to store and retrieve Amazon S3 objects through standard file storage protocols. File Gateway allows your existing file-based applications or devices to use secure and durable cloud storage without needing to be modified. With File Gateway, your configured S3 buckets will be available as Network File System (NFS) mount points or Server Message Block (SMB) file shares.

To store the backup data from on-premises to a durable cloud storage service, you can use File Gateway to store and retrieve objects through standard file storage protocols (SMB or NFS). File Gateway enables your existing file-based applications, devices, and workflows to use Amazon S3 without modification. File Gateway securely and durably stores both file contents and metadata as objects while providing your on-premises applications low-latency access to cached data.

Hence, the correct answer is: Use the AWS Storage Gateway file gateway to store all the backup data in Amazon S3.

The option that says: Use the AWS Storage Gateway volume gateway to store the backup data and directly access it using Amazon S3 API actions is incorrect. Although this is a possible solution, you cannot directly access the volume gateway using Amazon S3 APIs. You should use File Gateway to access your data in Amazon S3.

The option that says: Use Amazon EBS volumes to store all the backup data and attach it to an Amazon EC2 instance is incorrect. Take note that in the scenario, you are required to store the backup data in a durable storage service. An Amazon EBS volume is not highly durable like Amazon S3. Also, file storage protocols such as NFS or SMB are not directly supported by EBS.

The option that says: Use AWS Snowball Edge to directly backup the data in Amazon S3 Glacier is incorrect because AWS Snowball Edge cannot store and retrieve objects through standard file storage protocols. Also, Snowball Edge can't directly integrate backups to S3 Glacier.

References: https://aws.amazon.com/storagegateway/faqs/ https://aws.amazon.com/s3/storage-classes/ Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/", "references": "" }, { "question": ": A large insurance company has an AWS account that contains three VPCs (DEV, UAT and PROD) in the same region. UAT is peered to both PROD and DEV using a VPC peering connection. All VPCs have non-overlapping CIDR blocks. The company wants to push minor code releases from Dev to Prod to speed up time to market. Which of the following options helps the company accomplish this?", "options": [ "A. Change the DEV and PROD VPCs to have overlapping CIDR blocks to be able to connect them.", "B. Create a new VPC peering connection between PROD and DEV with the appropriate routes.", "C. Create a new entry to PROD in the DEV route table using the VPC peering connection as the", "D. Do nothing. Since these two VPCs are already connected via UAT, they already have a connection" ], "correct": "B. Create a new VPC peering connection between PROD and DEV with the appropriate routes.",
"explanation": "Explanation A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.

AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection and does not rely on a separate piece of physical hardware. There is no single point of failure for communication or a bandwidth bottleneck.

Creating a new entry to PROD in the DEV route table using the VPC peering connection as the target is incorrect because even if you configure the route tables, the two VPCs will still be disconnected until you set up a VPC peering connection between them.

Changing the DEV and PROD VPCs to have overlapping CIDR blocks to be able to connect them is incorrect because you cannot peer two VPCs with overlapping CIDR blocks.

The option that says: Do nothing. Since these two VPCs are already connected via UAT, they already have a connection to each other is incorrect as transitive VPC peering is not allowed; hence, even though DEV and PROD are both connected to UAT, these two VPCs do not have a direct connection to each other.", "references": "https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html Check out these Amazon VPC and VPC Peering Cheat Sheets: https://tutorialsdojo.com/amazon-vpc/ https://tutorialsdojo.com/vpc-peering/ Here is a quick introduction to VPC Peering: https://youtu.be/i1A1eH8vLtk" }, { "question": ": Due to the large volume of query requests, the database performance of an online reporting application significantly slowed down. The Solutions Architect is trying to convince her client to use an Amazon RDS Read Replica for their application instead of setting up a Multi-AZ Deployments configuration. What are two benefits of using Read Replicas over Multi-AZ that the Architect should point out? (Select TWO.)", "options": [ "A. It enhances the read performance of your primary database by increasing its IOPS and", "B. Allows both read and write operations on the read replica to complement the primary", "C. Provides synchronous replication and automatic failover in the case of Availability Zone", "D. Provides asynchronous replication and improves the performance of the primary database" ], "correct": "", "explanation": "Explanation Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. For the MySQL, MariaDB, PostgreSQL, and Oracle database engines, Amazon RDS creates a second DB instance using a snapshot of the source DB instance.
It then uses the engines' native asynchronous replication to update the read replica whenever there is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections; applications can connect to a read replica just as they would to any DB instance. Amazon RDS replicates all databases in the source DB instance.

When you create a read replica for Amazon RDS for MySQL, MariaDB, PostgreSQL, and Oracle, Amazon RDS sets up a secure communications channel using public-key encryption between the source DB instance and the read replica, even when replicating across regions. Amazon RDS establishes any AWS security configurations, such as adding security group entries, needed to enable the secure channel. You can also create read replicas within a Region or between Regions for your Amazon RDS for MySQL, MariaDB, PostgreSQL, and Oracle database instances encrypted at rest with AWS Key Management Service (KMS).

Hence, the correct answers are:

- It elastically scales out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
- Provides asynchronous replication and improves the performance of the primary database by taking read-heavy database workloads from it.

The option that says: Allows both read and write operations on the read replica to complement the primary database is incorrect as Read Replicas are primarily used to offload read-only operations from the primary database instance. By default, you can't do a write operation to your Read Replica.

The option that says: Provides synchronous replication and automatic failover in the case of Availability Zone service failures is incorrect as this is a benefit of Multi-AZ and not of a Read Replica. Moreover, Read Replicas provide an asynchronous type of replication, not synchronous replication.

The option that says: It enhances the read performance of your primary database by increasing its IOPS and accelerates its query processing via AWS Global Accelerator is incorrect because Read Replicas do not do anything to upgrade or increase the read throughput on the primary DB instance per se, but they provide a way for your application to fetch data from replicas. In this way, it improves the overall performance of your entire database tier (and not just the primary DB instance). It doesn't increase the IOPS nor use AWS Global Accelerator to accelerate the compute capacity of your primary database. AWS Global Accelerator is a networking service, not related to RDS, that directs user traffic to the application endpoint nearest the client, thus reducing internet latency and jitter. It simply routes the traffic to the closest edge location via Anycast.

References: https://aws.amazon.com/rds/details/read-replicas/ https://aws.amazon.com/rds/features/multi-az/ Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/ Additional tutorial - How do I make my RDS MySQL read replica writable? https://youtu.be/j5da6d2TIPc", "references": "" }, { "question": ": A major TV network has a web application running on eight Amazon T3 EC2 instances. The number of requests that the application processes are consistent and do not experience spikes.
To ensure that eight instances are running at all times, the Solutions Architect should create an Auto Scaling group and distribute the load evenly between all instances. Which of the following options can satisfy the given requirements?", "options": [ "A. Deploy eight EC2 instances with Auto Scaling in one Availability Zone behind an Amazon", "B. Deploy four EC2 instances with Auto Scaling in one region and four in another region", "C. Deploy two EC2 instances with Auto Scaling in four regions behind an Amazon Elastic", "D. Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another" ], "correct": "D. Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another", "explanation": "Explanation The best option is to deploy four EC2 instances in one Availability Zone and four in another Availability Zone in the same region behind an Amazon Elastic Load Balancer. In this way, if one Availability Zone goes down, there is still another available zone that can accommodate traffic.

When the first AZ goes down, the second AZ will only have an initial 4 EC2 instances. This will eventually be scaled up to 8 instances since the solution is using Auto Scaling. The 110% compute capacity for the 4 servers might cause some degradation of the service, but not a total outage, since there are still some instances that handle the requests. Depending on your scale-up configuration in your Auto Scaling group, the additional 4 EC2 instances can be launched in a matter of minutes.

T3 instances also have a Burstable Performance capability to burst or go beyond the current compute capacity of the instance to higher performance as required by your workload. So your 4 servers will be able to manage 110% compute capacity for a short period of time. This is the power of cloud computing versus an on-premises network architecture. It provides elasticity and unparalleled scalability.

Take note that Auto Scaling will launch additional EC2 instances to the remaining Availability Zone/s in the event of an Availability Zone outage in the region. Hence, the correct answer is the option that says: Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another Availability Zone in the same region behind an Amazon Elastic Load Balancer.

The option that says: Deploy eight EC2 instances with Auto Scaling in one Availability Zone behind an Amazon Elastic Load Balancer is incorrect because this architecture is not highly available. If that Availability Zone goes down, then your web application will be unreachable.

The options that say: Deploy four EC2 instances with Auto Scaling in one region and four in another region behind an Amazon Elastic Load Balancer and Deploy two EC2 instances with Auto Scaling in four regions behind an Amazon Elastic Load Balancer are incorrect because the ELB is designed to only run in one region and not across multiple regions.

References: https://aws.amazon.com/elasticloadbalancing/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html AWS Elastic Load Balancing Overview: https://youtu.be/UBl5dw59DO8 Check out this AWS Elastic Load Balancing (ELB) Cheat Sheet: https://tutorialsdojo.com/aws-elastic-load-balancing-elb/", "references": "" },
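Relating to the multi-AZ design chosen in the previous question, here is a hedged boto3 sketch of an Auto Scaling group that keeps eight instances spread across two subnets (one per Availability Zone) and registers them with an existing load balancer target group; all names, subnet IDs, and ARNs are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Eight instances minimum, balanced across two subnets in different AZs,
# attached to an existing ALB target group (placeholder ARN).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-launch-template", "Version": "$Latest"},
    MinSize=8,
    MaxSize=16,
    DesiredCapacity=8,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # one subnet per AZ
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef"
    ],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```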
{ "question": ": An aerospace engineering company recently adopted a hybrid cloud infrastructure with AWS. One of the Solutions Architect's tasks is to launch a VPC with both public and private subnets for their EC2 instances as well as their database instances. Which of the following statements are true regarding Amazon VPC subnets? (Select TWO.)", "options": [ "A. Each subnet spans to 2 Availability Zones.", "B. EC2 instances in a private subnet can communicate with the Internet only if they have an Elastic", "C. The allowed block size in VPC is between a /16 netmask (65,536 IP addresses) and /27", "D. Each subnet maps to a single Availability Zone." ], "correct": "", "explanation": "Explanation A VPC spans all the Availability Zones in the region. After creating a VPC, you can add one or more subnets in each Availability Zone. When you create a subnet, you specify the CIDR block for the subnet, which is a subset of the VPC CIDR block. Each subnet must reside entirely within one Availability Zone and cannot span zones. Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones. By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location.

Below are the important points you have to remember about subnets:

- Each subnet maps to a single Availability Zone.
- Every subnet that you create is automatically associated with the main route table for the VPC.
- If a subnet's traffic is routed to an Internet gateway, the subnet is known as a public subnet.

The option that says: EC2 instances in a private subnet can communicate with the Internet only if they have an Elastic IP is incorrect. EC2 instances in a private subnet can communicate with the Internet not just by having an Elastic IP, but also via a NAT instance or a NAT Gateway. Take note that there is a distinction between private and public IP addresses. To enable communication with the Internet, a public IPv4 address is mapped to the primary private IPv4 address through network address translation (NAT).

The option that says: The allowed block size in VPC is between a /16 netmask (65,536 IP addresses) and /27 netmask (32 IP addresses) is incorrect because the allowed block size in VPC is between a /16 netmask (65,536 IP addresses) and a /28 netmask (16 IP addresses), not a /27 netmask.

The option that says: Each subnet spans to 2 Availability Zones is incorrect because each subnet must reside entirely within one Availability Zone and cannot span zones.

References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate/", "references": "" }, { "question": ": A company plans to set up a cloud infrastructure in AWS. In the planning, it was discussed that you need to deploy two EC2 instances that should continuously run for three years. The CPU utilization of the EC2 instances is also expected to be stable and predictable.
Which is the most cost-efficient Amazon EC2 pricing type that is most appropriate for this scenario?", "options": [ "A. Spot instances", "B. Reserved Instances", "C. Dedicated Hosts", "D. On-Demand instances Correct Answer: B", "A. Configure the Security Group of the EC2 instance to permit ingress traffic over port 22 from your", "B. Configure the Network Access Control List of your VPC to permit ingress traffic over port", "C. Use Amazon Data Lifecycle Manager.", "D. Configure the Security Group of the EC2 instance to permit ingress traffic over port 3389" ], "correct": "A. Configure the Security Group of the EC2 instance to permit ingress traffic over port 22 from your", "explanation": "Explanation When connecting to your EC2 instance via SSH, you need to ensure that port 22 is allowed on the security group of your EC2 instance.

A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group.

Using Amazon Data Lifecycle Manager is incorrect because this is primarily used to manage the lifecycle of your AWS resources, not to allow certain traffic to go through.

Configuring the Network Access Control List of your VPC to permit ingress traffic over port 22 from your IP is incorrect because this is not necessary in this scenario, as it was specified that you were able to connect to other EC2 instances. In addition, a network ACL is more suitable for controlling the traffic that goes in and out of your entire VPC, not just one EC2 instance.

Configuring the Security Group of the EC2 instance to permit ingress traffic over port 3389 from your IP is incorrect because this is relevant to RDP and not SSH.", "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html Check out these AWS Comparison of Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/" }, { "question": ": A Solutions Architect needs to deploy a mobile application that can collect votes for a popular singing competition. Millions of users from around the world will submit votes using their mobile phones. These votes must be collected and stored in a highly scalable and highly available data store which will be queried for real-time ranking.
You just specify the data for your app with simple code statements a nd AWS AppSync manages everything needed to keep the a pp data updated in real time. This will allow your app to access data in Amazon DynamoDB, trigger AWS Lambda functions, or run Amazon Elasticsearch queries and combine data from these services to pro vide the exact data you need for your app. Amazon Redshift and AWS Mobile Hub are incorrect as Amazon Redshift is mainly used as a data warehouse and for online analytic processing (OLAP) . Although this service can be used for this scenar io, DynamoDB is still the top choice given its better d urability and scalability. 54 of 137 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Amazon Relational Database Service (RDS) and Amazon MQ and Amazon Aurora and Amazon Cognito are possible answers in this scenario, howe ver, DynamoDB is much more suitable for simple mobile apps that do not have complicated data relationships compared with e nterprise web applications. It is stated in the sce nario that the mobile app will be used from around the wo rld, which is why you need a data storage service which can be supported globally. It would be a mana gement overhead to implement multi-region deployment for your RDS and Aurora database instanc es compared to using the Global table feature of DynamoDB. References: https://aws.amazon.com/dynamodb/faqs/ https://aws.amazon.com/appsync/ Amazon DynamoDB Overview: https://www.youtube.com/watch?v=3ZOyUNIeorU Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", "references": "" }, { "question": ": A FinTech startup deployed an application on an Ama zon EC2 instance with attached Instance Store volumes and an Elastic IP address. The server is on ly accessed from 8 AM to 6 PM and can be stopped from 6 PM to 8 AM for cost efficiency using Lambda with the script that automates this based on tags. Which of the following will occur when the EC2 inst ance is stopped and started? (Select TWO.)", "options": [ "A. The underlying host for the instance is possib ly changed.", "B. The ENI (Elastic Network Interface) is detache d.", "C. All data on the attached instance-store device s will be lost.", "D. The Elastic IP address is disassociated with t he instance." ], "correct": "", "explanation": "Explanation This question did not mention the specific type of EC2 instance, however, it says that it will be stop ped and started. Since only EBS-backed instances can be sto pped and restarted, it is implied that the instance is EBS-backed. Remember that an instance store-backed instance can only be rebooted or terminated and its data will be erased if the EC2 instance is either s topped or terminated. If you stopped an EBS-backed EC2 instance, the volu me is preserved but the data in any attached instan ce store volume will be erased. Keep in mind that an E C2 instance has an underlying physical host compute r. If the instance is stopped, AWS usually moves the i nstance to a new host computer. Your instance may s tay on the same host computer if there are no problems with the host computer. In addition, its Elastic IP address is disassociated from the instance if it is an EC2-Classic instance. Otherwise, if it is an EC 2-VPC instance, the Elastic IP address remains associated . 
Take note that an EBS-backed EC2 instance can have attached Instance Store volumes. This is the reason why there is an option that mentions the Instance Store volume, which is placed to test your understanding of this specific storage type. You can launch an EBS-backed EC2 instance and attach several Instance Store volumes, but remember that there are some EC2 instance types that don't support this kind of setup.

Hence, the correct answers are:

- The underlying host for the instance is possibly changed.
- All data on the attached instance-store devices will be lost.

The option that says: The ENI (Elastic Network Interface) is detached is incorrect because the ENI will stay attached even if you stopped your EC2 instance.

The option that says: The Elastic IP address is disassociated with the instance is incorrect because the EIP will actually remain associated with your instance even after stopping it.

The option that says: There will be no changes is incorrect because there will be a lot of possible changes in your EC2 instance once you stop and start it again. AWS may move the virtualized EC2 instance to another host computer; the instance may get a new public IP address, and the data in your attached instance store volumes will be deleted.

References: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-lifecycle.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate-saa-c02/", "references": "" }, { "question": ": A media company recently launched their newly created web application. Many users tried to visit the website, but they are receiving a 503 Service Unavailable Error. The system administrator tracked the EC2 instance status and saw the capacity is reaching its maximum limit and is unable to process all the requests. To gain insights from the application's data, they need to launch a real-time analytics service. Which of the following allows you to read records in batches?", "options": [ "A. Create a Kinesis Data Stream and use AWS Lambda to read records from the data stream.", "B. Create an Amazon S3 bucket to store the captured data and use Amazon Athena to analyze", "C. Create a Kinesis Data Firehose and use AWS Lambda to read records from the data stream.", "D. Create an Amazon S3 bucket to store the captured data and use Amazon Redshift Spectrum" ], "correct": "A. Create a Kinesis Data Stream and use AWS Lambda to read records from the data stream.", "explanation": "Explanation Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources.

You can use an AWS Lambda function to process records in Amazon KDS. By default, Lambda invokes your function as soon as records are available in the stream. Lambda can process up to 10 batches in each shard simultaneously. If you increase the number of concurrent batches per shard, Lambda still ensures in-order processing at the partition-key level.
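A hedged sketch of the consuming Lambda function is shown here: it receives a batch of Kinesis records in one invocation and decodes each base64-encoded payload; the processing step is a placeholder.

```python
import base64
import json

def lambda_handler(event, context):
    # Lambda delivers Kinesis records in batches under event["Records"].
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        item = json.loads(payload)  # assumes JSON payloads; adjust for your format
        # Placeholder processing step, e.g. aggregate or persist the record.
        print(record["kinesis"]["partitionKey"], item)

    # Returning normally tells Lambda the whole batch succeeded.
    return {"batchSize": len(event["Records"])}
```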
The first time you invoke your function, AWS Lambda creates an instance of the function and runs its handler method to process the event. When the function returns a response, it stays active and waits to process additional events. If you invoke the function again while the first event is being processed, Lambda initializes another instance, and the function processes the two events concurrently. As more events come in, Lambda routes them to available instances and creates new instances as needed. When the number of requests decreases, Lambda stops unused instances to free up scaling capacity for other functions.

Since the media company needs a real-time analytics service, you can use Kinesis Data Streams to gain insights from your data. The data collected is available in milliseconds. Use AWS Lambda to read records in batches and invoke your function to process records from the batch. If the batch that Lambda reads from the stream only has one record in it, Lambda sends only one record to the function.

Hence, the correct answer in this scenario is: Create a Kinesis Data Stream and use AWS Lambda to read records from the data stream.

The option that says: Create a Kinesis Data Firehose and use AWS Lambda to read records from the data stream is incorrect. Although Amazon Kinesis Data Firehose captures and loads data in near real time, AWS Lambda can't be set as its destination. You can write Lambda functions and integrate them with Kinesis Data Firehose to request additional, customized processing of the data before it is sent downstream. However, this integration is primarily used for stream processing and not the actual consumption of the data stream. You have to use a Kinesis Data Stream in this scenario.

The options that say: Create an Amazon S3 bucket to store the captured data and use Amazon Athena to analyze the data and Create an Amazon S3 bucket to store the captured data and use Amazon Redshift Spectrum to analyze the data are both incorrect. As per the scenario, the company needs a real-time analytics service that can ingest and process data. You need to use Amazon Kinesis to process the data in real time.

References: https://aws.amazon.com/kinesis/data-streams/ https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html https://aws.amazon.com/premiumsupport/knowledge-center/503-error-classic/ Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/", "references": "" }, { "question": ": The media company that you are working for has a video transcoding application running on Amazon EC2. Each EC2 instance polls a queue to find out which video should be transcoded, and then runs a transcoding process. If this process is interrupted, the video will be transcoded by another instance based on the queuing system. This application has a large backlog of videos which need to be transcoded. Your manager would like to reduce this backlog by adding more EC2 instances; however, these instances are only needed until the backlog is reduced. In this scenario, which type of Amazon EC2 instance is the most cost-effective type to use?", "options": [ "A. Spot instances", "B. Reserved instances", "C. Dedicated instances", "D. On-demand instances" ],
"correct": "A. Spot instances", "explanation": "Explanation You require an instance that will be used not as a primary server but as a spare compute resource to augment the transcoding process of your application. These instances should also be terminated once the backlog has been significantly reduced. In addition, the scenario mentions that if the current process is interrupted, the video can be transcoded by another instance based on the queuing system. This means that the application can gracefully handle an unexpected termination of an EC2 instance, like in the event of a Spot instance termination when the Spot price is greater than your set maximum price. Hence, an Amazon EC2 Spot instance is the best and most cost-effective option for this scenario.

Amazon EC2 Spot instances are spare compute capacity in the AWS cloud available to you at steep discounts compared to On-Demand prices. EC2 Spot enables you to optimize your costs on the AWS cloud and scale your application's throughput up to 10X for the same budget. By simply selecting Spot when launching EC2 instances, you can save up to 90% on On-Demand prices. The only difference between On-Demand instances and Spot Instances is that Spot instances can be interrupted by EC2 with two minutes of notification when EC2 needs the capacity back.

You can specify whether Amazon EC2 should hibernate, stop, or terminate Spot Instances when they are interrupted. You can choose the interruption behavior that meets your needs.

Take note that there is no \"bid price\" anymore for Spot EC2 instances since March 2018. You simply have to set your maximum price instead.

Reserved instances and Dedicated instances are incorrect as both do not act as spare compute capacity.

On-Demand instances is a valid option, but a Spot instance is much cheaper than On-Demand.

References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/how-spot-instances-work.html https://aws.amazon.com/blogs/compute/new-amazon-ec2-spot-pricing Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/", "references": "" }, { "question": ": A company has an On-Demand EC2 instance located in a subnet in AWS that hosts a web application. The security group attached to this EC2 instance has the following Inbound Rules: The Route table attached to the VPC is shown below. You can establish an SSH connection into the EC2 instance from the Internet. However, you are not able to connect to the web server using your Chrome browser. Which of the below steps would resolve the issue?", "options": [ "A. In the Route table, add this new route entry: 10.0.0.0/27 -> local", "B. In the Route table, add this new route entry: 0.0.0.0 -> igw-b51618cc", "C. In the Security Group, add an Inbound HTTP rule.", "D. In the Security Group, remove the SSH rule." ], "correct": "C. In the Security Group, add an Inbound HTTP rule.", "explanation": "Explanation In this particular scenario, you can already connect to the EC2 instance via SSH. This means that there is no problem in the Route Table of your VPC. To fix this issue, you simply need to update your Security Group and add an inbound rule to allow HTTP traffic.
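As an illustration of that fix, here is a hedged boto3 sketch of adding an inbound HTTP rule to a security group; the group ID is a placeholder, and opening port 80 to 0.0.0.0/0 is only appropriate for a public web server.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTP (TCP 80) from anywhere on the instance's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP from the Internet"}],
        }
    ],
)
```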
The option that says: In the Security Group, remove the SSH rule is incorrect as doing so will not solve the issue. It will just disable SSH traffic that is already available.

The options that say: In the Route table, add this new route entry: 0.0.0.0 -> igw-b51618cc and In the Route table, add this new route entry: 10.0.0.0/27 -> local are incorrect as there is no need to change the Route Tables.", "references": "http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/" }, { "question": ": A company is hosting an application on EC2 instances that regularly pushes and fetches data in Amazon S3. Due to a change in compliance, the instances need to be moved to a private subnet. Along with this change, the company wants to lower the data transfer costs by configuring its AWS resources. How can this be accomplished in the MOST cost-efficient manner?", "options": [ "A. Create an Amazon S3 gateway endpoint to enable a connection between the instances and", "B. Set up a NAT Gateway in the public subnet to connect to Amazon S3.", "C. Create an Amazon S3 interface endpoint to enable a connection between the instances and", "D. Set up an AWS Transit Gateway to access Amazon S3." ], "correct": "A. Create an Amazon S3 gateway endpoint to enable a connection between the instances and", "explanation": "Explanation VPC endpoints for Amazon S3 simplify access to S3 from within a VPC by providing configurable and highly reliable secure connections to S3 that do not require an internet gateway or Network Address Translation (NAT) device. When you create an S3 VPC endpoint, you can attach an endpoint policy to it that controls access to Amazon S3.

You can use two types of VPC endpoints to access Amazon S3: gateway endpoints and interface endpoints. A gateway endpoint is a gateway that you specify in your route table to access Amazon S3 from your VPC over the AWS network. Interface endpoints extend the functionality of gateway endpoints by using private IP addresses to route requests to Amazon S3 from within your VPC, on-premises, or from a different AWS Region. Interface endpoints are compatible with gateway endpoints. If you have an existing gateway endpoint in the VPC, you can use both types of endpoints in the same VPC.
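A hedged boto3 sketch of creating such a gateway endpoint for S3 and attaching it to the private subnet's route table follows; the VPC ID, route table ID, and the Region in the service name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a gateway VPC endpoint for S3 and add its prefix-list route
# to the route table used by the private subnet.
response = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",                 # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",      # adjust the Region as needed
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],       # placeholder route table ID
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```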
The option that says: Set up an AWS Transit Gateway to access Amazon S3 is incorrect because this service is mainly used for connecting VPCs and on-premises networks through a central hub. References: https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-gateway.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A company is hosting EC2 instances that are on a non-production environment and processing non-priority batch loads, which can be interrupted at any time. What is the best instance purchasing option which can be applied to your EC2 instances in this case?", "options": [ "A. Spot Instances", "B. On-Demand Capacity Reservations", "C. Reserved Instances", "D. On-Demand Instances" ], "correct": "", "explanation": "Explanation Amazon EC2 Spot instances are spare compute capacity in the AWS cloud available to you at steep discounts compared to On-Demand prices. They can be interrupted by Amazon EC2 with two minutes of notification when EC2 needs the capacity back. To use Spot Instances, you create a Spot Instance request that includes the number of instances, the instance type, the Availability Zone, and the maximum price that you are willing to pay per instance hour. If your maximum price exceeds the current Spot price, Amazon EC2 fulfills your request immediately if capacity is available. Otherwise, Amazon EC2 waits until your request can be fulfilled or until you cancel the request. References: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html https://aws.amazon.com/ec2/spot/ Amazon EC2 Overview: https://youtu.be/7VsGIHT_jQE Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/", "references": "" }, { "question": ": A Solutions Architect is working for a financial company. The manager wants to have the ability to automatically transfer obsolete data from their S3 bucket to a low-cost storage system in AWS. What is the best solution that the Architect can provide to them?", "options": [ "A. Use an EC2 instance and a scheduled job to transfer the obsolete data from their S3 location", "B. Use Lifecycle Policies in S3 to move obsolete data to Glacier.", "C. Use CloudEndure Migration.", "D. Use Amazon SQS." ], "correct": "B. Use Lifecycle Policies in S3 to move obsolete data to Glacier.", "explanation": "Explanation In this scenario, you can use lifecycle policies in S3 to automatically move obsolete data to Glacier. Lifecycle configuration in Amazon S3 enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: Transition actions, in which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. Expiration actions, in which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
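The sketch below shows one way to express such a rule with boto3, assuming a hypothetical bucket name and the one-year transition to Glacier described above.

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket holding the obsolete financial data.
BUCKET = "example-financial-reports"

# Transition every object to the GLACIER storage class one year after
# creation; Amazon S3 applies the rule automatically, no EC2 job is needed.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-obsolete-data",
                "Filter": {"Prefix": ""},   # apply to the whole bucket
                "Status": "Enabled",
                "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
            }
        ]
    },
)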
The option that says: Use an EC2 instance and a scheduled job to transfer the obsolete data from their S3 location to Amazon S3 Glacier is incorrect because you don't need to create a scheduled job in EC2, as you can simply use the lifecycle policy in S3. The option that says: Use Amazon SQS is incorrect as SQS is not a storage service. Amazon SQS is primarily used to decouple your applications by queueing the incoming requests of your application. The option that says: Use CloudEndure Migration is incorrect because this service is just a highly automated lift-and-shift (rehost) solution that simplifies, expedites, and reduces the cost of migrating applications to AWS. You cannot use this to automatically transition your S3 objects to a cheaper storage class. References: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html https://aws.amazon.com/blogs/aws/archive-s3-to-glacier/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate/", "references": "" }, { "question": ": A manufacturing company has EC2 instances running in AWS. The EC2 instances are configured with Auto Scaling. There are a lot of requests being lost because of too much load on the servers. The Auto Scaling is launching new EC2 instances to take the load accordingly, yet there are still some requests that are being lost. Which of the following is the MOST suitable solution that you should implement to avoid losing recently submitted requests?", "options": [ "A. Set up Amazon Aurora Serverless for on-demand, auto-scaling configuration of your EC2 Instances and also enable the Amazon Aurora Parallel Query feature for faster analytical queries over your current data.", "B. Use an Amazon SQS queue to decouple the application components and scale-out the EC2", "C. Use larger instances for your application with an attached Elastic Fabric Adapter (EFA).", "D. Replace the Auto Scaling group with a cluster placement group to achieve a low-latency" ], "correct": "B. Use an Amazon SQS queue to decouple the application components and scale-out the EC2", "explanation": "Explanation Amazon Simple Queue Service (SQS) is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Building applications from individual components that each perform a discrete function improves scalability and reliability, and is best practice design for modern applications. SQS makes it simple and cost-effective to decouple and coordinate the components of a cloud application. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be always available.
The number of messages in your Amazon SQS queue does not solely define the number of instances needed. In fact, the number of instances in the fleet can be driven by multiple factors, including how long it takes to process a message and the acceptable amount of latency (queue delay). The solution is to use a backlog per instance metric with the target value being the acceptable backlog per instance to maintain. You can calculate these numbers as follows: Backlog per instance: To determine your backlog per instance, start with the Amazon SQS metric ApproximateNumberOfMessages to determine the length of the SQS queue (number of messages available for retrieval from the queue). Divide that number by the fleet's running capacity, which for an Auto Scaling group is the number of instances in the InService state, to get the backlog per instance. Acceptable backlog per instance: To determine your target value, first calculate what your application can accept in terms of latency. Then, take the acceptable latency value and divide it by the average time that an EC2 instance takes to process a message. To illustrate with an example, let's say that the current ApproximateNumberOfMessages is 1500 and the fleet's running capacity is 10. If the average processing time is 0.1 seconds for each message and the longest acceptable latency is 10 seconds, then the acceptable backlog per instance is 10 / 0.1, which equals 100. This means that 100 is the target value for your target tracking policy. Because the backlog per instance is currently at 150 (1500 / 10), your fleet scales out by five instances to maintain proportion to the target value. Hence, the correct answer is: Use an Amazon SQS queue to decouple the application components and scale-out the EC2 instances based upon the ApproximateNumberOfMessages metric in Amazon CloudWatch. Replacing the Auto Scaling group with a cluster placement group to achieve a low-latency network performance necessary for tightly-coupled node-to-node communication is incorrect because although it is true that a cluster placement group allows you to achieve low-latency network performance, you still need to use Auto Scaling for your architecture to add more EC2 instances. Using larger instances for your application with an attached Elastic Fabric Adapter (EFA) is incorrect because using a larger EC2 instance would not prevent data from being lost in case of a larger spike. You can take advantage of the durability and elasticity of SQS to keep the messages available for consumption by your instances. Elastic Fabric Adapter (EFA) is simply a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communications at scale on AWS. Setting up Amazon Aurora Serverless for on-demand, auto-scaling configuration of your EC2 Instances and also enabling the Amazon Aurora Parallel Query feature for faster analytical queries over your current data is incorrect because although the Amazon Aurora Parallel Query feature provides faster analytical queries over your current data, Amazon Aurora Serverless is an on-demand, auto-scaling configuration for your database, and NOT for your EC2 instances. This is actually an auto-scaling configuration for your Amazon Aurora database and not for your compute services.
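For reference, the backlog-per-instance arithmetic described above can be sketched in a few lines of Python; the queue URL is a placeholder and the capacity, processing-time, and latency values are the sample figures from the worked example.

import boto3

sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

# Length of the queue (messages available for retrieval).
attrs = sqs.get_queue_attributes(
    QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
)
queue_length = int(attrs["Attributes"]["ApproximateNumberOfMessages"])  # e.g. 1500

running_capacity = 10          # InService instances in the Auto Scaling group
avg_processing_time = 0.1      # seconds to process one message
acceptable_latency = 10.0      # seconds of queue delay the application tolerates

backlog_per_instance = queue_length / running_capacity       # 150 in the example
target_backlog = acceptable_latency / avg_processing_time    # 100 in the example

print(f"backlog per instance: {backlog_per_instance}, target: {target_backlog}")
# A target tracking policy with target_backlog as its target value scales the
# fleet out until backlog_per_instance falls back to roughly that target.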
References: https://aws.amazon.com/sqs/ https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/", "references": "" }, { "question": ": A travel company has a suite of web applications hosted in an Auto Scaling group of On-Demand EC2 instances behind an Application Load Balancer that handles traffic from various web domains such as i-love-manila.com, i-love-boracay.com, i-love-cebu.com and many others. To improve security and lessen the overall cost, you are instructed to secure the system by allowing multiple domains to serve SSL traffic without the need to reauthenticate and reprovision your certificate every time you add a new domain. This migration from HTTP to HTTPS will help improve their SEO and Google search ranking. Which of the following is the most cost-effective solution to meet the above requirement?", "options": [ "A. Use a wildcard certificate to handle multiple sub-domains and different domains.", "B. Add a Subject Alternative Name (SAN) for each additional domain to your certificate.", "C. Upload all SSL certificates of the domains in the ALB using the console and bind multiple", "D. Create a new CloudFront web distribution and configure it to serve HTTPS requests using" ], "correct": "C. Upload all SSL certificates of the domains in the ALB using the console and bind multiple", "explanation": "Explanation SNI Custom SSL relies on the SNI extension of the Transport Layer Security protocol, which allows multiple domains to serve SSL traffic over the same IP address by including the hostname which the viewers are trying to connect to. You can host multiple TLS-secured applications, each with its own TLS certificate, behind a single load balancer. In order to use SNI, all you need to do is bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client. These features are provided at no additional charge. To meet the requirements in the scenario, you can upload all SSL certificates of the domains in the ALB using the console and bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client using Server Name Indication (SNI). Hence, the correct answer is the option that says: Upload all SSL certificates of the domains in the ALB using the console and bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client using Server Name Indication (SNI).
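A minimal boto3 sketch of binding additional certificates to an existing HTTPS listener is shown below; the listener and certificate ARNs are placeholders.

import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARNs for the ALB's HTTPS listener and the per-domain certificates.
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/example-alb/abc123/def456"
EXTRA_CERT_ARNS = [
    "arn:aws:acm:us-east-1:123456789012:certificate/cert-i-love-boracay",
    "arn:aws:acm:us-east-1:123456789012:certificate/cert-i-love-cebu",
]

# Each additional certificate is attached to the same secure listener;
# the ALB then picks the right certificate per client via SNI.
elbv2.add_listener_certificates(
    ListenerArn=LISTENER_ARN,
    Certificates=[{"CertificateArn": arn} for arn in EXTRA_CERT_ARNS],
)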
Using a wildcard certificate to handle multiple sub-domains and different domains is incorrect because a wildcard certificate can only handle multiple sub-domains but not different domains. Adding a Subject Alternative Name (SAN) for each additional domain to your certificate is incorrect because although a SAN would work, you will still have to reauthenticate and reprovision your certificate every time you add a new domain. One of the requirements in the scenario is that you should not have to reauthenticate and reprovision your certificate; hence, this solution is incorrect. The option that says: Create a new CloudFront web distribution and configure it to serve HTTPS requests using dedicated IP addresses in order to associate your alternate domain names with a dedicated IP address in each CloudFront edge location is incorrect because although it is valid to use dedicated IP addresses to meet this requirement, this solution is not cost-effective. Remember that if you configure CloudFront to serve HTTPS requests using dedicated IP addresses, you incur an additional monthly charge. The charge begins when you associate your SSL/TLS certificate with your CloudFront distribution. You can simply upload the certificates to the ALB and use SNI to handle multiple domains in a cost-effective manner. References: https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/ https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cnames-https-dedicated-ip-or-sni.html#cnames-https-dedicated-ip https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/ SNI Custom SSL vs Dedicated IP Custom SSL: https://tutorialsdojo.com/sni-custom-ssl-vs-dedicated-ip-custom-ssl/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/", "references": "" }, { "question": ": A new online banking platform has been re-designed to have a microservices architecture in which complex applications are decomposed into smaller, independent services. The new platform is using Docker, considering that application containers are optimal for running small, decoupled services. The new solution should remove the need to provision and manage servers, let you specify and pay for resources per application, and improve security through application isolation by design. Which of the following is the MOST suitable service to use to migrate this new platform to AWS?", "options": [ "A. Amazon EBS", "B. Amazon EFS", "C. Amazon EKS", "D. AWS Fargate" ], "correct": "D. AWS Fargate", "explanation": "Explanation AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and paying for additional servers. Fargate runs each task or pod in its own kernel, providing the tasks and pods their own isolated compute environment. This enables your application to have workload isolation and improved security by design. This is why customers such as Vanguard, Accenture, Foursquare, and Ancestry have chosen to run their mission-critical applications on Fargate.
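To make the idea concrete, the sketch below launches one containerized task on Fargate with boto3; the cluster name, task definition, subnet, and security group are all placeholders.

import boto3

ecs = boto3.client("ecs")

# Run one copy of a hypothetical microservice task definition on Fargate,
# with no EC2 instances to provision or manage.
ecs.run_task(
    cluster="banking-platform",                  # placeholder cluster name
    launchType="FARGATE",
    taskDefinition="payments-service:1",         # placeholder task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],        # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],     # placeholder
            "assignPublicIp": "DISABLED",
        }
    },
)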
Hence, the correct answer is: AWS Fargate. Amazon EKS is incorrect because this is more suitable to run the Kubernetes management infrastructure and not Docker. It does not remove the need to provision and manage servers, nor let you specify and pay for resources per application, unlike AWS Fargate. Amazon EFS is incorrect because this is a file system for Linux-based workloads for use with AWS Cloud services and on-premises resources. Amazon EBS is incorrect because this is primarily used to provide persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. References: https://aws.amazon.com/fargate/ https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_GetStarted_Fargate.html Check out this Amazon ECS Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-container-service-amazon-ecs/", "references": "" }, { "question": ": A company has established a dedicated network connection from its on-premises data center to AWS Cloud using AWS Direct Connect (DX). The core network services, such as the Domain Name System (DNS) service and Active Directory services, are all hosted on-premises. The company has new AWS accounts that will also require consistent and dedicated access to these network services. Which of the following can satisfy this requirement with the LEAST amount of operational overhead and in a cost-effective manner?", "options": [ "A. Create a new AWS VPN CloudHub. Set up a Virtual Private Network (VPN) connection for additional AWS accounts.", "B. Set up a new Direct Connect gateway and integrate it with the existing Direct Connect", "C. Set up another Direct Connect connection for each and every new AWS account that will be", "D. Create a new Direct Connect gateway and integrate it with the existing Direct Connect" ], "correct": "D. Create a new Direct Connect gateway and integrate it with the existing Direct Connect", "explanation": "Explanation AWS Transit Gateway provides a hub and spoke design for connecting VPCs and on-premises networks. You can attach all your hybrid connectivity (VPN and Direct Connect connections) to a single Transit Gateway, consolidating and controlling your organization's entire AWS routing configuration in one place. It also controls how traffic is routed among all the connected spoke networks using route tables. This hub and spoke model simplifies management and reduces operational costs because VPCs only connect to the Transit Gateway to gain access to the connected networks. By attaching a transit gateway to a Direct Connect gateway using a transit virtual interface, you can manage a single connection for multiple VPCs or VPNs that are in the same AWS Region. You can also advertise prefixes from on-premises to AWS and from AWS to on-premises. The AWS Transit Gateway and AWS Direct Connect solution simplifies the management of connections between an Amazon VPC and your networks over a private connection. It can also minimize network costs, improve bandwidth throughput, and provide a more reliable network experience than Internet-based connections. Hence, the correct answer is: Create a new Direct Connect gateway and integrate it with the existing Direct Connect connection. Set up a Transit Gateway between AWS accounts and associate it with the Direct Connect gateway.
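A rough boto3 sketch of that wiring is shown below, assuming the transit virtual interface on the existing DX connection is attached separately; the names and ASN are placeholders and the calls are simplified (the transit gateway takes a few minutes to become available before association).

import boto3

dx = boto3.client("directconnect")
ec2 = boto3.client("ec2")

# 1. Create the Direct Connect gateway (the ASN value is an assumption).
dx_gateway = dx.create_direct_connect_gateway(
    directConnectGatewayName="shared-dx-gateway",
    amazonSideAsn=64512,
)["directConnectGateway"]

# 2. Create the Transit Gateway that the VPCs of the other accounts attach to.
tgw = ec2.create_transit_gateway(
    Description="Hub for on-premises DNS and Active Directory access"
)["TransitGateway"]

# 3. Associate the Transit Gateway with the Direct Connect gateway so traffic
#    from all attached VPCs can ride the existing DX connection.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dx_gateway["directConnectGatewayId"],
    gatewayId=tgw["TransitGatewayId"],
)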
The option that says: Set up another Direct Connect connection for each and every new AWS account that will be added is incorrect because this solution entails a significant amount of additional cost. Setting up a single DX connection requires a substantial budget and takes a lot of time to establish. It also has high management overhead since you will need to manage all of the Direct Connect connections for all AWS accounts. The option that says: Create a new AWS VPN CloudHub. Set up a Virtual Private Network (VPN) connection for additional AWS accounts is incorrect because a VPN connection is not capable of providing consistent and dedicated access to the on-premises network services. Take note that a VPN connection traverses the public Internet and doesn't use a dedicated connection. The option that says: Set up a new Direct Connect gateway and integrate it with the existing Direct Connect connection. Configure a VPC peering connection between AWS accounts and associate it with the Direct Connect gateway is incorrect because VPC peering is not supported in a Direct Connect connection. VPC peering does not support transitive peering relationships. References: https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-transit-gateways.html https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-aws-transit-gateway.html https://aws.amazon.com/blogs/networking-and-content-delivery/integrating-sub-1-gbps-hosted-connections-with-aws-transit-gateway/ Check out this AWS Transit Gateway Cheat Sheet: https://tutorialsdojo.com/aws-transit-gateway/", "references": "" }, { "question": ": A company is storing its financial reports and regulatory documents in an Amazon S3 bucket. To comply with the IT audit, they tasked their Solutions Architect to track all new objects added to the bucket as well as the removed ones. It should also track whether a versioned object is permanently deleted. The Architect must configure Amazon S3 to publish notifications for these events to a queue for post-processing and to an Amazon SNS topic that will notify the Operations team. Which of the following is the MOST suitable solution that the Architect should implement?", "options": [ "A. Create a new Amazon SNS topic and Amazon SQS queue. Add an S3 event notification configuration on the bucket to publish s3:ObjectCreated:* and ObjectRemoved:DeleteMarkerCreated event types to SQS and SNS.", "B. Create a new Amazon SNS topic and Amazon MQ. Add an S3 event notification", "C. Create a new Amazon SNS topic and Amazon MQ. Add an S3 event notification", "D. Create a new Amazon SNS topic and Amazon SQS queue. Add an S3 event notification" ], "correct": "D. Create a new Amazon SNS topic and Amazon SQS queue. Add an S3 event notification", "explanation": "Explanation The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket.
To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. You store this configuration in the notification subresource that is associated with a bucket. Amazon S3 provides an API for you to manage this subresource. Amazon S3 event notifications typically deliver events in seconds but can sometimes take a minute or longer. If two writes are made to a single non-versioned object at the same time, it is possible that only a single event notification will be sent. If you want to ensure that an event notification is sent for every successful write, you can enable versioning on your bucket. With versioning, every successful write will create a new version of your object and will also send an event notification. Amazon S3 can publish notifications for the following events: 1. New object created events 2. Object removal events 3. Restore object events 4. Reduced Redundancy Storage (RRS) object lost events 5. Replication events Amazon S3 supports the following destinations where it can publish events: 1. Amazon Simple Notification Service (Amazon SNS) topic 2. Amazon Simple Queue Service (Amazon SQS) queue 3. AWS Lambda If your notification ends up writing to the bucket that triggers the notification, this could cause an execution loop. For example, if the bucket triggers a Lambda function each time an object is uploaded and the function uploads an object to the bucket, then the function indirectly triggers itself. To avoid this, use two buckets, or configure the trigger to only apply to a prefix used for incoming objects. Hence, the correct answer is: Create a new Amazon SNS topic and Amazon SQS queue. Add an S3 event notification configuration on the bucket to publish s3:ObjectCreated:* and s3:ObjectRemoved:Delete event types to SQS and SNS. The option that says: Create a new Amazon SNS topic and Amazon MQ. Add an S3 event notification configuration on the bucket to publish s3:ObjectAdded:* and s3:ObjectRemoved:* event types to SQS and SNS is incorrect. There is no s3:ObjectAdded:* type in Amazon S3. You should add an S3 event notification configuration on the bucket to publish events of the s3:ObjectCreated:* type instead. Moreover, Amazon S3 does not support Amazon MQ as a destination to publish events. The option that says: Create a new Amazon SNS topic and Amazon SQS queue. Add an S3 event notification configuration on the bucket to publish s3:ObjectCreated:* and ObjectRemoved:DeleteMarkerCreated event types to SQS and SNS is incorrect because the s3:ObjectRemoved:DeleteMarkerCreated type is only triggered when a delete marker is created for a versioned object and not when an object is deleted or a versioned object is permanently deleted. The option that says: Create a new Amazon SNS topic and Amazon MQ. Add an S3 event notification configuration on the bucket to publish s3:ObjectCreated:* and ObjectRemoved:DeleteMarkerCreated event types to SQS and SNS is incorrect because Amazon S3 does not publish event messages to Amazon MQ. You should use an Amazon SQS queue instead. In addition, the s3:ObjectRemoved:DeleteMarkerCreated type is only triggered when a delete marker is created for a versioned object. Remember that the scenario asked to publish events when an object is deleted or a versioned object is permanently deleted.
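A minimal sketch of the notification configuration from the correct answer is shown below; the bucket and topic identifiers are placeholders, the topic's access policy is assumed to allow S3 to publish, and in this variant the post-processing SQS queue is assumed to be subscribed to the SNS topic (SNS fan-out) so a single S3 configuration feeds both the queue and the Operations team.

import boto3

s3 = boto3.client("s3")

BUCKET = "financial-reports-bucket"                                  # placeholder
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:s3-object-events"    # placeholder

# Publish new-object and permanent-deletion events to the SNS topic; the
# SQS queue subscribed to this topic receives the same events for
# post-processing.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": TOPIC_ARN,
                "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:Delete"],
            }
        ]
    },
)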
References: https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html https://docs.aws.amazon.com/AmazonS3/latest/dev/ways-to-add-notification-config-to-bucket.html https://aws.amazon.com/blogs/aws/s3-event-notification/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ Amazon SNS Overview: https://www.youtube.com/watch?v=ft5R45lEUJ8", "references": "" }, { "question": ": A data analytics company is setting up an innovative checkout-free grocery store. Their Solutions Architect developed a real-time monitoring application that uses smart sensors to collect the items that the customers are getting from the grocery's refrigerators and shelves, then automatically deducts them from their accounts. The company wants to analyze the items that are frequently being bought and store the results in S3 for durable storage to determine the purchase behavior of its customers. What service must be used to easily capture, transform, and load streaming data into Amazon S3, Amazon Elasticsearch Service, and Splunk?", "options": [ "A. Amazon Kinesis Data Firehose", "B. Amazon Redshift", "C. Amazon Kinesis", "D. Amazon SQS" ], "correct": "", "explanation": "Explanation Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you are already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. In this setup, you gather the data from your smart sensors and use Kinesis Data Firehose to prepare and load the data. S3 will be used as a method of durably storing the data for analytics and the eventual ingestion of data for output using analytical tools. You can use Amazon Kinesis Data Firehose in conjunction with Amazon Kinesis Data Streams if you need to implement real-time processing of streaming big data. Kinesis Data Streams provides an ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting, aggregation, and filtering). Amazon Simple Queue Service (Amazon SQS) is different from Amazon Kinesis Data Firehose. SQS offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. Amazon SQS lets you easily move data between distributed application components and helps you build applications in which messages are processed independently (with message-level ack/fail semantics), such as automated workflows. Amazon Kinesis Data Firehose is primarily used to load streaming data into data stores and analytics tools.
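For illustration, the sketch below pushes one sensor reading into a Kinesis Data Firehose delivery stream that is assumed to already exist and to deliver to S3; the stream name and record fields are placeholders.

import boto3
import json

firehose = boto3.client("firehose")

DELIVERY_STREAM = "grocery-purchases"    # placeholder delivery stream name

# One purchase event captured by the smart sensors; Firehose buffers,
# optionally transforms, and delivers the records to the S3 destination.
record = {"customer_id": "c-1001", "item": "milk-1l", "quantity": 1}

firehose.put_record(
    DeliveryStreamName=DELIVERY_STREAM,
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)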
Hence, the correct answer is: Amazon Kinesis Data Firehose. Amazon Kinesis is incorrect because this is the streaming data platform of AWS and has four distinct services under it: Kinesis Data Firehose, Kinesis Data Streams, Kinesis Video Streams, and Amazon Kinesis Data Analytics. For the specific use case asked in the scenario, use Kinesis Data Firehose. Amazon Redshift is incorrect because this is mainly used for data warehousing, making it simple and cost-effective to analyze your data across your data warehouse and data lake. It does not meet the requirement of being able to load and stream data into data stores for analytics. You have to use Kinesis Data Firehose instead. Amazon SQS is incorrect because you can't capture, transform, and load streaming data into Amazon S3, Amazon Elasticsearch Service, and Splunk using this service. You have to use Kinesis Data Firehose instead. References: https://aws.amazon.com/kinesis/data-firehose/ https://aws.amazon.com/kinesis/data-streams/faqs/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate-saa-c02/", "references": "" }, { "question": ": A company is using an Amazon VPC that has a CIDR block of 10.31.0.0/27 and is connected to the on-premises data center. There was a requirement to create a Lambda function that will process massive amounts of cryptocurrency transactions every minute and then store the results to EFS. After setting up the serverless architecture and connecting the Lambda function to the VPC, the Solutions Architect noticed an increase in invocation errors with EC2 error types such as EC2ThrottledException at certain times of the day. Which of the following are the possible causes of this issue? (Select TWO.)", "options": [ "A. You only specified one subnet in your Lambda function configuration. That single subnet", "B. The attached IAM execution role of your function does not have the necessary permissions", "C. The associated security group of your function does not allow outbound connections.", "D. Your VPC does not have sufficient subnet ENIs or subnet IPs." ], "correct": "", "explanation": "Explanation You can configure a function to connect to a virtual private cloud (VPC) in your account. Use Amazon Virtual Private Cloud (Amazon VPC) to create a private network for resources such as databases, cache instances, or internal services. Connect your function to the VPC to access private resources during execution. AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC. Lambda functions cannot connect directly to a VPC with dedicated instance tenancy.
To connect to resources in a dedicated VPC, peer it to a second VPC with default tenancy. Your Lambda function automatically scales based on the number of events it processes. If your Lambda function accesses a VPC, you must make sure that your VPC has sufficient ENI capacity to support the scale requirements of your Lambda function. It is also recommended that you specify at least one subnet in each Availability Zone in your Lambda function configuration. By specifying subnets in each of the Availability Zones, your Lambda function can run in another Availability Zone if one goes down or runs out of IP addresses. If your VPC does not have sufficient ENIs or subnet IPs, your Lambda function will not scale as requests increase, and you will see an increase in invocation errors with EC2 error types like EC2ThrottledException. For asynchronous invocation, if you see an increase in errors without corresponding CloudWatch Logs, invoke the Lambda function synchronously in the console to get the error responses. Hence, the correct answers for this scenario are: - You only specified one subnet in your Lambda function configuration. That single subnet runs out of available IP addresses and there is no other subnet or Availability Zone which can handle the peak load. - Your VPC does not have sufficient subnet ENIs or subnet IPs. The option that says: Your VPC does not have a NAT gateway is incorrect because an issue in the NAT Gateway is unlikely to cause a request throttling issue or produce an EC2ThrottledException error in Lambda. As per the scenario, the issue is happening only at certain times of the day, which means that the issue is only intermittent and the function works at other times. We can also conclude that an availability issue is not the problem since the application is already using a highly available NAT Gateway and not just a NAT instance. The option that says: The associated security group of your function does not allow outbound connections is incorrect because if the associated security group does not allow outbound connections, then the Lambda function will not work at all in the first place. Remember that as per the scenario, the issue only happens intermittently. In addition, Internet traffic restrictions do not usually produce EC2ThrottledException errors. The option that says: The attached IAM execution role of your function does not have the necessary permissions to access the resources of your VPC is incorrect because, just as explained above, the issue is intermittent and thus the IAM execution role of the function does have the necessary permissions to access the resources of the VPC since it works at those specific times. In case the issue is indeed caused by a permission problem, then an EC2AccessDeniedException error would most likely be returned and not an EC2ThrottledException error.
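A minimal boto3 sketch of attaching the function to subnets in multiple Availability Zones is shown below; the function name, subnet IDs, and security group ID are placeholders.

import boto3

lambda_client = boto3.client("lambda")

# Spread the function across subnets in at least two Availability Zones so
# that it still has ENI/IP capacity if one subnet runs out of addresses.
lambda_client.update_function_configuration(
    FunctionName="crypto-transaction-processor",       # placeholder
    VpcConfig={
        "SubnetIds": [
            "subnet-0aaa1111bbb22222a",  # subnet in AZ-a (placeholder)
            "subnet-0ccc3333ddd44444b",  # subnet in AZ-b (placeholder)
        ],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],   # placeholder
    },
)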
References: https://docs.aws.amazon.com/lambda/latest/dg/vpc.html https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/ https://aws.amazon.com/premiumsupport/knowledge-center/lambda-troubleshoot-invoke-error-502-500/ Check out this AWS Lambda Cheat Sheet: https://tutorialsdojo.com/aws-lambda/", "references": "" }, { "question": ": A tech startup is launching an on-demand food delivery platform using an Amazon ECS cluster with an AWS Fargate serverless compute engine and Amazon Aurora. It is expected that the database read queries will significantly increase in the coming weeks. A Solutions Architect recently launched two Read Replicas to the database cluster to improve the platform's scalability. Which of the following is the MOST suitable configuration that the Architect should implement to load balance all of the incoming read requests equally to the two Read Replicas?", "options": [ "A. Use the built-in Reader endpoint of the Amazon Aurora database.", "B. Enable Amazon Aurora Parallel Query.", "C. Create a new Network Load Balancer to evenly distribute the read queries to the Read Replicas of the Amazon Aurora database.", "D. Use the built-in Cluster endpoint of the Amazon Aurora database." ], "correct": "A. Use the built-in Reader endpoint of the Amazon Aurora database.", "explanation": "Explanation Amazon Aurora typically involves a cluster of DB instances instead of a single instance. Each connection is handled by a specific DB instance. When you connect to an Aurora cluster, the hostname and port that you specify point to an intermediate handler called an endpoint. Aurora uses the endpoint mechanism to abstract these connections. Thus, you don't have to hardcode all the hostnames or write your own logic for load-balancing and rerouting connections when some DB instances aren't available. For certain Aurora tasks, different instances or groups of instances perform different roles. For example, the primary instance handles all data definition language (DDL) and data manipulation language (DML) statements. Up to 15 Aurora Replicas handle read-only query traffic. Using endpoints, you can map each connection to the appropriate instance or group of instances based on your use case. For example, to perform DDL statements, you can connect to whichever instance is the primary instance. To perform queries, you can connect to the reader endpoint, with Aurora automatically performing load-balancing among all the Aurora Replicas. For clusters with DB instances of different capacities or configurations, you can connect to custom endpoints associated with different subsets of DB instances. For diagnosis or tuning, you can connect to a specific instance endpoint to examine details about a specific DB instance. A reader endpoint for an Aurora DB cluster provides load-balancing support for read-only connections to the DB cluster. Use the reader endpoint for read operations, such as queries. By processing those statements on the read-only Aurora Replicas, this endpoint reduces the overhead on the primary instance. It also helps the cluster to scale the capacity to handle simultaneous SELECT queries, proportional to the number of Aurora Replicas in the cluster. Each Aurora DB cluster has one reader endpoint. If the cluster contains one or more Aurora Replicas, the reader endpoint load-balances each connection request among the Aurora Replicas. In that case, you can only perform read-only statements such as SELECT in that session. If the cluster only contains a primary instance and no Aurora Replicas, the reader endpoint connects to the primary instance. In that case, you can perform write operations through the endpoint.
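To show how the reader endpoint is consumed, the sketch below opens a read-only connection with the third-party PyMySQL driver (an assumption, since the scenario does not name a client library); the endpoint, credentials, and schema are placeholders.

import pymysql  # assumed MySQL-compatible client library

# The cluster's reader endpoint; Aurora load-balances each new connection
# across the available Aurora Replicas behind this single hostname.
READER_ENDPOINT = "food-delivery.cluster-ro-abc123.us-east-1.rds.amazonaws.com"  # placeholder

connection = pymysql.connect(
    host=READER_ENDPOINT,
    user="report_user",           # placeholder
    password="example-password",  # placeholder
    database="orders",            # placeholder
)

with connection.cursor() as cursor:
    # Read-only query served by one of the two Read Replicas.
    cursor.execute("SELECT COUNT(*) FROM deliveries WHERE status = 'PENDING'")
    print(cursor.fetchone())

connection.close()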
Hence, the correct answer is to use the built-in Reader endpoint of the Amazon Aurora database. The option that says: Use the built-in Cluster endpoint of the Amazon Aurora database is incorrect because a cluster endpoint (also known as a writer endpoint) simply connects to the current primary DB instance for that DB cluster. This endpoint can perform write operations in the database, such as DDL statements, which is perfect for handling production traffic but not suitable for handling queries for reporting since there will be no write database operations that will be sent. The option that says: Enable Amazon Aurora Parallel Query is incorrect because this feature simply enables Amazon Aurora to push down and distribute the computational load of a single query across thousands of CPUs in Aurora's storage layer. Take note that it does not load balance all of the incoming read requests equally to the two Read Replicas. With Parallel Query, query processing is pushed down to the Aurora storage layer. The query gains a large amount of computing power, and it needs to transfer far less data over the network. In the meantime, the Aurora database instance can continue serving transactions with much less interruption. This way, you can run transactional and analytical workloads alongside each other in the same Aurora database, while maintaining high performance. The option that says: Create a new Network Load Balancer to evenly distribute the read queries to the Read Replicas of the Amazon Aurora database is incorrect because a Network Load Balancer is not the suitable service/component to use for this requirement since an NLB is primarily used to distribute traffic to servers, not Read Replicas. You have to use the built-in Reader endpoint of the Amazon Aurora database instead. References: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.Endpoints.html https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.html https://aws.amazon.com/rds/aurora/parallel-query/ Amazon Aurora Overview: https://youtu.be/iwS1h7rLNBQ Check out this Amazon Aurora Cheat Sheet: https://tutorialsdojo.com/amazon-aurora/", "references": "" }, { "question": ": A company is using multiple AWS accounts that are consolidated using AWS Organizations. They want to copy several S3 objects to another S3 bucket that belongs to a different AWS account which they also own. The Solutions Architect was instructed to set up the necessary permissions for this task and to ensure that the destination account owns the copied objects and not the account they were sent from. How can the Architect accomplish this requirement?", "options": [ "A. Set up cross-origin resource sharing (CORS) in S3 by creating a bucket policy that allows", "B. Enable the Requester Pays feature in the source S3 bucket. The fees would be waived", "C. Configure cross-account permissions in S3 by creating an IAM customer-managed policy", "D. Connect the two S3 buckets from two different AWS accounts to Amazon WorkDocs. Set up" ],
"correct": "", "explanation": "Explanation By default, an S3 object is owned by the account that uploaded the object. That's why granting the destination account the permissions to perform the cross-account copy makes sure that the destination owns the copied objects. You can also change the ownership of an object by changing its access control list (ACL) to bucket-owner-full-control. However, object ACLs can be difficult to manage for multiple objects, so it's a best practice to grant programmatic cross-account permissions to the destination account. Object ownership is important for managing permissions using a bucket policy. For a bucket policy to apply to an object in the bucket, the object must be owned by the account that owns the bucket. You can also manage object permissions using the object's ACL. However, object ACLs can be difficult to manage for multiple objects, so it's best practice to use the bucket policy as a centralized method for setting permissions. To be sure that a destination account owns an S3 object copied from another account, grant the destination account the permissions to perform the cross-account copy. Follow these steps to configure cross-account permissions to copy objects from a source bucket in Account A to a destination bucket in Account B: - Attach a bucket policy to the source bucket in Account A. - Attach an AWS Identity and Access Management (IAM) policy to a user or role in Account B. - Use the IAM user or role in Account B to perform the cross-account copy. Hence, the correct answer is: Configure cross-account permissions in S3 by creating an IAM customer-managed policy that allows an IAM user or role to copy objects from the source bucket in one account to the destination bucket in the other account. Then attach the policy to the IAM user or role that you want to use to copy objects between accounts. The option that says: Enable the Requester Pays feature in the source S3 bucket. The fees would be waived through Consolidated Billing since both AWS accounts are part of AWS Organizations is incorrect because the Requester Pays feature is primarily used if you want the requester, instead of the bucket owner, to pay the cost of the data transfer request and download from the S3 bucket. This solution lacks the necessary IAM permissions to satisfy the requirement. The most suitable solution here is to configure cross-account permissions in S3. The option that says: Set up cross-origin resource sharing (CORS) in S3 by creating a bucket policy that allows an IAM user or role to copy objects from the source bucket in one account to the destination bucket in the other account is incorrect because CORS simply defines a way for client web applications that are loaded in one domain to interact with resources in a different domain, not in a different AWS account. The option that says: Connect the two S3 buckets from two different AWS accounts to Amazon WorkDocs. Set up cross-account access to integrate the two S3 buckets. Use the Amazon WorkDocs console to copy the objects from one account to the other with modified object ownership assigned to the destination account is incorrect because Amazon WorkDocs is commonly used to easily collaborate, share content, provide rich feedback, and collaboratively edit documents with other users. There is no direct way for you to integrate WorkDocs and an Amazon S3 bucket owned by a different AWS account. A better solution here is to use cross-account permissions in S3 to meet the requirement.
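A minimal sketch of the final copy step is shown below, run with credentials from the destination account (Account B) that already carry the cross-account permissions described above; the bucket names and object key are placeholders.

import boto3

# This client must use an IAM user or role in Account B (the destination
# account) that is allowed to read from the source bucket in Account A.
s3 = boto3.client("s3")

SOURCE_BUCKET = "account-a-source-bucket"        # placeholder, owned by Account A
DEST_BUCKET = "account-b-destination-bucket"     # placeholder, owned by Account B
KEY = "reports/2023/q1.csv"                      # placeholder object key

# Because the destination account performs the copy, it owns the new object.
s3.copy_object(
    Bucket=DEST_BUCKET,
    Key=KEY,
    CopySource={"Bucket": SOURCE_BUCKET, "Key": KEY},
)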
References: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html https://aws.amazon.com/premiumsupport/knowledge-center/copy-s3-objects-account/ https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-s3/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A document sharing website is using AWS as its cloud infrastructure. Free users can upload a total of 5 GB of data while premium users can upload as much as 5 TB. Their application uploads the user files, which can have a max file size of 1 TB, to an S3 bucket. In this scenario, what is the best way for the application to upload the large files in S3?", "options": [ "A. Use Multipart Upload", "B. Use a single PUT request to upload the large file", "C. Use AWS Import/Export", "D. Use AWS Snowball" ], "correct": "A. Use Multipart Upload", "explanation": "Explanation The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability. The Multipart Upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: you initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts and you can then access the object just as you would any other object in your bucket. Using a single PUT request to upload the large file is incorrect because the largest file size you can upload using a single PUT request is 5 GB. Files larger than this will fail to be uploaded. Using AWS Snowball is incorrect because this is a migration tool that lets you transfer large amounts of data from your on-premises data center to AWS S3 and vice versa. This tool is not suitable for the given scenario. And when you provision Snowball, the device gets transported to you, and not to your customers. Therefore, you bear the responsibility of securing the device. Using AWS Import/Export is incorrect because Import/Export is similar to AWS Snowball in such a way that it is meant to be used as a migration tool, and not for multiple customer consumption such as in the given scenario. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html https://aws.amazon.com/s3/faqs/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A solutions architect is formulating a strategy for a startup that needs to transfer 50 TB of on-premises data to Amazon S3. The startup has a slow network transfer speed between its data center and AWS which causes a bottleneck for data migration. Which of the following should the solutions architect implement?", "options": [ "A. Integrate AWS Storage Gateway File Gateway with the on-premises data center.",
"B. Request an Import Job to Amazon S3 using a Snowball device in the AWS Snowball", "C. Enable Amazon S3 Transfer Acceleration on the target S3 bucket.", "D. Deploy an AWS Migration Hub Discovery agent in the on-premises data center." ], "correct": "B. Request an Import Job to Amazon S3 using a Snowball device in the AWS Snowball", "explanation": "Explanation AWS Snowball uses secure, rugged devices so you can bring AWS computing and storage capabilities to your edge environments, and transfer data into and out of AWS. The service delivers you Snowball Edge devices with storage and optional Amazon EC2 and AWS IoT Greengrass compute in shippable, hardened, secure cases. With AWS Snowball, you bring cloud capabilities for machine learning, data analytics, processing, and storage to your edge for migrations, short-term data collection, or even long-term deployments. AWS Snowball devices work with or without the internet, do not require a dedicated IT operator, and are designed to be used in remote environments. Hence, the correct answer is: Request an Import Job to Amazon S3 using a Snowball device in the AWS Snowball Console. The option that says: Deploy an AWS Migration Hub Discovery agent in the on-premises data center is incorrect. The AWS Migration Hub service is just a central service that provides a single location to track the progress of application migrations across multiple AWS and partner solutions. The option that says: Enable Amazon S3 Transfer Acceleration on the target S3 bucket is incorrect because this S3 feature is not suitable for large-scale data migration. Enabling this feature won't always guarantee faster data transfer as it's only beneficial for long-distance transfers to and from your Amazon S3 buckets. The option that says: Integrate AWS Storage Gateway File Gateway with the on-premises data center is incorrect because this service is mostly used for building hybrid cloud solutions where you still need on-premises access to unlimited cloud storage. Based on the scenario, this service is not the best option because you would still rely on the existing low-bandwidth internet connection. References: https://aws.amazon.com/snowball https://aws.amazon.com/blogs/storage/making-it-even-simpler-to-create-and-manage-your-aws-snow-family-jobs/ Check out this AWS Snowball Cheat Sheet: https://tutorialsdojo.com/aws-snowball/ AWS Snow Family Overview: https://www.youtube.com/watch?v=9Ar-51Ip53Q", "references": "" }, { "question": "A global online sports betting company has its popular web application hosted in AWS. They are planning to develop a new online portal for their new business venture and they hired you to implement the cloud architecture for a new online portal that will accept bets globally for world sports. You started to design the system with a relational database that runs on a single EC2 instance, which requires a single EBS volume that can support up to 30,000 IOPS. In this scenario, which Amazon EBS volume type can you use that will meet the performance requirements of this new online portal?", "options": [ "A. EBS General Purpose SSD (gp2)", "B. EBS Cold HDD (sc1)", "C. EBS Provisioned IOPS SSD (io1)", "D. EBS Throughput Optimized HDD (st1)" ], "correct": "C. EBS Provisioned IOPS SSD (io1)",
"explanation": "Explanation The scenario requires a storage type for a relational database with high IOPS performance. For these scenarios, SSD volumes are more suitable to use instead of HDD volumes. Remember that the dominant performance attribute of SSD is IOPS, while for HDD it is throughput. In the exam, always consider this difference between SSD and HDD. It will allow you to easily eliminate specific EBS types in the options which are not SSD or not HDD, depending on whether the question asks for a storage type which has small, random I/O operations or large, sequential I/O operations. Since the requirement is 30,000 IOPS, you have to use an EBS type of Provisioned IOPS SSD. This provides sustained performance for mission-critical, low-latency workloads. Hence, EBS Provisioned IOPS SSD (io1) is the correct answer. EBS Throughput Optimized HDD (st1) and EBS Cold HDD (sc1) are incorrect because these are HDD volumes which are more suitable for large streaming workloads rather than transactional database workloads. EBS General Purpose SSD (gp2) is incorrect because although a General Purpose SSD volume can be used for this scenario, it does not provide the high IOPS required by the application, unlike the Provisioned IOPS SSD volume. Reference: https://aws.amazon.com/ebs/details/ Amazon EBS Overview - SSD vs HDD: https://www.youtube.com/watch?v=LW7x8wyLFvw Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", "references": "" }, { "question": ": A company needs to use Amazon Aurora as the Amazon RDS database engine of their web application. The Solutions Architect has been instructed to implement a 90-day backup retention policy. Which of the following options can satisfy the given requirement?", "options": [ "A. Configure an automated backup and set the backup retention period to 90 days.", "B. Create a daily scheduled event using CloudWatch Events and AWS Lambda to directly", "C. Configure RDS to export the automated snapshot automatically to Amazon S3 and create a", "D. Create an AWS Backup plan to take daily snapshots with a retention period of 90 days." ], "correct": "D. Create an AWS Backup plan to take daily snapshots with a retention period of 90 days.", "explanation": "Explanation AWS Backup is a centralized backup service that makes it easy and cost-effective for you to back up your application data across AWS services in the AWS Cloud, helping you meet your business and regulatory backup compliance requirements. AWS Backup makes protecting your AWS storage volumes, databases, and file systems simple by providing a central place where you can configure and audit the AWS resources you want to back up, automate backup scheduling, set retention policies, and monitor all recent backup and restore activity. In this scenario, you can use AWS Backup to create a backup plan with a retention period of 90 days. A backup plan is a policy expression that defines when and how you want to back up your AWS resources. You assign resources to backup plans, and AWS Backup then automatically backs up and retains backups for those resources according to the backup plan. Hence, the correct answer is: Create an AWS Backup plan to take daily snapshots with a retention period of 90 days. The option that says: Configure an automated backup and set the backup retention period to 90 days is incorrect because the maximum backup retention period for automated backup is only 35 days. The option that says: Configure RDS to export the automated snapshot automatically to Amazon S3 and create a lifecycle policy to delete the object after 90 days is incorrect because you can't export an automated snapshot automatically to Amazon S3. You must export the snapshot manually. The option that says: Create a daily scheduled event using CloudWatch Events and AWS Lambda to directly download the RDS automated snapshot to an S3 bucket and archive snapshots older than 90 days to Glacier is incorrect because you cannot directly download or export an automated snapshot in RDS to Amazon S3. You have to copy the automated snapshot first for it to become a manual snapshot, which you can move to an Amazon S3 bucket. A better solution for this scenario is to simply use AWS Backup. References: https://docs.aws.amazon.com/aws-backup/latest/devguide/create-a-scheduled-backup.html https://aws.amazon.com/backup/faqs/ Check out these AWS Cheat Sheets: https://tutorialsdojo.com/links-to-all-aws-cheat-sheets/", "references": "" }, { "question": ": A company is deploying a Microsoft SharePoint Server environment on AWS using CloudFormation. The Solutions Architect needs to install and configure the architecture that is composed of Microsoft Active Directory (AD) domain controllers, Microsoft SQL Server 2012, multiple Amazon EC2 instances to host the Microsoft SharePoint Server and many other dependencies. The Architect needs to ensure that the required components are properly running before the stack creation proceeds. Which of the following should the Architect do to meet this requirement?", "options": [ "A. Configure the UpdateReplacePolicy attribute in the CloudFormation template. Send a", "B. Configure the DependsOn attribute in the CloudFormation template. Send a success signal", "C. Configure a CreationPolicy attribute to the instance in the CloudFormation template. Send", "D. Configure a UpdatePolicy attribute to the instance in the CloudFormation template. Send a" ], "correct": "C. Configure a CreationPolicy attribute to the instance in the CloudFormation template. Send", "explanation": "Explanation You can associate the CreationPolicy attribute with a resource to prevent its status from reaching create complete until AWS CloudFormation receives a specified number of success signals or the timeout period is exceeded. To signal a resource, you can use the cfn-signal helper script or the SignalResource API. AWS CloudFormation publishes valid signals to the stack events so that you can track the number of signals sent. The creation policy is invoked only when AWS CloudFormation creates the associated resource. Currently, the only AWS CloudFormation resources that support creation policies are AWS::AutoScaling::AutoScalingGroup, AWS::EC2::Instance, and AWS::CloudFormation::WaitCondition. Use the CreationPolicy attribute when you want to wait on resource configuration actions before stack creation proceeds. For example, if you install and configure software applications on an EC2 instance, you might want those applications to be running before proceeding. In such cases, you can add a CreationPolicy attribute to the instance, and then send a success signal to the instance after the applications are installed and configured. Hence, the option that says: Configure a CreationPolicy attribute to the instance in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script is correct. The option that says: Configure the DependsOn attribute in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-init helper script is incorrect because the cfn-init helper script is not suitable to be used to signal another resource. You have to use cfn-signal instead. And although you can use the DependsOn attribute to ensure the creation of a specific resource follows another, it is still better to use the CreationPolicy attribute instead as it ensures that the applications are properly running before the stack creation proceeds. The option that says: Configure a UpdatePolicy attribute to the instance in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script is incorrect because the UpdatePolicy attribute is primarily used for updating resources and for stack update rollback operations. The option that says: Configure the UpdateReplacePolicy attribute in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script is incorrect because the UpdateReplacePolicy attribute is primarily used to retain or, in some cases, back up the existing physical instance of a resource when it is replaced during a stack update operation. References: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-creationpolicy.html https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/deploying.applications.html#deployment-walkthrough-cfn-signal https://aws.amazon.com/blogs/devops/use-a-creationpolicy-to-wait-for-on-instance-configurations/ Check out this AWS CloudFormation Cheat Sheet: https://tutorialsdojo.com/aws-cloudformation/ AWS CloudFormation - Templates, Stacks, Change Sets: https://www.youtube.com/watch?v=9Xpuprxg7aY", "references": "" }, { "question": ": A company needs to collect gigabytes of data per second from websites and social media feeds to gain insights on its product offerings and continuously improve the user experience. To meet this design requirement, you have developed an application hosted on an Auto Scaling group of Spot EC2 instances which processes the data and stores the results to DynamoDB and Redshift. The solution should have a built-in enhanced fan-out feature. Which fully-managed AWS service can you use to collect and process large streams of data records in real-time with the LEAST amount of administrative overhead?", "options": [ "A. Amazon Redshift with AWS Cloud Development Kit (AWS CDK)", "B. Amazon Managed Streaming for Apache Kafka (Amazon MSK)", "C. Amazon Kinesis Data Streams", "D. Amazon S3 Access Points" ], "correct": "C. 
Hence, the correct answer is: Create an AWS Backup plan to take daily snapshots with a retention period of 90 days.

The option that says: Configure an automated backup and set the backup retention period to 90 days is incorrect because the maximum backup retention period for automated backup is only 35 days.

The option that says: Configure RDS to export the automated snapshot automatically to Amazon S3 and create a lifecycle policy to delete the object after 90 days is incorrect because you can't export an automated snapshot automatically to Amazon S3. You must export the snapshot manually.

The option that says: Create a daily scheduled event using CloudWatch Events and AWS Lambda to directly download the RDS automated snapshot to an S3 bucket. Archive snapshots older than 90 days to Glacier is incorrect because you cannot directly download or export an automated snapshot in RDS to Amazon S3. You have to copy the automated snapshot first for it to become a manual snapshot, which you can move to an Amazon S3 bucket. A better solution for this scenario is to simply use AWS Backup.

References: https://docs.aws.amazon.com/aws-backup/latest/devguide/create-a-scheduled-backup.html https://aws.amazon.com/backup/faqs/ Check out these AWS Cheat Sheets: https://tutorialsdojo.com/links-to-all-aws-cheat-sheets/", "references": "" }, { "question": ": A company is deploying a Microsoft SharePoint Server environment on AWS using CloudFormation. The Solutions Architect needs to install and configure the architecture that is composed of Microsoft Active Directory (AD) domain controllers, Microsoft SQL Server 2012, multiple Amazon EC2 instances to host the Microsoft SharePoint Server, and many other dependencies. The Architect needs to ensure that the required components are properly running before the stack creation proceeds. Which of the following should the Architect do to meet this requirement?", "options": [ "A. Configure the UpdateReplacePolicy attribute in the CloudFormation template. Send a", "B. Configure the DependsOn attribute in the CloudFormation template. Send a success signal", "C. Configure a CreationPolicy attribute to the instance in the CloudFormation template. Send", "D. Configure a UpdatePolicy attribute to the instance in the CloudFormation template. Send a" ], "correct": "C. Configure a CreationPolicy attribute to the instance in the CloudFormation template. Send", "explanation": "Explanation You can associate the CreationPolicy attribute with a resource to prevent its status from reaching create complete until AWS CloudFormation receives a specified number of success signals or the timeout period is exceeded. To signal a resource, you can use the cfn-signal helper script or the SignalResource API. AWS CloudFormation publishes valid signals to the stack events so that you can track the number of signals sent.

The creation policy is invoked only when AWS CloudFormation creates the associated resource. Currently, the only AWS CloudFormation resources that support creation policies are AWS::AutoScaling::AutoScalingGroup, AWS::EC2::Instance, and AWS::CloudFormation::WaitCondition.

Use the CreationPolicy attribute when you want to wait on resource configuration actions before stack creation proceeds.
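For illustration only, and not part of the original explanation, here is a hypothetical sketch of where the attribute sits on an EC2 instance resource, wrapped in a boto3 create_stack call; the AMI ID, timeout, and stack name are made-up values.

import json
import boto3

template = {
    'Resources': {
        'WebServer': {
            'Type': 'AWS::EC2::Instance',
            'Properties': {
                'ImageId': 'ami-12345678',    # placeholder AMI
                'InstanceType': 't3.medium',
                # UserData would install the applications and then run:
                #   /opt/aws/bin/cfn-signal -e $? --stack <stack> --resource WebServer --region <region>
            },
            # CloudFormation waits here until the signal arrives or the timeout passes.
            'CreationPolicy': {
                'ResourceSignal': {'Count': 1, 'Timeout': 'PT30M'}
            },
        }
    }
}

boto3.client('cloudformation').create_stack(
    StackName='sharepoint-demo',              # hypothetical stack name
    TemplateBody=json.dumps(template),
)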
For example, if you install and configure software applications on an EC2 instance, you might want those applications to be running before proceeding. In such cases, you can add a CreationPolicy attribute to the instance, and then send a success signal to the instance after the applications are installed and configured.

Hence, the option that says: Configure a CreationPolicy attribute to the instance in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script is correct.

The option that says: Configure the DependsOn attribute in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-init helper script is incorrect because the cfn-init helper script is not suitable to be used to signal another resource. You have to use cfn-signal instead. And although you can use the DependsOn attribute to ensure the creation of a specific resource follows another, it is still better to use the CreationPolicy attribute instead as it ensures that the applications are properly running before the stack creation proceeds.

The option that says: Configure a UpdatePolicy attribute to the instance in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script is incorrect because the UpdatePolicy attribute is primarily used for updating resources and for stack update rollback operations.

The option that says: Configure the UpdateReplacePolicy attribute in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script is incorrect because the UpdateReplacePolicy attribute is primarily used to retain or, in some cases, back up the existing physical instance of a resource when it is replaced during a stack update operation.

References: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-creationpolicy.html https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/deploying.applications.html#deployment-walkthrough-cfn-signal https://aws.amazon.com/blogs/devops/use-a-creationpolicy-to-wait-for-on-instance-configurations/ Check out this AWS CloudFormation Cheat Sheet: https://tutorialsdojo.com/aws-cloudformation/ AWS CloudFormation - Templates, Stacks, Change Sets: https://www.youtube.com/watch?v=9Xpuprxg7aY", "references": "" }, { "question": ": A company needs to collect gigabytes of data per second from websites and social media feeds to gain insights on its product offerings and continuously improve the user experience. To meet this design requirement, you have developed an application hosted on an Auto Scaling group of Spot EC2 instances which processes the data and stores the results to DynamoDB and Redshift. The solution should have a built-in enhanced fan-out feature. Which fully-managed AWS service can you use to collect and process large streams of data records in real-time with the LEAST amount of administrative overhead?", "options": [ "A. Amazon Redshift with AWS Cloud Development Kit (AWS CDK)", "B. Amazon Managed Streaming for Apache Kafka (Amazon MSK)", "C. Amazon Kinesis Data Streams", "D. Amazon S3 Access Points" ], "correct": "C.
Amazon Kinesis Data Streams", "explanation": "Explanation Amazon Kinesis Data Streams is used to collect and process large streams of data records in real-time. You can use Kinesis Data Streams for rapid and cont inuous data intake and aggregation. The type of dat a used includes IT infrastructure log data, applicati on logs, social media, market data feeds, and web clickstream data. Because the response time for the data intake and processing is in real-time, the processing is typically lightweight. 104 of 137 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam The following diagram illustrates the high-level ar chitecture of Kinesis Data Streams. The producers continually push data to Kinesis Data Streams, and the consumers process the data in real-time. Consum ers (such as a custom application running on Amazon EC2 or an Amazon Kinesis Data Firehose delivery stream) can store their results using an AWS servic e such as Amazon DynamoDB, Amazon Redshift, or Amazon S3. Hence, the correct answer is: Amazon Kinesis Data S treams. Amazon S3 Access Points is incorrect because this i s mainly used to manage access of your S3 objects. Amazon S3 access points are named network endpoints that are attached to buckets that you can use to perform S3 object operations, such as uploading and retrieving objects. Amazon Redshift with AWS Cloud Development Kit (AWS CDK) is incorrect because this is mainly used for data warehousing making it simple and cost -effective to analyze your data across your data warehouse and data lake. Again, it does not meet th e requirement of being able to collect and process large streams of data in real-time. Using the AWS Cloud D evelopment Kit (AWS CDK) with Amazon Redshift still won't satisfy this requirement. Amazon Managed Streaming for Apache Kafka (Amazon M SK) is incorrect. Although you can process streaming data in real-time with Amazon MSK, this s ervice still entails a lot of administrative overhe ad, unlike Amazon Kinesis. Moreover, it doesn't have a built-in enhanced fan-out feature as required in th e scenario. References: https://docs.aws.amazon.com/streams/latest/dev/intr oduction.html 105 of 137 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam https://aws.amazon.com/kinesis/ Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", "references": "" }, { "question": ": A company plans to build a web architecture using O n-Demand EC2 instances and a database in AWS. However, due to budget constraints, the company ins tructed the Solutions Architect to choose a databas e service in which they no longer need to worry about database management tasks such as hardware or software provisioning, setup, configuration, scalin g, and backups. Which of the following services should the Solution s Architect recommend?", "options": [ "A. Amazon ElastiCache", "B. Amazon DynamoDB", "C. Amazon RDS", "D. Amazon Redshift" ], "correct": "B. Amazon DynamoDB", "explanation": "Explanation Basically, a database service in which you no longe r need to worry about database management tasks suc h as hardware or software provisioning, setup, and co nfiguration is called a fully managed database. Thi s means that AWS fully manages all of the database ma nagement tasks and the underlying host server. 
The main differentiator here is the keyword \"scaling\" in the question. In RDS, you still have to manually scale up your resources and create Read Replicas to improve scalability, while in DynamoDB, this is automatically done.

Amazon DynamoDB is the best option to use in this scenario. It is a fully managed non-relational database service: you simply create a database table, set your target utilization for Auto Scaling, and let the service handle the rest. You no longer need to worry about database management tasks such as hardware or software provisioning, setup and configuration, software patching, operating a reliable, distributed database cluster, or partitioning data over multiple instances as you scale. DynamoDB also lets you back up and restore all your tables for data archival, helping you meet your corporate and governmental regulatory requirements.

Amazon RDS is incorrect because this is just a \"managed\" service and not \"fully managed\". This means that you still have to handle the backups and other administrative tasks, such as when the automated OS patching will take place.

Amazon ElastiCache is incorrect. Although ElastiCache is fully managed, it is not a database service but an in-memory data store.

Amazon Redshift is incorrect. Although this is fully managed, it is not a database service but a data warehouse.

References: https://aws.amazon.com/dynamodb/ https://aws.amazon.com/products/databases/ Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/", "references": "" }, { "question": ": A tech company is currently using Auto Scaling for their web application. A new AMI now needs to be used for launching a fleet of EC2 instances. Which of the following changes needs to be done?", "options": [ "A. Create a new target group.", "B. Do nothing. You can start directly launching EC2 instances in the Auto Scaling group with", "C. Create a new launch configuration.", "D. Create a new target group and launch configuration." ], "correct": "C. Create a new launch configuration.", "explanation": "Explanation A launch configuration is a template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping. If you've launched an EC2 instance before, you specified the same information in order to launch the instance.

You can specify your launch configuration with multiple Auto Scaling groups. However, you can only specify one launch configuration for an Auto Scaling group at a time, and you can't modify a launch configuration after you've created it. Therefore, if you want to change the launch configuration for an Auto Scaling group, you must create a launch configuration and then update your Auto Scaling group with the new launch configuration.

For this scenario, you have to create a new launch configuration. Remember that you can't modify a launch configuration after you've created it.

Hence, the correct answer is: Create a new launch configuration.

The option that says: Do nothing.
You can start directly launching EC2 instances in the Auto Scaling group with the same launch configuration is incorrect because what you are trying to achieve is to change the AMI being used by your fleet of EC2 instances. Therefore, you need to change the launch configuration to update what your instances are using.

The options that say: Create a new target group and Create a new target group and launch configuration are both incorrect because you only want to change the AMI being used by your instances, and not the instances themselves. Target groups are primarily used in ELBs and not in Auto Scaling. The scenario didn't mention that the architecture has a load balancer. Therefore, you should be updating your launch configuration, not the target group.

References: http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", "references": "" }, { "question": ": A large financial firm needs to set up a Linux bastion host to allow access to the Amazon EC2 instances running in their VPC. For security purposes, only the clients connecting from the corporate external public IP address 175.45.116.100 should have SSH access to the host. Which is the best option that can meet the customer's requirement?", "options": [ "A. Security Group Inbound Rule: Protocol - UDP, Port Range - 22, Source 175.45.116.100/32", "B. Security Group Inbound Rule: Protocol - TCP, Port Range - 22, Source 175.45.116.100/32", "C. Network ACL Inbound Rule: Protocol - TCP, Port Range - 22, Source 175.45.116.100/0", "D. Network ACL Inbound Rule: Protocol - UDP, Port Range - 22, Source 175.45.116.100/32" ], "correct": "B. Security Group Inbound Rule: Protocol - TCP, Port Range - 22, Source 175.45.116.100/32", "explanation": "Explanation A bastion host is a special purpose computer on a network specifically designed and configured to withstand attacks. The computer generally hosts a single application, for example a proxy server, and all other services are removed or limited to reduce the threat to the computer.

When setting up a bastion host in AWS, you should only allow the individual IP of the client and not the entire network. Therefore, in the Source, the proper CIDR notation should be used. The /32 denotes one IP address and the /0 refers to the entire network.

The option that says: Security Group Inbound Rule: Protocol - UDP, Port Range - 22, Source 175.45.116.100/32 is incorrect since the SSH protocol uses TCP and port 22, and not UDP.

The option that says: Network ACL Inbound Rule: Protocol - UDP, Port Range - 22, Source 175.45.116.100/32 is incorrect since the SSH protocol uses TCP and port 22, and not UDP. Aside from that, network ACLs act as a firewall for your whole VPC subnet while security groups operate on an instance level. Since you are securing an EC2 instance, you should be using security groups.
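As an illustrative aside (not part of the original explanation), the correct rule from option B could be created with boto3 roughly like this; the security group ID is a placeholder.

import boto3

ec2 = boto3.client('ec2')
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',   # hypothetical bastion security group
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 22,
        'ToPort': 22,
        # Only the single corporate public IP, expressed as a /32.
        'IpRanges': [{'CidrIp': '175.45.116.100/32',
                      'Description': 'Corporate public IP only'}],
    }],
)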
The option that says: Network ACL Inbound Rule: Protocol - TCP, Port Range - 22, Source 175.45.116.100/0 is incorrect as it allows the entire network, instead of a single IP, to gain access to the host.", "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/" }, { "question": ": A Solutions Architect is managing a company's AWS account of approximately 300 IAM users. They have a new company policy that requires changing the associated permissions of all 100 IAM users that control the access to Amazon S3 buckets. What will the Solutions Architect do to avoid the time-consuming task of applying the policy to each user?", "options": [ "A. Create a new policy and apply it to multiple IAM users using a shell script.", "B. Create a new S3 bucket access policy with unlimited access for each IAM user.", "C. Create a new IAM role and add each user to the IAM role.", "D. Create a new IAM group and then add the users that require access to the S3 bucket." ], "correct": "D. Create a new IAM group and then add the users that require access to the S3 bucket.", "explanation": "Explanation In this scenario, the best option is to group the set of users in an IAM Group and then apply a policy with the required access to the Amazon S3 bucket. This will enable you to easily add, remove, and manage the users instead of manually adding a policy to each and every one of the 100 IAM users.

Creating a new policy and applying it to multiple IAM users using a shell script is incorrect because you need a new IAM Group for this scenario, not a policy assigned to each user via a shell script. This method can save you time but afterward, it will be difficult to manage all 100 users that are not contained in an IAM Group.

Creating a new S3 bucket access policy with unlimited access for each IAM user is incorrect because you need a new IAM Group and the method is also time-consuming.

Creating a new IAM role and adding each user to the IAM role is incorrect because you need to use an IAM Group and not an IAM role.

Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html AWS Identity Services Overview: https://www.youtube.com/watch?v=AIdUw0i8rr0 Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-management-iam/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate/", "references": "" }, { "question": ": A company needs to launch an Amazon EC2 instance with persistent block storage to host its application. The stored data must be encrypted at rest. Which of the following is the most suitable storage solution in this scenario?", "options": [ "A. Amazon EBS volume with server-side encryption (SSE) enabled.", "B. Amazon EC2 Instance Store with SSL encryption.", "C. Encrypted Amazon EBS volume using AWS KMS.", "D. Encrypted Amazon EC2 Instance Store using AWS KMS." ], "correct": "C. Encrypted Amazon EBS volume using AWS KMS.", "explanation": "Explanation Amazon Elastic Block Store (Amazon EBS) provides block-level storage volumes for use with EC2 instances.
EBS volumes behave like raw, unformatted block devices. You can mount these volumes as devices on your instances. EBS volumes that are attached to an instance are exposed as storage volumes that persist independently from the life of the instance.

Amazon EBS is the persistent block storage volume among the options given. It is mainly used as the root volume to store the operating system of an EC2 instance. To encrypt an EBS volume at rest, you can use AWS KMS customer master keys for the encryption of both the boot and data volumes of an EC2 instance.

Hence, the correct answer is: Encrypted Amazon EBS volume using AWS KMS.

The options that say: Amazon EC2 Instance Store with SSL encryption and Encrypted Amazon EC2 Instance Store using AWS KMS are both incorrect because the scenario requires persistent block storage and not temporary storage. Also, enabling SSL is not a requirement in the scenario as it is primarily used to encrypt data in transit.

The option that says: Amazon EBS volume with server-side encryption (SSE) enabled is incorrect because EBS volumes are only encrypted using AWS KMS. Server-side encryption (SSE) is actually an option for Amazon S3, but not for Amazon EC2.

References: https://aws.amazon.com/ebs/faqs/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", "references": "" }, { "question": ": A company is generating confidential data that is saved on their on-premises data center. As a backup solution, the company wants to upload their data to an Amazon S3 bucket. In compliance with its internal security mandate, the encryption of the data must be done before sending it to Amazon S3. The company must spend time managing and rotating the encryption keys as well as controlling who can access those keys. Which of the following methods can achieve this requirement? (Select TWO.)", "options": [ "A. Set up Client-Side Encryption using a client-side master key.", "B. Set up Client-Side Encryption with a customer master key stored in AWS Key Management", "C. Set up Client-Side Encryption with Amazon S3 managed encryption keys.", "D. Set up Server-Side Encryption (SSE) with EC2 key pair." ], "correct": "", "explanation": "Explanation Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption. You have the following options for protecting data at rest in Amazon S3:

Use Server-Side Encryption: You request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects.
- Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
- Use Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
- Use Server-Side Encryption with Customer-Provided Keys (SSE-C)

Use Client-Side Encryption: You can encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
- Use Client-Side Encryption with an AWS KMS-Managed Customer Master Key (CMK)
- Use Client-Side Encryption Using a Client-Side Master Key

Hence, the correct answers are:
- Set up Client-Side Encryption with a customer master key stored in AWS Key Management Service (AWS KMS).
- Set up Client-Side Encryption using a client-side master key.

The option that says: Set up Server-Side Encryption with keys stored in a separate S3 bucket is incorrect because you have to use AWS KMS to store your encryption keys or, alternatively, choose an AWS-managed CMK instead to properly implement Server-Side Encryption in Amazon S3. In addition, storing any type of encryption key in Amazon S3 is actually a security risk and is not recommended.

The option that says: Set up Client-Side Encryption with Amazon S3 managed encryption keys is incorrect because you can't have an Amazon S3 managed encryption key for client-side encryption. As its name implies, an Amazon S3 managed key is fully managed by AWS and also rotates the key automatically without any manual intervention. For this scenario, you have to set up a customer master key (CMK) in AWS KMS that you can manage, rotate, and audit or, alternatively, use a client-side master key that you manually maintain.

The option that says: Set up Server-Side Encryption (SSE) with EC2 key pair is incorrect because you can't use a key pair of your Amazon EC2 instance for encrypting your S3 bucket. You have to use a client-side master key or a customer master key stored in AWS KMS.

References: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A company deployed several EC2 instances in a private subnet. The Solutions Architect needs to ensure the security of all EC2 instances. Upon checking the existing Inbound Rules of the Network ACL, she saw this configuration (Rule 100: ALLOW all traffic from any source; Rule 101: DENY all traffic from 110.238.109.37/32; default rule *: DENY all traffic). If a computer with an IP address of 110.238.109.37 sends a request to the VPC, what will happen?", "options": [ "A. Initially, it will be allowed and then after a while, the connection will be denied.", "B. It will be denied.", "C. Initially, it will be denied and then after a while, the connection will be allowed.", "D. It will be allowed." ], "correct": "D. It will be allowed.", "explanation": "Explanation Rules are evaluated starting with the lowest numbered rule. As soon as a rule matches traffic, it's applied immediately regardless of any higher-numbered rule that may contradict it.

We have 3 rules here:
1. Rule 100 permits all traffic from any source.
2. Rule 101 denies all traffic coming from 110.238.109.37.
3. The default rule (*) denies all traffic from any source.

Rule 100 will be evaluated first. If there is a match, then it will allow the request. Otherwise, it will then go to Rule 101 and repeat the same process until it reaches the default rule. In this case, when there is a request from 110.238.109.37, it will go through Rule 100 first.
As Rule 100 says it will permit all traffic from any source, it will allow this request and will not further evaluate Rule 101 (which denies 110.238.109.37) nor the default rule.", "references": "http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/" }, { "question": ": A company currently has an Augmented Reality (AR) mobile game that has a serverless backend. It is using a DynamoDB table, which was launched using the AWS CLI, to store all the user data and information gathered from the players, and a Lambda function to pull the data from DynamoDB. The game is being used by millions of users each day to read and store data. How would you design the application to improve its overall performance and make it more scalable while keeping the costs low? (Select TWO.)", "options": [ "A. Enable DynamoDB Accelerator (DAX) and ensure that the Auto Scaling is enabled and", "B. Configure CloudFront with DynamoDB as the origin; cache frequently accessed data on the", "C. Use API Gateway in conjunction with Lambda and turn on the caching on frequently", "D. Use AWS SSO and Cognito to authenticate users and have them directly access DynamoDB using single sign-on. Manually set the provisioned read and write capacity to a higher RCU" ], "correct": "", "explanation": "Explanation The correct answers are the options that say:

- Enable DynamoDB Accelerator (DAX) and ensure that the Auto Scaling is enabled and increase the maximum provisioned read and write capacity.

- Use API Gateway in conjunction with Lambda and turn on the caching on frequently accessed data and enable DynamoDB global replication.

Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement, from milliseconds to microseconds, even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management.

Amazon API Gateway lets you create an API that acts as a \"front door\" for applications to access data, business logic, or functionality from your back-end services, such as code running on AWS Lambda. Amazon API Gateway handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Amazon API Gateway has no minimum fees or startup costs.

AWS Lambda scales your functions automatically on your behalf. Every time an event notification is received for your function, AWS Lambda quickly locates free capacity within its compute fleet and runs your code. Since your code is stateless, AWS Lambda can start as many copies of your function as needed without lengthy deployment and configuration delays.

The option that says: Configure CloudFront with DynamoDB as the origin; cache frequently accessed data on the client device using ElastiCache is incorrect. Although CloudFront delivers content faster to your users using edge locations, you still cannot integrate a DynamoDB table with CloudFront as these two are incompatible.
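As a rough, hypothetical sketch related to the correct options above: enabling auto scaling on a table's read capacity is done through Application Auto Scaling. The boto3 calls below are real, but the table name and capacity limits are assumptions.

import boto3

autoscaling = boto3.client('application-autoscaling')

# Register the table's read capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace='dynamodb',
    ResourceId='table/GameUserData',                      # hypothetical table
    ScalableDimension='dynamodb:table:ReadCapacityUnits',
    MinCapacity=100,
    MaxCapacity=4000,                                     # raised maximum capacity
)

# Target-tracking policy that keeps read utilization around 70%.
autoscaling.put_scaling_policy(
    PolicyName='GameUserDataReadScaling',
    ServiceNamespace='dynamodb',
    ResourceId='table/GameUserData',
    ScalableDimension='dynamodb:table:ReadCapacityUnits',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 70.0,
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'DynamoDBReadCapacityUtilization'},
    },
)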
The option that says: Use AWS SSO and Cognito to authenticate users and have them directly access DynamoDB using single sign-on. Manually set the provisioned read and write capacity to a higher RCU and WCU is incorrect because AWS Single Sign-On (SSO) is a cloud SSO service that just makes it easy to centrally manage SSO access to multiple AWS accounts and business applications. This will not be of much help on the scalability and performance of the application. It is costly to manually set the provisioned read and write capacity to a higher RCU and WCU because this capacity will run round the clock and will still be the same even if the incoming traffic is stable and there is no need to scale.

The option that says: Since Auto Scaling is enabled by default, the provisioned read and write capacity will adjust automatically. Also enable DynamoDB Accelerator (DAX) to improve the performance from milliseconds to microseconds is incorrect because, by default, Auto Scaling is not enabled in a DynamoDB table which is created using the AWS CLI.

References: https://aws.amazon.com/lambda/faqs/ https://aws.amazon.com/api-gateway/faqs/ https://aws.amazon.com/dynamodb/dax/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate/", "references": "" }, { "question": ": A large financial firm in the country has an AWS environment that contains several Reserved EC2 instances hosting a web application that has been decommissioned last week. To save costs, you need to stop incurring charges for the Reserved instances as soon as possible. What cost-effective steps will you take in this circumstance? (Select TWO.)", "options": [ "A. Contact AWS to cancel your AWS subscription.", "B. Go to the Amazon.com online shopping website and sell the Reserved instances.", "C. Go to the AWS Reserved Instance Marketplace and sell the Reserved instances.", "D. Terminate the Reserved instances as soon as possible to avoid getting billed at the on-" ], "correct": "", "explanation": "Explanation The Reserved Instance Marketplace is a platform that supports the sale of third-party and AWS customers' unused Standard Reserved Instances, which vary in terms of lengths and pricing options. For example, you may want to sell Reserved Instances after moving instances to a new AWS region, changing to a new instance type, ending projects before the term expiration, when your business needs change, or if you have unneeded capacity.

Hence, the correct answers are:

- Go to the AWS Reserved Instance Marketplace and sell the Reserved instances.

- Terminate the Reserved instances as soon as possible to avoid getting billed at the on-demand price when they expire.

Stopping the Reserved instances as soon as possible is incorrect because a stopped instance can still be restarted. Take note that when a Reserved Instance expires, any instances that were covered by the Reserved Instance are billed at the on-demand price, which costs significantly more. Since the application is already decommissioned, there is no point in keeping the unused instances. It is also possible that there are associated Elastic IP addresses, which will incur charges if they are associated with stopped instances.

Contacting AWS to cancel your AWS subscription is incorrect as you don't need to close down your AWS account.
Going to the Amazon.com online shopping website and selling the Reserved instances is incorrect as you have to use the AWS Reserved Instance Marketplace to sell your instances.

References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-market-general.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-lifecycle.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/", "references": "" }, { "question": ": A company generates large financial datasets with millions of rows. The Solutions Architect needs to store all the data in a columnar fashion to reduce the number of disk I/O requests and reduce the amount of data needed to load from the disk. The bank has an existing third-party business intelligence application that will connect to the storage service and then generate daily and monthly financial reports for its clients around the globe. In this scenario, which is the best storage service to use to meet the requirement?", "options": [ "A. Amazon Redshift", "B. Amazon RDS", "C. Amazon DynamoDB", "D. Amazon Aurora" ], "correct": "A. Amazon Redshift", "explanation": "Explanation Amazon Redshift is a fast, scalable data warehouse that makes it simple and cost-effective to analyze all your data across your data warehouse and data lake. Redshift delivers ten times faster performance than other data warehouses by using machine learning, massively parallel query execution, and columnar storage on high-performance disk.

In this scenario, there is a requirement to have a storage service that will be used by a business intelligence application and where the data must be stored in a columnar fashion. Business intelligence reporting systems are a type of Online Analytical Processing (OLAP), which Redshift is known to support. In addition, Redshift also provides columnar storage, unlike the other options.

Hence, the correct answer in this scenario is Amazon Redshift.

References: https://docs.aws.amazon.com/redshift/latest/dg/c_columnar_storage_disk_mem_mgmnt.html https://aws.amazon.com/redshift/ Amazon Redshift Overview: https://youtu.be/jlLERNzhHOg Check out this Amazon Redshift Cheat Sheet: https://tutorialsdojo.com/amazon-redshift/ Here is a case study on finding the most suitable analytical tool - Kinesis vs EMR vs Athena vs Redshift: https://youtu.be/wEOm6aiN4ww", "references": "" }, { "question": ": A Solutions Architect needs to set up a bastion host in the cheapest, most secure way. The Architect should be the only person that can access it via SSH. Which of the following steps would satisfy this requirement?", "options": [ "A. Set up a large EC2 instance and a security group that only allows access on port 22", "B. Set up a large EC2 instance and a security group that only allows access on port 22 via your", "C. Set up a small EC2 instance and a security group that only allows access on port 22 via your", "D. Set up a small EC2 instance and a security group that only allows access on port 22" ], "correct": "C. Set up a small EC2 instance and a security group that only allows access on port 22 via your", "explanation": "Explanation A bastion host is a server whose purpose is to provide access to a private network from an external network, such as the Internet. Because of its exposure to potential attack, a bastion host must minimize the chances of penetration.
To create a bastion host, you can create a new EC2 instance which should only have a security group from a particular IP address for maximum security. Since the cost is also considered in the question, you should choose a small instance for your host. By default, a t2.micro instance is used by AWS, but you can change these settings during deployment.

Setting up a large EC2 instance and a security group which only allows access on port 22 via your IP address is incorrect because you don't need to provision a large EC2 instance to run a single bastion host. At the same time, you are looking for the cheapest solution possible.

The options that say: Set up a large EC2 instance and a security group which only allows access on port 22 and Set up a small EC2 instance and a security group which only allows access on port 22 are both incorrect because you did not set your specific IP address to the security group rules, which possibly means that you publicly allow traffic from all sources in your security group. This is wrong as you should only be the one to have access to the bastion host.

References: https://docs.aws.amazon.com/quickstart/latest/linux-bastion/architecture.html https://aws.amazon.com/blogs/security/how-to-record-ssh-sessions-established-through-a-bastion-host/ Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/", "references": "" }, { "question": ": An online stocks trading application that stores financial data in an S3 bucket has a lifecycle policy that moves older data to Glacier every month. There is a strict compliance requirement where a surprise audit can happen at any time and you should be able to retrieve the required data in under 15 minutes under all circumstances. Your manager instructed you to ensure that retrieval capacity is available when you need it and should handle up to 150 MB/s of retrieval throughput. Which of the following should you do to meet the above requirement? (Select TWO.)", "options": [ "A. Specify a range, or portion, of the financial data archive to retrieve.", "B. Use Bulk Retrieval to access the financial data.", "C. Purchase provisioned retrieval capacity.", "D. Retrieve the data using Amazon Glacier Select." ], "correct": "", "explanation": "Explanation Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required. For all but the largest archives (250 MB+), data accessed using Expedited retrievals is typically made available within 15 minutes. Provisioned Capacity ensures that retrieval capacity for Expedited retrievals is available when you need it.

To make an Expedited, Standard, or Bulk retrieval, set the Tier parameter in the Initiate Job (POST jobs) REST API request to the option you want, or the equivalent in the AWS CLI or AWS SDKs. If you have purchased provisioned capacity, then all expedited retrievals are automatically served through your provisioned capacity.

Provisioned capacity ensures that your retrieval capacity for expedited retrievals is available when you need it. Each unit of capacity provides that at least three expedited retrievals can be performed every five minutes and provides up to 150 MB/s of retrieval throughput.
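For illustration only (a hedged sketch, not part of the original explanation), setting that Tier with boto3 might look like this; the vault name and archive ID are placeholders.

import boto3

glacier = boto3.client('glacier')
glacier.initiate_job(
    accountId='-',                        # '-' means the caller's own account
    vaultName='financial-archives',       # hypothetical vault name
    jobParameters={
        'Type': 'archive-retrieval',
        'ArchiveId': 'EXAMPLE_ARCHIVE_ID',
        'Tier': 'Expedited',              # served from provisioned capacity if purchased
    },
)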
You should purchase provisioned retrieval capacity if your workload requires highly reliable and predictable access to a subset of your data in minutes. Without provisioned capacity, Expedited retrievals are accepted, except for rare situations of unusually high demand. However, if you require access to Expedited retrievals under all circumstances, you must purchase provisioned retrieval capacity.

Retrieving the data using Amazon Glacier Select is incorrect because this is not an archive retrieval option and is primarily used to perform filtering operations using simple Structured Query Language (SQL) statements directly on your data archive in Glacier.

Using Bulk Retrieval to access the financial data is incorrect because bulk retrievals typically complete within 5-12 hours, hence this does not satisfy the requirement of retrieving the data within 15 minutes. The provisioned capacity option is also not compatible with Bulk retrievals.

Specifying a range, or portion, of the financial data archive to retrieve is incorrect because using ranged archive retrievals is not enough to meet the requirement of retrieving the whole archive in the given timeframe. In addition, it does not provide additional retrieval capacity, which is what the provisioned capacity option can offer.

References: https://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive-two-steps.html https://docs.aws.amazon.com/amazonglacier/latest/dev/glacier-select.html Check out this Amazon S3 Glacier Cheat Sheet: https://tutorialsdojo.com/amazon-glacier/", "references": "" }, { "question": ": A company has a web application that is relying entirely on slower disk-based databases, causing it to perform slowly. To improve its performance, the Solutions Architect integrated an in-memory data store to the web application using ElastiCache. How does Amazon ElastiCache improve database performance?", "options": [ "A. By caching database query results.", "B. It reduces the load on your database by routing read queries from your applications to the", "C. It securely delivers data to customers globally with low latency and high transfer speeds.", "D. It provides an in-memory cache that delivers up to 10x performance improvement from" ], "correct": "A. By caching database query results.", "explanation": "Explanation ElastiCache improves the performance of your database through caching query results. The primary purpose of an in-memory key-value store is to provide ultra-fast (submillisecond latency) and inexpensive access to copies of data. Most data stores have areas of data that are frequently accessed but seldom updated. Additionally, querying a database is always slower and more expensive than locating a key in a key-value pair cache. Some database queries are especially expensive to perform, for example, queries that involve joins across multiple tables or queries with intensive calculations. By caching such query results, you pay the price of the query once and then are able to quickly retrieve the data multiple times without having to re-execute the query.
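As an illustrative aside (assumptions: a Redis-based ElastiCache cluster, the redis Python client, and made-up endpoint, key, and query names), the cache-aside pattern described above might look roughly like this:

import json
import redis

# Connect to the ElastiCache for Redis endpoint (placeholder hostname).
cache = redis.Redis(host='my-cluster.abc123.0001.use1.cache.amazonaws.com', port=6379)

def get_product(product_id, db):
    key = f'product:{product_id}'
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit: skip the expensive query
    row = db.query_product(product_id)      # hypothetical slow database call
    cache.setex(key, 300, json.dumps(row))  # store the result for 5 minutes
    return row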
The option that says: It securely delivers data to customers globally with low latency and high transfer speeds is incorrect because this option describes what CloudFront does and not ElastiCache.

The option that says: It provides an in-memory cache that delivers up to 10x performance improvement from milliseconds to microseconds or even at millions of requests per second is incorrect because this option describes what Amazon DynamoDB Accelerator (DAX) does and not ElastiCache. Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB. Amazon ElastiCache cannot provide a performance improvement from milliseconds to microseconds, let alone millions of requests per second like DAX can.

The option that says: It reduces the load on your database by routing read queries from your applications to the Read Replica is incorrect because this option describes what an RDS Read Replica does and not ElastiCache. Amazon RDS Read Replicas enable you to create one or more read-only copies of your database instance within the same AWS Region or in a different AWS Region.

References: https://aws.amazon.com/elasticache/ https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/elasticache-use-cases.html Check out this Amazon Elasticache Cheat Sheet: https://tutorialsdojo.com/amazon-elasticache/", "references": "" }, { "question": ": You are automating the creation of EC2 instances in your VPC. Hence, you wrote a python script to trigger the Amazon EC2 API to request 50 EC2 instances in a single Availability Zone. However, you noticed that after 20 successful requests, subsequent requests failed. What could be a reason for this issue and how would you resolve it?", "options": [ "A. By default, AWS allows you to provision a maximum of 20 instances per region. Select a different", "B. There was an issue with the Amazon EC2 API. Just resend the requests and these will be", "C. By default, AWS allows you to provision a maximum of 20 instances per Availability Zone. Select", "D. There is a vCPU-based On-Demand Instance limit per region which is why subsequent requests failed. Just submit the limit increase form to AWS and retry the failed requests once approved." ], "correct": "D. There is a vCPU-based On-Demand Instance limit per region which is why subsequent requests failed. Just submit the limit increase form to AWS and retry the failed requests once approved.", "explanation": "Explanation You are limited to running On-Demand Instances per your vCPU-based On-Demand Instance limit, purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per region. New AWS accounts may start with limits that are lower than the limits described here.

If you need more instances, complete the Amazon EC2 limit increase request form with your use case, and your limit increase will be considered. Limit increases are tied to the region they were requested for.

Hence, the correct answer is: There is a vCPU-based On-Demand Instance limit per region which is why subsequent requests failed. Just submit the limit increase form to AWS and retry the failed requests once approved.

The option that says: There was an issue with the Amazon EC2 API. Just resend the requests and these will be provisioned successfully is incorrect because you are limited to running On-Demand Instances per your vCPU-based On-Demand Instance limit. There is also a limit of purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per region, hence there is no problem with the EC2 API.
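As a hedged aside (not part of the original explanation), you can inspect the region's EC2 On-Demand quotas with the Service Quotas API before filing the increase request; the name filter below is an assumption.

import boto3

quotas = boto3.client('service-quotas')
paginator = quotas.get_paginator('list_service_quotas')

# Print the current values of the EC2 quotas that mention On-Demand instances.
for page in paginator.paginate(ServiceCode='ec2'):
    for quota in page['Quotas']:
        if 'On-Demand' in quota['QuotaName']:
            print(quota['QuotaName'], quota['Value'])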
The option that says: By default, AWS allows you to provision a maximum of 20 instances per region. Select a different region and retry the failed request is incorrect. There is no need to select a different region since this limit can be increased after submitting a request form to AWS.

The option that says: By default, AWS allows you to provision a maximum of 20 instances per Availability Zone. Select a different Availability Zone and retry the failed request is incorrect because the vCPU-based On-Demand Instance limit is set per region and not per Availability Zone. This can be increased after submitting a request form to AWS.

References: https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ec2 https://aws.amazon.com/ec2/faqs/#How_many_instances_can_I_run_in_Amazon_EC2 Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/", "references": "" }, { "question": ": A company has a decoupled application in AWS using EC2, Auto Scaling group, S3, and SQS. The Solutions Architect designed the architecture in such a way that the EC2 instances will consume the message from the SQS queue and will automatically scale up or down based on the number of messages in the queue. In this scenario, which of the following statements is false about SQS?", "options": [ "A. Amazon SQS can help you build a distributed application with decoupled components.", "B. FIFO queues provide exactly-once processing.", "C. Standard queues preserve the order of messages.", "D. Standard queues provide at-least-once delivery, which means that each message is delivered" ], "correct": "C. Standard queues preserve the order of messages.", "explanation": "Explanation All of the answers are correct except for the option that says: Standard queues preserve the order of messages. Only FIFO queues can preserve the order of messages, not standard queues.", "references": "https://aws.amazon.com/sqs/faqs/ Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate-saa-c02/" }, { "question": ": A production MySQL database hosted on Amazon RDS is running out of disk storage. The management has consulted its solutions architect to increase the disk space without impacting the database performance. How can the solutions architect satisfy the requirement with the LEAST operational overhead?", "options": [ "A. Change the default_storage_engine of the DB instance's parameter group to MyISAM.", "B. Modify the DB instance storage type to Provisioned IOPS.", "C. Modify the DB instance settings and enable storage autoscaling.", "D. Increase the allocated storage for the DB instance." ], "correct": "C. Modify the DB instance settings and enable storage autoscaling.", "explanation": "Explanation RDS Storage Auto Scaling automatically scales storage capacity in response to growing database workloads, with zero downtime. Under-provisioning could result in application downtime, and over-provisioning could result in underutilized resources and higher costs.
With RDS Storage Auto Scaling, you simply set your desired maximum storage limit, and Auto Scaling takes care of the rest.

RDS Storage Auto Scaling continuously monitors actual storage consumption and scales capacity up automatically when actual utilization approaches provisioned storage capacity. Auto Scaling works with new and existing database instances. You can enable Auto Scaling with just a few clicks in the AWS Management Console. There is no additional cost for RDS Storage Auto Scaling. You pay only for the RDS resources needed to run your applications.

Hence, the correct answer is: Modify the DB instance settings and enable storage autoscaling.

The option that says: Increase the allocated storage for the DB instance is incorrect. Although this will solve the problem of low disk space, increasing the allocated storage might cause performance degradation during the change.

The option that says: Change the default_storage_engine of the DB instance's parameter group to MyISAM is incorrect. This is just a storage engine for MySQL. It won't increase the disk space in any way.

The option that says: Modify the DB instance storage type to Provisioned IOPS is incorrect. This may improve disk performance but it won't solve the problem of low database storage.

References: https://aws.amazon.com/about-aws/whats-new/2019/06/rds-storage-auto-scaling/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/", "references": "" }, { "question": ": A company installed sensors to track the number of people who visit the park. The data is sent every day to an Amazon Kinesis stream with default settings for processing, in which a consumer is configured to process the data every other day. You noticed that the S3 bucket is not receiving all of the data that is being sent to the Kinesis stream. You checked the sensors if they are properly sending the data to Amazon Kinesis and verified that the data is indeed sent every day. What could be the reason for this?", "options": [ "A. By default, Amazon S3 stores the data for 1 day and moves it to Amazon Glacier.", "B. There is a problem in the sensors. They probably had some intermittent connection hence,", "C. By default, the data records are only accessible for 24 hours from the time they are added to", "D. Your AWS account was hacked and someone has deleted some data in your Kinesis stream." ], "correct": "C. By default, the data records are only accessible for 24 hours from the time they are added to", "explanation": "Explanation Kinesis Data Streams supports changes to the data record retention period of your stream. A Kinesis data stream is an ordered sequence of data records meant to be written to and read from in real-time. Data records are therefore stored in shards in your stream temporarily. The time period from when a record is added to when it is no longer accessible is called the retention period. A Kinesis data stream stores records from 24 hours by default to a maximum of 8760 hours (365 days). This is the reason why there is missing data in your S3 bucket.
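For reference, one of the fixes discussed next, extending the retention period, is a single API call; here is a minimal boto3 sketch with a made-up stream name and retention value.

import boto3

kinesis = boto3.client('kinesis')
kinesis.increase_stream_retention_period(
    StreamName='park-sensor-stream',   # hypothetical stream name
    RetentionPeriodHours=72,           # keep records for 3 days instead of 24 hours
)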
To fix this, you can either configure the consumer to process the data every day, within the default 24-hour retention window, instead of every other day or, alternatively, you can increase the retention period of your Kinesis data stream.

The option that says: There is a problem in the sensors. They probably had some intermittent connection hence, the data is not sent to the stream is incorrect. You already verified that the sensors are working as they should be, hence this is not the root cause of the issue.

The option that says: By default, Amazon S3 stores the data for 1 day and moves it to Amazon Glacier is incorrect because, by default, Amazon S3 does not store the data for 1 day only and move it to Amazon Glacier.

The option that says: Your AWS account was hacked and someone has deleted some data in your Kinesis stream is incorrect. Although this could be a possibility, you should verify first if there are other more probable reasons for the missing data in your S3 bucket. Be sure to follow and apply security best practices as well to prevent being hacked by someone.

By default, the data records are only accessible for 24 hours from the time they are added to a Kinesis stream, which depicts the root cause of this issue.", "references": "http://docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/" }, { "question": ": An auto scaling group of Linux EC2 instances is created with basic monitoring enabled in CloudWatch. You noticed that your application is slow so you asked one of your engineers to check all of your EC2 instances. After checking your instances, you noticed that the auto scaling group is not launching more instances as it should be, even though the servers already have high memory usage. Which of the following options should the Architect implement to solve this issue?", "options": [ "A. Enable detailed monitoring on the instances.", "B. Install AWS SDK in the EC2 instances. Create a script that will trigger the Auto Scaling", "C. Modify the scaling policy to increase the threshold to scale out the number of instances.", "D. Install the CloudWatch agent to the EC2 instances which will trigger your Auto Scaling" ], "correct": "D. Install the CloudWatch agent to the EC2 instances which will trigger your Auto Scaling", "explanation": "Explanation The Amazon CloudWatch agent enables you to collect both system metrics and log files from Amazon EC2 instances and on-premises servers. The agent supports both Windows Server and Linux and allows you to select the metrics to be collected, including sub-resource metrics such as per-CPU core.

The premise of the scenario is that the EC2 servers have high memory usage, but since this specific metric is not tracked by the Auto Scaling group by default, the scaling out activity is not being triggered. Remember that by default, CloudWatch doesn't monitor memory usage but only the CPU utilization, Network utilization, Disk performance, and Disk Reads/Writes.

This is the reason why you have to install a CloudWatch agent in your EC2 instances to collect and monitor the custom metric (memory usage), which will be used by your Auto Scaling Group as a trigger for scaling activities.
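As a rough illustration (the CloudWatch agent normally publishes the memory metric for you from its configuration file, so treat the namespace, metric, dimension, and group names below as assumptions), publishing a custom memory metric and alarming on it with boto3 might look like this:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Publish a custom memory-utilization data point for the Auto Scaling group.
cloudwatch.put_metric_data(
    Namespace='Custom/EC2',
    MetricData=[{
        'MetricName': 'MemoryUtilization',
        'Dimensions': [{'Name': 'AutoScalingGroupName', 'Value': 'web-asg'}],
        'Unit': 'Percent',
        'Value': 87.5,
    }],
)

# Alarm on the custom metric; a scale-out policy can be attached to this alarm.
cloudwatch.put_metric_alarm(
    AlarmName='web-asg-high-memory',
    Namespace='Custom/EC2',
    MetricName='MemoryUtilization',
    Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': 'web-asg'}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator='GreaterThanThreshold',
)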
Hence, the correct answer is: Install the CloudWatch agent to the EC2 instances which will trigger your Auto Scaling group to scale out. The option that says: Install AWS SDK in the EC2 instances. Create a script that will trigger the Auto Scaling event if there is a high memory usage is incorrect because the AWS SDK is a set of programming tools that allow you to create applications that run using Amazon cloud services. You would have to program the alert, which is not the best strategy for this scenario. The option that says: Enable detailed monitoring on the instances is incorrect because detailed monitoring does not provide metrics for memory usage. CloudWatch does not monitor memory usage in its default set of EC2 metrics, and detailed monitoring just provides a higher frequency of metrics (1-minute frequency). The option that says: Modify the scaling policy to increase the threshold to scale out the number of instances is incorrect because you are already maxing out your usage, which should in effect cause an auto-scaling event.
References: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_ec2.html Check out these Amazon EC2 and CloudWatch Cheat Sheets: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/ https://tutorialsdojo.com/amazon-cloudwatch/", "references": "" },
{ "question": ": A technical lead of the Cloud Infrastructure team was consulted by a software developer regarding the required AWS resources of the web application that he is building. The developer knows that an Instance Store only provides ephemeral storage where the data is automatically deleted when the instance is terminated. To ensure that the data of the web application persists, the app should be launched in an EC2 instance that has a durable, block-level storage volume attached. The developer knows that they need to use an EBS volume, but they are not sure what type they need to use. In this scenario, which of the following is true about Amazon EBS volume types and their respective usage? (Select TWO.)", "options": [ "A. Single root I/O virtualization (SR-IOV) volumes are suitable for a broad range of workloads,", "B. Provisioned IOPS volumes offer storage with consistent and low-latency performance, and", "D. General Purpose SSD (gp3) volumes with multi-attach enabled offer consistent and low-" ], "correct": "", "explanation": "Explanation Amazon EBS provides three volume types to best meet the needs of your workloads: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic. General Purpose (SSD) is the new, SSD-backed, general purpose EBS volume type that is recommended as the default choice for customers. General Purpose (SSD) volumes are suitable for a broad range of workloads, including small to medium sized databases, development and test environments, and boot volumes. Provisioned IOPS (SSD) volumes offer storage with consistent and low-latency performance and are designed for I/O intensive applications such as large relational or NoSQL databases. Magnetic volumes provide the lowest cost per gigabyte of all EBS volume types.
Magnetic volumes are ideal for workloads where data are accessed infrequently, and applications where the lowest storage cost is important. Take note that this is a Previous Generation Volume. The latest low-cost magnetic storage types are Cold HDD (sc1) and Throughput Optimized HDD (st1) volumes. Hence, the correct answers are: - Provisioned IOPS volumes offer storage with consistent and low-latency performance, and are designed for I/O intensive applications such as large relational or NoSQL databases. - Magnetic volumes provide the lowest cost per gigabyte of all EBS volume types and are ideal for workloads where data is accessed infrequently, and applications where the lowest storage cost is important. The option that says: Spot volumes provide the lowest cost per gigabyte of all EBS volume types and are ideal for workloads where data is accessed infrequently, and applications where the lowest storage cost is important is incorrect because there is no EBS type called a \"Spot volume\"; however, there is an Instance purchasing option for Spot Instances. The option that says: General Purpose SSD (gp3) volumes with multi-attach enabled offer consistent and low-latency performance, and are designed for applications requiring multi-az resiliency is incorrect because the multi-attach feature can only be enabled on EBS Provisioned IOPS io2 or io1 volumes. In addition, multi-attach won't offer multi-az resiliency because this feature only allows an EBS volume to be attached to multiple instances within an availability zone. The option that says: Single root I/O virtualization (SR-IOV) volumes are suitable for a broad range of workloads, including small to medium-sized databases, development and test environments, and boot volumes is incorrect because SR-IOV is related to Enhanced Networking on Linux and not to EBS.
References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", "references": "" },
{ "question": ": A media company needs to configure an Amazon S3 bucket to serve static assets for the public-facing web application. Which methods ensure that all of the objects uploaded to the S3 bucket can be read publicly all over the Internet? (Select TWO.)", "options": [ "A. Create an IAM role to set the objects inside the S3 bucket to public read.", "B. Grant public read access to the object when uploading it using the S3 Console.", "C. Configure the cross-origin resource sharing (CORS) of the S3 bucket to allow objects to be", "D. Do nothing. Amazon S3 objects are already public by default." ], "correct": "", "explanation": "Explanation By default, all Amazon S3 resources such as buckets, objects, and related subresources are private, which means that only the AWS account holder (resource owner) that created it has access to the resource. The resource owner can optionally grant access permissions to others by writing an access policy. In S3, you also set the permissions of the object during upload to make it public. Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies. Access policies you attach to your resources (buckets and objects) are referred to as resource-based policies.
For example, bucket policies and access control lists (ACLs) are resource-based policies. You can also attach access policies to users in your account. These are called user policies. You may choose to use resource-based policies, user policies, or some combination of these to manage permissions to your Amazon S3 resources. You can also manage the public permissions of your objects during upload. Under Manage public permissions, you can grant read access to your objects to the general public (everyone in the world) for all of the files that you're uploading. Granting public read access is applicable to a small subset of use cases, such as when buckets are used for websites. Hence, the correct answers are: - Grant public read access to the object when uploading it using the S3 Console. - Configure the S3 bucket policy to set all objects to public read. The option that says: Configure the cross-origin resource sharing (CORS) of the S3 bucket to allow objects to be publicly accessible from all domains is incorrect. CORS will only allow objects from one domain (travel.cebu.com) to be loaded and accessible to a different domain (palawan.com). It won't necessarily expose objects for public access all over the internet. The option that says: Creating an IAM role to set the objects inside the S3 bucket to public read is incorrect. You can create an IAM role and attach it to an EC2 instance in order to retrieve objects from the S3 bucket or add new ones. An IAM Role, in itself, cannot directly make the S3 objects public or change the permissions of each individual object. The option that says: Do nothing. Amazon S3 objects are already public by default is incorrect because, by default, all the S3 resources are private, so only the AWS account that created the resources can access them.
References: http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" },
{ "question": ": A Fortune 500 company which has numerous offices and customers around the globe has hired you as their Principal Architect. You have staff and customers that upload gigabytes to terabytes of data to a centralized S3 bucket from the regional data centers, across continents, all over the world on a regular basis. At the end of the financial year, there are thousands of files being uploaded to the central S3 bucket, which is in the ap-southeast-2 (Sydney) region, and a lot of employees are starting to complain about the slow upload times. You were instructed by the CTO to resolve this issue as soon as possible to avoid any delays in processing their global end of financial year (EOFY) reports. Which feature in Amazon S3 enables fast, easy, and secure transfer of your files over long distances between your client and your Amazon S3 bucket?", "options": [ "A. Cross-Region Replication", "B. Multipart Upload", "C. AWS Global Accelerator", "D. Transfer Acceleration" ], "correct": "D. Transfer Acceleration", "explanation": "Explanation Amazon S3 Transfer Acceleration enables fast, easy, and secure transfer of files over long distances between your client and your Amazon S3 bucket.
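Enabling the feature is a one-call bucket configuration change, after which clients upload through the bucket's s3-accelerate endpoint. A hedged boto3 sketch; the bucket name and object key are assumptions:

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Turn on Transfer Acceleration for the bucket (bucket name is hypothetical).
s3.put_bucket_accelerate_configuration(
    Bucket="eofy-central-reports",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload through the accelerate endpoint so data enters the
# nearest edge location and rides the AWS backbone to the bucket's Region.
accelerated = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accelerated.upload_file("report.csv", "eofy-central-reports", "reports/report.csv")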
Transfer Acceleration leverages Amazon CloudFront's globally distributed AWS Edge Locations. As data arrives at an AWS Edge Location, data is routed to your Amazon S3 bucket over an optimized network path. Amazon S3 Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects. Customers who have either web or mobile applications with widespread users or applications hosted far away from their S3 bucket can experience long and variable upload and download speeds over the Internet. S3 Transfer Acceleration (S3TA) reduces the variability in Internet routing, congestion and speeds that can affect transfers, and logically shortens the distance to S3 for remote applications. S3TA improves transfer performance by routing traffic through Amazon CloudFront's globally distributed Edge Locations and over AWS backbone networks, and by using network protocol optimizations. Hence, Transfer Acceleration is the correct answer. AWS Global Accelerator is incorrect because this service is primarily used to optimize the path from your users to your applications, which improves the performance of your TCP and UDP traffic. Using Amazon S3 Transfer Acceleration is a more suitable service for this scenario. Cross-Region Replication is incorrect because this simply enables you to automatically copy S3 objects from one bucket to another bucket that is placed in a different AWS Region or within the same Region. Multipart Upload is incorrect because this feature simply allows you to upload a single object as a set of parts. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.
References: https://aws.amazon.com/s3/faqs/ https://aws.amazon.com/s3/transfer-acceleration/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" },
{ "question": ": A company has a web-based order processing system that is currently using a standard queue in Amazon SQS. The IT Manager noticed that there are a lot of cases where an order was processed twice. This issue has caused a lot of trouble in processing and made the customers very unhappy. The manager has asked you to ensure that this issue will not recur. What can you do to prevent this from happening again in the future? (Select TWO.)", "options": [ "A. Alter the visibility timeout of SQS.", "B. Alter the retention period in Amazon SQS.", "C. Replace Amazon SQS and instead, use Amazon Simple Workflow service.", "D. Use an Amazon SQS FIFO Queue instead." ], "correct": "", "explanation": "Explanation
Amazon SQS FIFO (First-In-First-Out) Queues have all the capabilities of the standard queue with additional capabilities designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates can't be tolerated, for example: - Ensure that user-entered commands are executed in the right order. - Display the correct product price by sending price modifications in the right order. - Prevent a student from enrolling in a course before registering for an account. Amazon SWF provides useful guarantees around task assignments. It ensures that a task is never duplicated and is assigned only once. Thus, even though you may have multiple workers for a particular activity type (or a number of instances of a decider), Amazon SWF will give a specific task to only one worker (or one decider instance). Additionally, Amazon SWF keeps at most one decision task outstanding at a time for a workflow execution. Thus, you can run multiple decider instances without worrying about two instances operating on the same execution simultaneously. These facilities enable you to coordinate your workflow without worrying about duplicate, lost, or conflicting tasks. The main issue in this scenario is that the order management system produces duplicate orders at times. Since the company is using SQS, there is a possibility that a message can have a duplicate in case an EC2 instance failed to delete the already processed message. To prevent this issue from happening, you have to use Amazon Simple Workflow service instead of SQS. Therefore, the correct answers are: - Replace Amazon SQS and instead, use Amazon Simple Workflow service. - Use an Amazon SQS FIFO Queue instead. Altering the retention period in Amazon SQS is incorrect because the retention period simply specifies how long Amazon SQS keeps messages in a queue before deleting them. Altering the visibility timeout of SQS is incorrect because for standard queues, the visibility timeout isn't a guarantee against receiving a message twice. To avoid duplicate SQS messages, it is better to design your applications to be idempotent (they should not be affected adversely when processing the same message more than once). Changing the message size in SQS is incorrect because this is not related at all to this scenario.
References: https://aws.amazon.com/swf/faqs/ https://aws.amazon.com/swf/ https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html Check out this Amazon SWF Cheat Sheet: https://tutorialsdojo.com/amazon-simple-workflow-amazon-swf/ Amazon Simple Workflow (SWF) vs AWS Step Functions vs Amazon SQS: https://tutorialsdojo.com/amazon-simple-workflow-swf-vs-aws-step-functions-vs-amazon-sqs/", "references": "" },
{ "question": ": A startup plans to develop a multiplayer game that uses UDP as the protocol for communication between clients and game servers. The data of the users will be stored in a key-value store. As the Solutions Architect, you need to implement a solution that will distribute the traffic across a number of servers.
Which of the following could help you achieve this requirement?", "options": [ "A. Distribute the traffic using Network Load Balancer and store the data in Amazon", "B. Distribute the traffic using Application Load Balancer and store the data in Amazon RDS.", "C. Distribute the traffic using Network Load Balancer and store the data in Amazon Aurora.", "D. Distribute the traffic using Application Load Balancer and store the data in Amazon" ], "correct": "A. Distribute the traffic using Network Load Balancer and store the data in Amazon", "explanation": "Explanation A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. For UDP traffic, the load balancer selects a target using a flow hash algorithm based on the protocol, source IP address, source port, destination IP address, and destination port. A UDP flow has the same source and destination, so it is consistently routed to a single target throughout its lifetime. Different UDP flows have different source IP addresses and ports, so they can be routed to different targets. In this scenario, a startup plans to create a multiplayer game that uses UDP as the protocol for communications. Since UDP is Layer 4 traffic, we can narrow the choices down to those that use a Network Load Balancer. The data of the users will be stored in a key-value store. This means that we should select Amazon DynamoDB since it supports both document and key-value store models. Hence, the correct answer is: Distribute the traffic using Network Load Balancer and store the data in Amazon DynamoDB. The option that says: Distribute the traffic using Application Load Balancer and store the data in Amazon DynamoDB is incorrect because UDP is not supported by an Application Load Balancer. Remember that UDP is Layer 4 traffic. Therefore, you should use a Network Load Balancer. The option that says: Distribute the traffic using Network Load Balancer and store the data in Amazon Aurora is incorrect because Amazon Aurora is a relational database service. Instead of Aurora, you should use Amazon DynamoDB. The option that says: Distribute the traffic using Application Load Balancer and store the data in Amazon RDS is incorrect because an Application Load Balancer only supports application traffic (Layer 7). Also, Amazon RDS is not suitable as a key-value store. You should use DynamoDB since it supports both document and key-value store models.
References: https://aws.amazon.com/blogs/aws/new-udp-load-balancing-for-network-load-balancer/ https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html Check out this AWS Elastic Load Balancing Cheat Sheet: https://tutorialsdojo.com/aws-elastic-load-balancing-elb/", "references": "" },
{ "question": ": An online trading platform with thousands of clients across the globe is hosted in AWS. To reduce latency, you have to direct user traffic to the nearest application endpoint to the client. The traffic should be routed to the closest edge location via an Anycast static IP address. AWS Shield should also be integrated into the solution for DDoS protection.
Which of the following is the MOST suitable service that the Solutions Architect should use to satisfy the above requirements?", "options": [ "A. AWS WAF", "B. Amazon CloudFront", "C. AWS PrivateLink", "D. AWS Global Accelerator" ], "correct": "D. AWS Global Accelerator", "explanation": "Explanation AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances. AWS Global Accelerator uses the AWS global network to optimize the path from your users to your applications, improving the performance of your TCP and UDP traffic. AWS Global Accelerator continually monitors the health of your application endpoints and will detect an unhealthy endpoint and redirect traffic to healthy endpoints in less than 1 minute. Many applications, such as gaming, media, mobile applications, and financial applications, need very low latency for a great user experience. To improve the user experience, AWS Global Accelerator directs user traffic to the nearest application endpoint to the client, thus reducing internet latency and jitter. It routes the traffic to the closest edge location via Anycast, then routes it to the closest regional endpoint over the AWS global network. AWS Global Accelerator quickly reacts to changes in network performance to improve your users' application performance. AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection. Hence, the correct answer is AWS Global Accelerator. Amazon CloudFront is incorrect because although this service uses edge locations, it doesn't have the capability to route the traffic to the closest edge location via an Anycast static IP address. AWS WAF is incorrect because this service is just a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS PrivateLink is incorrect because this service simply provides private connectivity between VPCs, AWS services, and on-premises applications, securely on the Amazon network. It doesn't route traffic to the closest edge location via an Anycast static IP address.
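To make the building blocks concrete, a hedged boto3 sketch of creating an accelerator, adding a TCP listener, and attaching an existing Network Load Balancer as an endpoint. The accelerator name, listener port, endpoint Region, and NLB ARN are assumptions, and note that the Global Accelerator control-plane API is served from us-west-2 even though the accelerator itself is global:

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="trading-platform",   # hypothetical name
    IpAddressType="IPV4",
    Enabled=True,
)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]

# Attach an existing NLB in one Region as an endpoint group member.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="ap-southeast-1",   # assumed Region
    EndpointConfigurations=[
        {"EndpointId": "arn:aws:elasticloadbalancing:ap-southeast-1:111122223333:loadbalancer/net/trading-nlb/abc123"}  # hypothetical NLB ARN
    ],
)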
References: https://aws.amazon.com/global-accelerator/ https://aws.amazon.com/global-accelerator/faqs/ Check out this AWS Global Accelerator Cheat Sheet: https://tutorialsdojo.com/aws-global-accelerator/", "references": "" },
{ "question": ": A company launched an online platform that allows people to easily buy, sell, spend, and manage their cryptocurrency. To meet the strict IT audit requirements, each of the API calls on all of the AWS resources should be properly captured and recorded. You used CloudTrail in the VPC to help you in the compliance, operational auditing, and risk auditing of your AWS account. In this scenario, where does CloudTrail store all of the logs that it creates?", "options": [ "A. DynamoDB", "B. Amazon S3", "C. Amazon Redshift", "D. A RDS instance" ], "correct": "B. Amazon S3", "explanation": "Explanation CloudTrail is enabled on your AWS account when you create it. When activity occurs in your AWS account, that activity is recorded in a CloudTrail event. You can easily view events in the CloudTrail console by going to Event history. Event history allows you to view, search, and download the past 90 days of supported activity in your AWS account. In addition, you can create a CloudTrail trail to further archive, analyze, and respond to changes in your AWS resources. A trail is a configuration that enables the delivery of events to an Amazon S3 bucket that you specify. You can also deliver and analyze events in a trail with Amazon CloudWatch Logs and Amazon CloudWatch Events. You can create a trail with the CloudTrail console, the AWS CLI, or the CloudTrail API. The rest of the answers are incorrect. DynamoDB and an RDS instance are databases; Amazon Redshift is a data warehouse that scales horizontally and allows you to store terabytes and petabytes of data.
References: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html https://aws.amazon.com/cloudtrail/ Check out this AWS CloudTrail Cheat Sheet: https://tutorialsdojo.com/aws-cloudtrail/", "references": "" },
{ "question": ": An application is using a RESTful API hosted in AWS which uses Amazon API Gateway and AWS Lambda. There is a requirement to trace and analyze user requests as they travel through your Amazon API Gateway APIs to the underlying services. Which of the following is the most suitable service to use to meet this requirement?", "options": [ "A. CloudWatch", "B. CloudTrail", "C. AWS X-Ray", "D. VPC Flow Logs" ], "correct": "C. AWS X-Ray", "explanation": "Explanation You can use AWS X-Ray to trace and analyze user requests as they travel through your Amazon API Gateway APIs to the underlying services. API Gateway supports AWS X-Ray tracing for all API Gateway endpoint types: regional, edge-optimized, and private. You can use AWS X-Ray with Amazon API Gateway in all regions where X-Ray is available. X-Ray gives you an end-to-end view of an entire request, so you can analyze latencies in your APIs and their backend services. You can use an X-Ray service map to view the latency of an entire request and that of the downstream services that are integrated with X-Ray. And you can configure sampling rules to tell X-Ray which requests to record, at what sampling rates, according to criteria that you specify.
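As noted just below, active tracing is a per-stage setting that can also be toggled through the API Gateway API. A hedged boto3 sketch; the REST API ID, stage name, and region are assumptions:

import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")  # region is an assumption

# Turn on X-Ray active tracing for one stage of a REST API.
apigw.update_stage(
    restApiId="a1b2c3d4e5",   # hypothetical REST API ID
    stageName="prod",          # hypothetical stage name
    patchOperations=[
        {"op": "replace", "path": "/tracingEnabled", "value": "true"},
    ],
)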
If you call an API Gateway API from a service that's already being traced, API Gateway passes the trace through, even if X-Ray tracing is not enabled on the API. You can enable X-Ray for an API stage by using the API Gateway management console, or by using the API Gateway API or CLI. VPC Flow Logs is incorrect because this is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your entire VPC. Although it can capture some details about the incoming user requests, it is still better to use AWS X-Ray as it provides a better way to debug and analyze your microservices applications with request tracing so you can find the root cause of your issues and performance problems. CloudWatch is incorrect because this is a monitoring and management service. It does not have the capability to trace and analyze user requests as they travel through your Amazon API Gateway APIs. CloudTrail is incorrect because this is primarily used for IT audits and API logging of all of your AWS resources. It does not have the capability to trace and analyze user requests as they travel through your Amazon API Gateway APIs, unlike AWS X-Ray.", "references": "https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-xray.html Check out this AWS X-Ray Cheat Sheet: https://tutorialsdojo.com/aws-x-ray/ Instrumenting your Application with AWS X-Ray: https://tutorialsdojo.com/instrumenting-your-application-with-aws-x-ray/" },
{ "question": ": A real-time data analytics application is using AWS Lambda to process data and store results in JSON format to an S3 bucket. To speed up the existing workflow, you have to use a service where you can run sophisticated Big Data analytics on your data without moving them into a separate analytics system. Which of the following groups of services can you use to meet this requirement?", "options": [ "A. Amazon X-Ray, Amazon Neptune, DynamoDB", "B. S3 Select, Amazon Neptune, DynamoDB DAX", "C. Amazon Glue, Glacier Select, Amazon Redshift", "D. S3 Select, Amazon Athena, Amazon Redshift Spectrum" ], "correct": "D. S3 Select, Amazon Athena, Amazon Redshift Spectrum", "explanation": "Explanation Amazon S3 allows you to run sophisticated Big Data analytics on your data without moving the data into a separate analytics system. In AWS, there is a suite of tools that make analyzing and processing large amounts of data in the cloud faster, including ways to optimize and integrate existing workflows with Amazon S3: 1. S3 Select Amazon S3 Select is designed to help analyze and process data within an object in Amazon S3 buckets, faster and cheaper. It works by providing the ability to retrieve a subset of data from an object in Amazon S3 using simple SQL expressions. Your applications no longer have to use compute resources to scan and filter the data from an object, potentially increasing query performance by up to 400% and reducing query costs by as much as 80%. You simply change your application to use SELECT instead of GET to take advantage of S3 Select. 2. Amazon Athena Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL expressions. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries you run. Athena is easy to use.
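As a quick illustration of the query-in-place idea, here is a hedged boto3 sketch of running an Athena query over the Lambda results. The database, table, columns, and output location are assumptions, and a table over the JSON objects would have to be defined first (for example with a CREATE EXTERNAL TABLE statement or a Glue crawler):

import boto3

athena = boto3.client("athena", region_name="us-east-1")  # region is an assumption

# Run a standard SQL query directly against the JSON results in S3.
execution = athena.start_query_execution(
    QueryString="SELECT sensor_id, AVG(value) AS avg_value FROM results GROUP BY sensor_id",  # hypothetical table/columns
    QueryExecutionContext={"Database": "analytics"},                       # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://analytics-query-results/"},  # hypothetical bucket
)

# Check the query state (a real client would poll until it is SUCCEEDED).
query_id = execution["QueryExecutionId"]
status = athena.get_query_execution(QueryExecutionId=query_id)
print(status["QueryExecution"]["Status"]["State"])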
Simply point to your data in Amazon S3, define the schema, and start querying using standard SQL expressions. Most results are delivered within seconds. With Athena, there's no need for complex ETL jobs to prepare your data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets. 3. Amazon Redshift Spectrum Amazon Redshift also includes Redshift Spectrum, allowing you to directly run SQL queries against exabytes of unstructured data in Amazon S3. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, ORC, Parquet, RCFile, RegexSerDe, SequenceFile, TextFile, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data being retrieved, so queries against Amazon S3 run fast, regardless of data set size.", "references": "https://aws.amazon.com/s3/features/#Query_in_Place Amazon Redshift Overview: https://youtu.be/jlLERNzhHOg Check out these AWS Cheat Sheets: https://tutorialsdojo.com/amazon-s3/ https://tutorialsdojo.com/amazon-athena/ https://tutorialsdojo.com/amazon-redshift/" },
{ "question": ": A company has a High Performance Computing (HPC) cluster that is composed of EC2 Instances with Provisioned IOPS volumes to process transaction-intensive, low-latency workloads. The Solutions Architect must maintain high IOPS while keeping the latency down by setting the optimal queue length for the volume. The size of each volume is 10 GiB. Which of the following is the MOST suitable configuration that the Architect should set up?", "options": [ "A. Set the IOPS to 400 then maintain a low queue length.", "B. Set the IOPS to 500 then maintain a low queue length.", "C. Set the IOPS to 800 then maintain a low queue length.", "D. Set the IOPS to 600 then maintain a high queue length." ], "correct": "B. Set the IOPS to 500 then maintain a low queue length.", "explanation": "Explanation Provisioned IOPS SSD (io1) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. Unlike gp2, which uses a bucket and credit model to calculate performance, an io1 volume allows you to specify a consistent IOPS rate when you create the volume, and Amazon EBS delivers within 10 percent of the provisioned IOPS performance 99.9 percent of the time over a given year. An io1 volume can range in size from 4 GiB to 16 TiB. You can provision from 100 IOPS up to 64,000 IOPS per volume on Nitro system instance families and up to 32,000 on other instance families. The maximum ratio of provisioned IOPS to requested volume size (in GiB) is 50:1. For example, a 100 GiB volume can be provisioned with up to 5,000 IOPS. On a supported instance type, any volume 1,280 GiB in size or greater allows provisioning up to the 64,000 IOPS maximum (50 \u00d7 1,280 GiB = 64,000). An io1 volume provisioned with up to 32,000 IOPS supports a maximum I/O size of 256 KiB and yields as much as 500 MiB/s of throughput. With the I/O size at the maximum, peak throughput is reached at 2,000 IOPS. A volume provisioned with more than 32,000 IOPS (up to the cap of 64,000 IOPS) supports a maximum I/O size of 16 KiB and yields as much as 1,000 MiB/s of throughput.
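Tying the 50:1 ratio back to the scenario's 10 GiB volume, here is a hedged boto3 sketch of provisioning it at the maximum of 500 IOPS; the Availability Zone and region are assumptions:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# A 10 GiB io1 volume can be provisioned with at most 50 x 10 = 500 IOPS.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # assumed AZ
    Size=10,                          # GiB, from the scenario
    VolumeType="io1",
    Iops=500,                         # the 50:1 ceiling for this size
)
print(volume["VolumeId"])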
The volume queue length is the number of pending I/O requests for a device. Latency is the true end-to-end client time of an I/O operation, in other words, the time elapsed between sending an I/O to EBS and receiving an acknowledgement from EBS that the I/O read or write is complete. Queue length must be correctly calibrated with I/O size and latency to avoid creating bottlenecks either on the guest operating system or on the network link to EBS. Optimal queue length varies for each workload, depending on your particular application's sensitivity to IOPS and latency. If your workload is not delivering enough I/O requests to fully use the performance available to your EBS volume, then your volume might not deliver the IOPS or throughput that you have provisioned. Transaction-intensive applications are sensitive to increased I/O latency and are well-suited for SSD-backed io1 and gp2 volumes. You can maintain high IOPS while keeping latency down by maintaining a low queue length and a high number of IOPS available to the volume. Consistently driving more IOPS to a volume than it has available can cause increased I/O latency. Throughput-intensive applications are less sensitive to increased I/O latency and are well-suited for HDD-backed st1 and sc1 volumes. You can maintain high throughput to HDD-backed volumes by maintaining a high queue length when performing large, sequential I/O. Therefore, for instance, a 10 GiB volume can be provisioned with up to 500 IOPS. Any volume 640 GiB in size or greater allows provisioning up to a maximum of 32,000 IOPS (50 \u00d7 640 GiB = 32,000). Hence, the correct answer is to set the IOPS to 500 then maintain a low queue length. Setting the IOPS to 400 then maintaining a low queue length is incorrect because although a value of 400 is an acceptable value, it is not the maximum value for the IOPS. You will not fully utilize the available IOPS that the volume can offer if you just set it to 400. The options that say: Set the IOPS to 600 then maintain a high queue length and Set the IOPS to 800 then maintain a low queue length are both incorrect because the maximum IOPS for the 10 GiB volume is only 500. Therefore, any value greater than the maximum amount, such as 600 or 800, is wrong. Moreover, you should keep the latency down by maintaining a low queue length, not a high one.
References: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html Amazon EBS Overview - SSD vs HDD: https://youtube.com/watch?v=LW7x8wyLFvw&t=8s Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", "references": "" },
{ "question": ": A Solutions Architect is designing the cloud architecture for the enterprise application suite of the company. Both the web and application tiers need to access the Internet to fetch data from public APIs. However, these servers should be inaccessible from the Internet. Which of the following steps should the Architect implement to meet the above requirements?", "options": [ "A. Deploy the web and application tier instances to a public subnet and then allocate an Elastic", "B. Deploy the web and application tier instances to a private subnet and then allocate an", "C.
Deploy a NAT gateway in the private subnet and add a route to it from the public subnet", "D. Deploy a NAT gateway in the public subnet and add a route to it from the private subnet" ], "correct": "D. Deploy a NAT gateway in the public subnet and add a route to it from the private subnet", "explanation": "Explanation You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances. You are charged for creating and using a NAT gateway in your account. NAT gateway hourly usage and data processing rates apply. Amazon EC2 charges for data transfer also apply. NAT gateways are not supported for IPv6 traffic--use an egress-only internet gateway instead. To create a NAT gateway, you must specify the public subnet in which the NAT gateway should reside. You must also specify an Elastic IP address to associate with the NAT gateway when you create it. The Elastic IP address cannot be changed once you associate it with the NAT gateway. After you've created a NAT gateway, you must update the route table associated with one or more of your private subnets to point Internet-bound traffic to the NAT gateway. This enables instances in your private subnets to communicate with the internet. Each NAT gateway is created in a specific Availability Zone and implemented with redundancy in that zone. You have a limit on the number of NAT gateways you can create in an Availability Zone. Hence, the correct answer is to deploy a NAT gateway in the public subnet and add a route to it from the private subnet where the web and application tiers are hosted. Deploying the web and application tier instances to a private subnet and then allocating an Elastic IP address to each EC2 instance is incorrect because an Elastic IP address is just a static, public IPv4 address. In this scenario, you have to use a NAT gateway instead. Deploying a NAT gateway in the private subnet and adding a route to it from the public subnet where the web and application tiers are hosted is incorrect because you have to deploy the NAT gateway in the public subnet instead, not in a private one. Deploying the web and application tier instances to a public subnet and then allocating an Elastic IP address to each EC2 instance is incorrect because having an EIP address is irrelevant as it is only a static, public IPv4 address. Moreover, you should deploy the web and application tiers in the private subnet instead of a public subnet to make them inaccessible from the Internet, and then just add a NAT gateway to allow outbound Internet connections.
Reference: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate-saa-c02/", "references": "" },
{ "question": ": A company has a web application hosted in AWS cloud where the application logs are sent to Amazon CloudWatch. Lately, the web application has been encountering some errors which can be resolved simply by restarting the instance.
What will you do to automatically restart the EC2 instances whenever the same application error occurs?", "options": [ "A. First, look at the existing CloudWatch logs for keywords related to the application error to create a custom metric. Then, create a CloudWatch alarm for that custom metric which invokes an action to restart the EC2 instance.", "B. First, look at the existing CloudWatch logs for keywords related to the application error to", "C. First, look at the existing Flow logs for keywords related to the application error to create a", "D. First, look at the existing Flow logs for keywords related to the application error to create a" ], "correct": "", "explanation": "Explanation In this scenario, you can look at the existing CloudWatch logs for keywords related to the application error to create a custom metric. Then, create a CloudWatch alarm for that custom metric which invokes an action to restart the EC2 instance. You can create alarms that automatically stop, terminate, reboot, or recover your EC2 instances using Amazon CloudWatch alarm actions. You can use the stop or terminate actions to help you save money when you no longer need an instance to be running. You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs. Hence, the correct answer is: First, look at the existing CloudWatch logs for keywords related to the application error to create a custom metric. Then, create a CloudWatch alarm for that custom metric which invokes an action to restart the EC2 instance. The option that says: First, look at the existing CloudWatch logs for keywords related to the application error to create a custom metric. Then, create an alarm in Amazon SNS for that custom metric which invokes an action to restart the EC2 instance is incorrect because you can't create an alarm in Amazon SNS. The following options are incorrect because Flow Logs are used in VPCs and not on specific EC2 instances: - First, look at the existing Flow logs for keywords related to the application error to create a custom metric. Then, create a CloudWatch alarm for that custom metric which invokes an action to restart the EC2 instance. - First, look at the existing Flow logs for keywords related to the application error to create a custom metric. Then, create a CloudWatch alarm for that custom metric which calls a Lambda function that invokes an action to restart the EC2 instance.", "references": "https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/" },
{ "question": ": A company decided to change its third-party data analytics tool to a cheaper solution. They sent a full data export on a CSV file which contains all of their analytics information. You then save the CSV file to an S3 bucket for storage. Your manager asked you to do some validation on the provided data export. In this scenario, what is the most cost-effective and easiest way to analyze export data using standard SQL?", "options": [ "A. Create a migration tool to load the CSV export file from S3 to a DynamoDB instance. Once", "B.
To be able to run SQL queries, use AWS Athena to analyze the export data file in S3.", "C. Use a migration tool to load the CSV export file from S3 to a database that is designed for", "D. Use mysqldump client utility to load the CSV export file from S3 to a MySQL RDS instance." ], "correct": "B. To be able to run SQL queries, use AWS Athena to analyze the export data file in S3.", "explanation": "Explanation Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and begin using standard SQL to run ad-hoc queries and get results in seconds. Athena is serverless, so there is no infrastructure to set up or manage, and you pay only for the queries you run. Athena scales automatically--executing queries in parallel--so results are fast, even with large datasets and complex queries. Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3. Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC. You can use Athena to run ad-hoc queries using ANSI SQL, without the need to aggregate or load the data into Athena. Hence, the correct answer is: To be able to run SQL queries, use Amazon Athena to analyze the export data file in S3. The rest of the options are all incorrect because it is not necessary to set up a database to be able to analyze the CSV export file. You can use a cost-effective option (AWS Athena), which is a serverless service that enables you to pay only for the queries you run.", "references": "https://docs.aws.amazon.com/athena/latest/ug/what-is.html Check out this Amazon Athena Cheat Sheet: https://tutorialsdojo.com/amazon-athena/" },
{ "question": ": A company has hundreds of VPCs with multiple VPN connections to their data centers spanning 5 AWS Regions. As the number of its workloads grows, the company must be able to scale its networks across multiple accounts and VPCs to keep up. A Solutions Architect is tasked to interconnect all of the company's on-premises networks, VPNs, and VPCs into a single gateway, which includes support for inter-region peering across multiple AWS regions. Which of the following is the BEST solution that the architect should set up to support the required interconnectivity?", "options": [ "A. Set up an AWS VPN CloudHub for inter-region VPC access and a Direct Connect gateway", "B. Set up an AWS Direct Connect Gateway to achieve inter-region VPC access to all of the", "C. Enable inter-region VPC peering that allows peering relationships to be established between", "D. Set up an AWS Transit Gateway in each region to interconnect all networks within it. Then," ], "correct": "D. Set up an AWS Transit Gateway in each region to interconnect all networks within it. Then,", "explanation": "Explanation AWS Transit Gateway is a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway. As you grow the number of workloads running on AWS, you need to be able to scale your networks across multiple accounts and Amazon VPCs to keep up with the growth. Today, you can connect pairs of Amazon VPCs using peering.
However, managing point-to-point connectivity across many Amazon VPCs without the ability to centrally manage the connectivity policies can be operationally costly and cumbersome. For on-premises connectivity, you need to attach your AWS VPN to each individual Amazon VPC. This solution can be time-consuming to build and hard to manage when the number of VPCs grows into the hundreds. With AWS Transit Gateway, you only have to create and manage a single connection from the central gateway to each Amazon VPC, on-premises data center, or remote office across your network. Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks, which act like spokes. This hub-and-spoke model significantly simplifies management and reduces operational costs because each network only has to connect to the Transit Gateway and not to every other network. Any new VPC is simply connected to the Transit Gateway and is then automatically available to every other network that is connected to the Transit Gateway. This ease of connectivity makes it easy to scale your network as you grow. It acts as a Regional virtual router for traffic flowing between your virtual private clouds (VPCs) and VPN connections. A transit gateway scales elastically based on the volume of network traffic. Routing through a transit gateway operates at layer 3, where the packets are sent to a specific next-hop attachment, based on their destination IP addresses. A transit gateway attachment is both a source and a destination of packets. You can attach the following resources to your transit gateway: - One or more VPCs - One or more VPN connections - One or more AWS Direct Connect gateways - One or more transit gateway peering connections If you attach a transit gateway peering connection, the transit gateway must be in a different Region. Hence, the correct answer is: Set up an AWS Transit Gateway in each region to interconnect all networks within it. Then, route traffic between the transit gateways through a peering connection. The option that says: Set up an AWS Direct Connect Gateway to achieve inter-region VPC access to all of the AWS resources and on-premises data centers. Set up a link aggregation group (LAG) to aggregate multiple connections at a single AWS Direct Connect endpoint in order to treat them as a single, managed connection. Launch a virtual private gateway in each VPC and then create a public virtual interface for each AWS Direct Connect connection to the Direct Connect Gateway is incorrect. You can only create a private virtual interface to a Direct Connect gateway and not a public virtual interface. Using a link aggregation group (LAG) is also irrelevant in this scenario because it is just a logical interface that uses the Link Aggregation Control Protocol (LACP) to aggregate multiple connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection. The option that says: Enable inter-region VPC peering which allows peering relationships to be established between VPCs across different AWS regions. This will ensure that the traffic will always stay on the global AWS backbone and will never traverse the public Internet is incorrect.
This would require a lot of manual setup and management overhead to successfully build a functional, error-free inter-region VPC network compared with just using a Transit Gateway. Although Inter-Region VPC Peering provides a cost-effective way to share resources between regions or replicate data for geographic redundancy, its connections are not dedicated and highly available. Moreover, it doesn't support the company's on-premises data centers in multiple AWS Regions. The option that says: Set up an AWS VPN CloudHub for inter-region VPC access and a Direct Connect gateway for the VPN connections to the on-premises data centers. Create a virtual private gateway in each VPC, then create a private virtual interface for each AWS Direct Connect connection to the Direct Connect gateway is incorrect. This option doesn't meet the requirement of interconnecting all of the company's on-premises networks, VPNs, and VPCs into a single gateway, which includes support for inter-region peering across multiple AWS regions. As its name implies, the AWS VPN CloudHub is only for VPNs and not for VPCs. It is also not capable of managing hundreds of VPCs with multiple VPN connections to their data centers that span multiple AWS Regions.
References: https://aws.amazon.com/transit-gateway/ https://docs.aws.amazon.com/vpc/latest/tgw/how-transit-gateways-work.html https://aws.amazon.com/blogs/networking-and-content-delivery/building-a-global-network-using-aws-transit-gateway-inter-region-peering/ Check out this AWS Transit Gateway Cheat Sheet: https://tutorialsdojo.com/aws-transit-gateway/", "references": "" },
{ "question": ": A popular augmented reality (AR) mobile game is heavily using a RESTful API which is hosted in AWS. The API uses Amazon API Gateway and a DynamoDB table with a preconfigured read and write capacity. Based on your systems monitoring, the DynamoDB table begins to throttle requests during high peak loads, which causes the slow performance of the game. Which of the following can you do to improve the performance of your app?", "options": [ "A. Add the DynamoDB table to an Auto Scaling Group.", "B. Create an SQS queue in front of the DynamoDB table.", "C. Integrate an Application Load Balancer with your DynamoDB table.", "D. Use DynamoDB Auto Scaling" ], "correct": "D. Use DynamoDB Auto Scaling", "explanation": "Explanation DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling. When the workload decreases, Application Auto Scaling decreases the throughput so that you don't pay for unused provisioned capacity. Using DynamoDB Auto Scaling is the best answer. DynamoDB Auto Scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf. Integrating an Application Load Balancer with your DynamoDB table is incorrect because an Application Load Balancer is not suitable to be used with DynamoDB and, in addition, this will not increase the throughput of your DynamoDB table.
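For completeness, the correct approach above is configured through the Application Auto Scaling API. A hedged boto3 sketch for a table's read capacity; the table name, capacity limits, and region are assumptions:

import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")  # region assumed

# Register the table's read capacity as a scalable target (table name is hypothetical).
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Track a target utilization so provisioned capacity follows actual traffic.
autoscaling.put_scaling_policy(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyName="read-capacity-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {"PredefinedMetricType": "DynamoDBReadCapacityUtilization"},
    },
)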
Adding the DynamoDB table to an Auto Scaling Group is incorrect because you usually put EC2 instances in an Auto Scaling Group, not a DynamoDB table. Creating an SQS queue in front of the DynamoDB table is incorrect because this is not a design principle for a high-throughput DynamoDB table. Using SQS is for handling the queuing and polling of requests. This will not increase the throughput of DynamoDB, which is required in this situation.", "references": "https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/ Amazon DynamoDB Overview: https://youtube.com/watch?v=3ZOyUNIeorU" },
{ "question": ": A new company policy requires IAM users to change their passwords' minimum length to 12 characters. After a random inspection, you found out that there are still employees who do not follow the policy. How can you automatically check and evaluate whether the current password policy for an account complies with the company password policy?", "options": [ "A. Create a Scheduled Lambda Function that will run a custom script to check compliance", "B. Create a CloudTrail trail. Filter the result by setting the attribute to \"Event Name\" and", "C. Create a rule in the Amazon CloudWatch event. Build an event pattern to match events on", "D. Configure AWS Config to trigger an evaluation that will check the compliance for a user's" ], "correct": "D. Configure AWS Config to trigger an evaluation that will check the compliance for a user's", "explanation": "Explanation AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. In the given scenario, we can utilize AWS Config to check for compliance on the password policy by configuring a Config rule to check the IAM_PASSWORD_POLICY on an account. Additionally, because Config integrates with AWS Organizations, we can improve the setup to aggregate compliance information across accounts to a central dashboard. Hence, the correct answer is: Configure AWS Config to trigger an evaluation that will check the compliance for a user's password periodically. Create a CloudTrail trail. Filter the result by setting the attribute to \"Event Name\" and lookup value to \"ChangePassword\". This easily gives you the list of users who have made changes to their passwords is incorrect because this setup will just give you the list of users who have made changes to their respective passwords. It will not give you the ability to check whether their passwords have met the required minimum length. Create a Scheduled Lambda function that will run a custom script to check compliance against changes made to the passwords periodically is a valid solution but still incorrect. AWS Config is already integrated with AWS Lambda. You don't have to create and manage your own Lambda function. You just have to define a Config rule where you will check compliance, and Lambda will process the evaluation. Moreover, you can't directly create a scheduled function by using Lambda itself. You have to create a rule in AWS CloudWatch Events to run the Lambda functions on the schedule that you define.
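Building on the point above about simply defining a Config rule, here is a hedged boto3 sketch that enables the AWS-managed IAM_PASSWORD_POLICY rule with a 12-character minimum; the rule name, evaluation frequency, and region are assumptions:

import json
import boto3

config = boto3.client("config", region_name="us-east-1")  # region is an assumption

# Enable the managed IAM_PASSWORD_POLICY rule and require at least 12 characters.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "iam-password-policy-minimum-length",  # hypothetical name
        "Source": {"Owner": "AWS", "SourceIdentifier": "IAM_PASSWORD_POLICY"},
        "InputParameters": json.dumps({"MinimumPasswordLength": "12"}),
        "MaximumExecutionFrequency": "TwentyFour_Hours",  # periodic evaluation
    }
)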
Create a rule in the Amazon CloudWatch event. Build an event pattern to match events on IAM. Set the event name to \"ChangePassword\" in the event pattern. Configure SNS to send notifications to you whenever a user has made changes to his password is incorrect because this setup will just alert you whenever a user changes his password. Sure, you'll have information about who made changes, but that is not enough to check whether it complies with the required minimum password length. This can be easily done in AWS Config.
References:
https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config-rules.html
https://aws.amazon.com/config/
Check out this AWS Config Cheat Sheet:
https://tutorialsdojo.com/aws-config/", "references": "" }, { "question": ": A company has stored 200 TB of backup files in Amazon S3. The files are in a vendor-proprietary format. The Solutions Architect needs to use the vendor's proprietary file conversion software to retrieve the files from their Amazon S3 bucket, transform the files to an industry-standard format, and re-upload the files back to Amazon S3. The solution must minimize the data transfer costs. Which of the following options can satisfy the given requirement?", "options": [ "A. Export the data using AWS Snowball Edge device. Install the file conversion software on", "B. Deploy the EC2 instance in a different Region. Install the conversion software on the", "C. Install the file conversion software in Amazon S3. Use S3 Batch Operations to perform data", "D. Deploy the EC2 instance in the same Region as Amazon S3. Install the file conversion" ], "correct": "D. Deploy the EC2 instance in the same Region as Amazon S3. Install the file conversion", "explanation": "Explanation
Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet. It's a simple storage service that offers industry-leading durability, availability, performance, security, and virtually unlimited scalability at very low costs. Amazon S3 is also designed to be highly flexible. Store any type and amount of data that you want; read the same piece of data a million times or only for emergency disaster recovery; build a simple FTP application or a sophisticated web application.
You pay for all bandwidth into and out of Amazon S3, except for the following:
- Data transferred in from the Internet.
- Data transferred out to an Amazon EC2 instance, when the instance is in the same AWS Region as the S3 bucket (including to a different account in the same AWS Region).
- Data transferred out to Amazon CloudFront.
To minimize the data transfer charges, you need to deploy the EC2 instance in the same Region as Amazon S3. Take note that there is no data transfer cost between S3 and EC2 in the same AWS Region. Install the conversion software on the instance to perform data transformation and re-upload the data to Amazon S3.
Hence, the correct answer is: Deploy the EC2 instance in the same Region as Amazon S3. Install the file conversion software on the instance. Perform data transformation and re-upload it to Amazon S3.
The option that says: Install the file conversion software in Amazon S3. Use S3 Batch Operations to perform data transformation is incorrect because it is not possible to install the software in Amazon S3. S3 Batch Operations just runs multiple S3 operations in a single request.
It can't be integrated with your conversion software.
The option that says: Export the data using AWS Snowball Edge device. Install the file conversion software on the device. Transform the data and re-upload it to Amazon S3 is incorrect. Although this is possible, it is not mentioned in the scenario that the company has an on-premises data center. Thus, there's no need for Snowball.
The option that says: Deploy the EC2 instance in a different Region. Install the file conversion software on the instance. Perform data transformation and re-upload it to Amazon S3 is incorrect because this approach wouldn't minimize the data transfer costs. You should deploy the instance in the same Region as Amazon S3.
References:
https://aws.amazon.com/s3/pricing/
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html
Check out this Amazon S3 Cheat Sheet:
https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A web application requires a minimum of six Amazon Elastic Compute Cloud (EC2) instances running at all times. You are tasked to deploy the application to three availability zones in the EU Ireland region (eu-west-1a, eu-west-1b, and eu-west-1c). It is required that the system is fault-tolerant up to the loss of one Availability Zone. Which of the following setups is the most cost-effective solution which also maintains the fault-tolerance of your system?", "options": [ "A. 2 instances in eu-west-1a, 2 instances in eu-west-1b, and 2 instances in eu-west-1c", "B. 6 instances in eu-west-1a, 6 instances in eu-west-1b, and no instances in eu-west-1c", "C. 6 instances in eu-west-1a, 6 instances in eu-west-1b, and 6 instances in eu-west-1c", "D. 3 instances in eu-west-1a, 3 instances in eu-west-1b, and 3 instances in eu-west-1c" ], "correct": "D. 3 instances in eu-west-1a, 3 instances in eu-west-1b, and 3 instances in eu-west-1c", "explanation": "Explanation
Basically, fault-tolerance is the ability of a system to remain in operation even in the event that some of its components fail, without any service degradation. In AWS, it can also refer to the minimum number of running EC2 instances or resources which should be running at all times in order for the system to properly operate and serve its consumers. Take note that this is quite different from the concept of High Availability, which is just concerned with having at least one running instance or resource in case of failure.
In this scenario, 3 instances in eu-west-1a, 3 instances in eu-west-1b, and 3 instances in eu-west-1c is the correct answer because even if there was an outage in one of the Availability Zones, the system still satisfies the requirement of having a minimum of 6 running instances. It is also the most cost-effective solution among the other options.
The option that says: 6 instances in eu-west-1a, 6 instances in eu-west-1b, and 6 instances in eu-west-1c is incorrect because although this solution provides the maximum fault-tolerance for the system, it entails a significant cost to maintain a total of 18 instances across 3 AZs.
The option that says: 2 instances in eu-west-1a, 2 instances in eu-west-1b, and 2 instances in eu-west-1c is incorrect because if one Availability Zone goes down, there will only be 4 running instances available.
Although this is the most cost-effective option, it does not provide fault-tolerance.
The option that says: 6 instances in eu-west-1a, 6 instances in eu-west-1b, and no instances in eu-west-1c is incorrect because although it provides fault-tolerance, it is not the most cost-effective solution as compared with the options above. This solution has 12 running instances, unlike the correct answer which only has 9 instances.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-increase-availability.html
https://media.amazonwebservices.com/AWS_Building_Fault_Tolerant_Applications.pdf", "references": "" }, { "question": ": The company you are working for has a set of AWS resources hosted in the ap-northeast-1 region. You have been asked by your IT Manager to create an AWS CLI shell script that will call an AWS service which could create duplicate resources in another region in the event that the ap-northeast-1 region fails. The duplicated resources should also contain the VPC Peering configuration and other networking components from the primary stack. Which of the following AWS services could help fulfill this task?", "options": [ "A. AWS CloudFormation", "B. Amazon LightSail", "C. Amazon SNS", "D. Amazon SQS" ], "correct": "A. AWS CloudFormation", "explanation": "Explanation
AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.
You can create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. With this, you can deploy an exact copy of your AWS architecture, along with all of the AWS resources which are hosted in one region, to another.
Hence, the correct answer is AWS CloudFormation.
Amazon LightSail is incorrect because you can't use this to duplicate your resources in your VPC. You have to use CloudFormation instead.
Amazon SQS and Amazon SNS are both incorrect because SNS and SQS are just messaging services.
References:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-cli-creating-stack.html
Check out this AWS CloudFormation Cheat Sheet:
https://tutorialsdojo.com/aws-cloudformation/
AWS CloudFormation - Templates, Stacks, Change Sets:
https://youtube.com/watch?v=9Xpuprxg7aY", "references": "" }, { "question": ": A technology company is building a new cryptocurrency trading platform that allows the buying and selling of Bitcoin, Ethereum, Ripple, Tether, and many others. You were hired as a Cloud Engineer to build the required infrastructure needed for this new trading platform. On your first week at work, you started to create CloudFormation YAML scripts that define all of the needed AWS resources for the application. Your manager was shocked that you haven't created the EC2 instances, S3 buckets, and other AWS resources straight away. He does not understand the text-based scripts that you have done and has asked for your clarification.
In this scenario, what are the benefits of using the Amazon CloudFormation service that you should tell your manager to clarify his concerns? (Select TWO.)", "options": [ "A. Enables modeling, provisioning, and version-controlling of your entire AWS infrastructure", "B. Allows you to model your entire infrastructure in a text file", "C. A storage location for the code of your application", "D. Provides highly durable and scalable data storage" ], "correct": "A. Enables modeling, provisioning, and version-controlling of your entire AWS infrastructure, B. Allows you to model your entire infrastructure in a text file", "explanation": "Explanation
AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This file serves as the single source of truth for your cloud environment. AWS CloudFormation is available at no additional charge, and you pay only for the AWS resources needed to run your applications.
Hence, the correct answers are:
- Enables modeling, provisioning, and version-controlling of your entire AWS infrastructure
- Allows you to model your entire infrastructure in a text file
The option that says: Provides highly durable and scalable data storage is incorrect because CloudFormation is not a data storage service.
The option that says: A storage location for the code of your application is incorrect because CloudFormation is not used to store your application code. You have to use CodeCommit as a code repository and not CloudFormation.
The option that says: Using CloudFormation itself is free, including the AWS resources that have been created is incorrect because although the use of the CloudFormation service is free, you have to pay for the AWS resources that you created.
References:
https://aws.amazon.com/cloudformation/
https://aws.amazon.com/cloudformation/faqs/
Check out this AWS CloudFormation Cheat Sheet:
https://tutorialsdojo.com/aws-cloudformation/", "references": "" }, { "question": ": A data analytics company, which uses machine learning to collect and analyze consumer data, is using a Redshift cluster as its data warehouse. You are instructed to implement a disaster recovery plan for their systems to ensure business continuity even in the event of an AWS region outage. Which of the following is the best approach to meet this requirement?", "options": [ "A. Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster.", "B. Create a scheduled job that will automatically take the snapshot of your Redshift Cluster", "C. Use Automated snapshots of your Redshift Cluster.", "D. Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse" ], "correct": "A. Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster.", "explanation": "Explanation
You can configure Amazon Redshift to copy snapshots for a cluster to another region. To configure cross-region snapshot copy, you need to enable this copy feature for each cluster and configure where to copy snapshots and how long to keep copied automated snapshots in the destination region. When cross-region copy is enabled for a cluster, all new manual and automatic snapshots are copied to the specified region.
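As an illustration, enabling cross-region snapshot copy for a cluster can be done with a single boto3 call; the cluster identifier, destination region, and retention period below are assumptions for the example only.

import boto3

redshift = boto3.client("redshift")

# Copy all new automated and manual snapshots of this cluster to another region.
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",   # placeholder cluster name
    DestinationRegion="us-west-2",           # any region other than the cluster's own
    RetentionPeriod=7,                       # days to keep copied automated snapshots
)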
The option that says: Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and store it to an S3 bucket. Restore the snapshot in case of an AWS region outage is incorrect because although this option is possible, it entails a lot of manual work and hence is not the best option. You should configure cross-region snapshot copy instead.
The option that says: Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can withstand an outage of an entire AWS region is incorrect because although Amazon Redshift is a fully-managed data warehouse, you will still need to configure cross-region snapshot copy to ensure that your data is properly replicated to another region.
Using Automated snapshots of your Redshift Cluster is incorrect because using automated snapshots is not enough and they will not be available in case the entire AWS region is down.", "references": "https://docs.aws.amazon.com/redshift/latest/mgmt/managing-snapshots-console.html
Amazon Redshift Overview:
https://youtu.be/jlLERNzhHOg
Check out this Amazon Redshift Cheat Sheet:
https://tutorialsdojo.com/amazon-redshift/" }, { "question": ": A company has a distributed application in AWS that periodically processes large volumes of data across multiple instances. The Solutions Architect designed the application to recover gracefully from any instance failures. He is then required to launch the application in the most cost-effective way. Which type of EC2 instance will meet this requirement?", "options": [ "A. Dedicated instances", "B. Reserved instances", "C. Spot Instances", "D. On-Demand instances" ], "correct": "C. Spot Instances", "explanation": "Explanation
You require an EC2 instance that is the most cost-effective among the other types. In addition, the application it will host is designed to gracefully recover in case of instance failures.
In terms of cost-effectiveness, Spot and Reserved Instances are the top options. And since the application can gracefully recover from instance failures, the Spot Instance is the best option for this case as it is the cheapest type of EC2 instance. Remember that when you use Spot Instances, there will be interruptions. Amazon EC2 can interrupt your Spot Instance when the Spot price exceeds your maximum price, when the demand for Spot Instances rises, or when the supply of Spot Instances decreases.
Hence, the correct answer is: Spot Instances.
Reserved instances is incorrect. Although you can also use Reserved Instances to save costs, they entail a commitment to 1-year or 3-year terms of usage. Since your processes only run periodically, you won't be able to maximize the discounted price of using Reserved Instances.
Dedicated instances and On-Demand instances are also incorrect because Dedicated and On-Demand instances are not a cost-effective solution to use for your application.", "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/how-spot-instances-work.html
Check out this Amazon EC2 Cheat Sheet:
https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/
Here is an in-depth look at Spot Instances:
https://youtu.be/PKvss-RgSjI" }, { "question": ": A company plans to reduce the amount of data that Amazon S3 transfers to the servers in order to lower the operating costs as well as lower the latency of retrieving the data. To accomplish this, you need to use simple structured query language (SQL) statements to filter the contents of Amazon S3 objects and retrieve just the subset of data that you need. Which of the following services will help you accomplish this requirement?", "options": [ "A. S3 Select", "B. Redshift Spectrum", "C. RDS", "D. AWS Step Functions" ], "correct": "A. S3 Select", "explanation": "Explanation
With Amazon S3 Select, you can use simple structured query language (SQL) statements to filter the contents of Amazon S3 objects and retrieve just the subset of data that you need. By using Amazon S3 Select to filter this data, you can reduce the amount of data that Amazon S3 transfers, which reduces the cost and latency to retrieve this data.
Amazon S3 Select works on objects stored in CSV, JSON, or Apache Parquet format. It also works with objects that are compressed with GZIP or BZIP2 (for CSV and JSON objects only), and server-side encrypted objects. You can specify the format of the results as either CSV or JSON, and you can determine how the records in the result are delimited.
RDS is incorrect. Although RDS is an SQL database where you can perform SQL operations, it is still not valid because you want to apply SQL transactions on S3 itself, and not on the database, which RDS cannot do.
Redshift Spectrum is incorrect. Although Amazon Redshift Spectrum provides a similar in-query functionality to S3 Select, this service is more suitable for querying your data from the Redshift external tables hosted in S3. The Redshift queries are run on your cluster resources against local disk. Redshift Spectrum queries run using per-query scale-out resources against data in S3, which can entail additional costs compared with S3 Select.
AWS Step Functions is incorrect because this only lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/selecting-content-from-objects.html
https://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html
Check out these AWS Cheat Sheets:
https://tutorialsdojo.com/amazon-s3/
https://tutorialsdojo.com/amazon-athena/
https://tutorialsdojo.com/amazon-redshift/", "references": "" }, { "question": ": A company plans to migrate a NoSQL database to an EC2 instance. The database is configured to replicate the data automatically to keep multiple copies of data for redundancy. The Solutions Architect needs to launch an instance that has a high IOPS and sequential read/write access. Which of the following options fulfills the requirement if I/O throughput is the highest priority?", "options": [ "A. Use General purpose instances with EBS volume.", "B. Use Memory optimized instances with EBS volume.", "C. Use Storage optimized instances with instance store volume.", "D. Use Compute optimized instance with instance store volume." ], "correct": "C. Use Storage optimized instances with instance store volume.",
"explanation": "Explanation
Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload.
A storage optimized instance is designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications. Some instance types can drive more I/O throughput than what you can provision for a single EBS volume. You can join multiple volumes together in a RAID 0 configuration to use the available bandwidth for these instances.
Based on the given scenario, the NoSQL database will be migrated to an EC2 instance. The suitable instance types for a NoSQL database are the I3 and I3en instances. Also, the primary data storage for I3 and I3en instances is non-volatile memory express (NVMe) SSD instance store volumes. Since the data is replicated automatically, there will be no problem using an instance store volume.
Hence, the correct answer is: Use Storage optimized instances with instance store volume.
The option that says: Use Compute optimized instances with instance store volume is incorrect because this type of instance is ideal for compute-bound applications that benefit from high-performance processors. It is not suitable for a NoSQL database.
The option that says: Use General purpose instances with EBS volume is incorrect because this instance only provides a balance of computing, memory, and networking resources. Take note that the requirement in the scenario is high sequential read and write access. Therefore, you must use a storage optimized instance.
The option that says: Use Memory optimized instances with EBS volume is incorrect. Although this type of instance is suitable for a NoSQL database, it is not designed for workloads that require high, sequential read and write access to very large data sets on local storage.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage-optimized-instances.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html
Amazon EC2 Overview:
https://youtube.com/watch?v=7VsGIHT_jQE
Check out this Amazon EC2 Cheat Sheet:
https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/", "references": "" }, { "question": ": A company needs to implement a solution that will process real-time streaming data of its users across the globe. This will enable them to track and analyze globally-distributed user activity on their website and mobile applications, including clickstream analysis. The solution should process the data in close geographical proximity to their users and respond to user requests at low latencies. Which of the following is the most suitable solution for this scenario?", "options": [ "A. Use a CloudFront web distribution and Route 53 with a latency-based routing policy, in", "B. Integrate CloudFront with Lambda@Edge in order to process the data in close",
"C. Integrate CloudFront with Lambda@Edge in order to process the data in close", "D. Use a CloudFront web distribution and Route 53 with a Geoproximity routing policy in" ], "correct": "C. Integrate CloudFront with Lambda@Edge in order to process the data in close", "explanation": "Explanation
Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance and reduces latency. With Lambda@Edge, you don't have to provision or manage infrastructure in multiple locations around the world. You pay only for the compute time you consume - there is no charge when your code is not running.
With Lambda@Edge, you can enrich your web applications by making them globally distributed and improving their performance -- all with zero server administration. Lambda@Edge runs your code in response to events generated by the Amazon CloudFront content delivery network (CDN). Just upload your code to AWS Lambda, which takes care of everything required to run and scale your code with high availability at an AWS location closest to your end user.
By using Lambda@Edge and Kinesis together, you can process real-time streaming data so that you can track and analyze globally-distributed user activity on your website and mobile applications, including clickstream analysis.
Hence, the correct answer in this scenario is the option that says: Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket.
The options that say: Use a CloudFront web distribution and Route 53 with a latency-based routing policy, in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket and Use a CloudFront web distribution and Route 53 with a Geoproximity routing policy in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket are both incorrect because you can only route traffic using Route 53 since it does not have any computing capability. This solution would not be able to process and return the data in close geographical proximity to your users since it is not using Lambda@Edge.
The option that says: Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Amazon Athena and durably store the results to an Amazon S3 bucket is incorrect because although using Lambda@Edge is correct, Amazon Athena is just an interactive query service that enables you to easily analyze data in Amazon S3 using standard SQL. Kinesis should be used to process the streaming data in real-time.
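To sketch how the correct option might look in code, the following is a rough, illustrative Lambda@Edge viewer-request handler that forwards a minimal clickstream record to a Kinesis data stream. The stream name, region, and record fields are assumptions for the example, not details from the scenario.

import json
import boto3

# The Kinesis stream name and region are assumed for illustration.
kinesis = boto3.client("kinesis", region_name="us-east-1")

def handler(event, context):
    # Lambda@Edge viewer-request events carry the CloudFront request record.
    request = event["Records"][0]["cf"]["request"]

    # Build a minimal clickstream record from the request metadata.
    click = {
        "uri": request["uri"],
        "clientIp": request["clientIp"],
        "userAgent": request["headers"].get("user-agent", [{"value": ""}])[0]["value"],
    }

    # Push the event to Kinesis for downstream processing and durable storage in S3.
    kinesis.put_record(
        StreamName="clickstream",
        Data=json.dumps(click),
        PartitionKey=click["clientIp"],
    )

    # Return the request unchanged so CloudFront continues serving the content.
    return request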
References:
https://aws.amazon.com/lambda/edge/
https://aws.amazon.com/blogs/networking-and-content-delivery/global-data-ingestion-with-amazon-cloudfront-and-lambdaedge/", "references": "" }, { "question": ": A company is using an On-Demand EC2 instance to host a legacy web application that uses an Amazon Instance Store-Backed AMI. The web application should be decommissioned as soon as possible and hence, you need to terminate the EC2 instance. When the instance is terminated, what happens to the data on the root volume?", "options": [ "A. Data is automatically saved as an EBS snapshot.", "B. Data is unavailable until the instance is restarted.", "C. Data is automatically deleted.", "D. Data is automatically saved as an EBS volume." ], "correct": "C. Data is automatically deleted.", "explanation": "Explanation
AMIs are categorized as either backed by Amazon EBS or backed by instance store. The former means that the root device for an instance launched from the AMI is an Amazon EBS volume created from an Amazon EBS snapshot. The latter means that the root device for an instance launched from the AMI is an instance store volume created from a template stored in Amazon S3.
The data on instance store volumes persists only during the life of the instance, which means that if the instance is terminated, the data will be automatically deleted.
Hence, the correct answer is: Data is automatically deleted.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html
Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide:
https://tutorialsdojo.com/aws-certified-solutions-architect-associate/", "references": "" }, { "question": ": A company launched a global news website that is deployed to AWS and is using MySQL RDS. The website has millions of viewers from all over the world, which means that the website has read-heavy database workloads. All database transactions must be ACID compliant to ensure data integrity. In this scenario, which of the following is the best option to use to increase the read throughput on the MySQL database?", "options": [ "A. Enable Amazon RDS Read Replicas", "B. Use SQS to queue up the requests", "C. Enable Amazon RDS Standby Replicas", "D. Enable Multi-AZ deployments" ], "correct": "A. Enable Amazon RDS Read Replicas", "explanation": "Explanation
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, Oracle, and PostgreSQL as well as Amazon Aurora.
Enabling Multi-AZ deployments is incorrect because the Multi-AZ deployments feature is mainly used to achieve high availability and failover support for your database.
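Coming back to the correct option, a read replica of an existing RDS for MySQL instance can be created with a single boto3 call, roughly as sketched below; the instance identifiers, instance class, and Availability Zone are placeholder assumptions.

import boto3

rds = boto3.client("rds")

# Create a read replica of the source MySQL instance to serve read-heavy traffic.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="news-db-replica-1",      # new replica name (assumed)
    SourceDBInstanceIdentifier="news-db",          # existing source instance (assumed)
    DBInstanceClass="db.r5.large",
    AvailabilityZone="us-east-1b",
)

Additional replicas can be created the same way, and the application's read traffic can then be spread across them.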
Enabling Amazon RDS Standby Replicas is incorrect because a standby replica is used in Multi-AZ deployments and hence, it is not a solution to reduce read-heavy database workloads.
Using SQS to queue up the requests is incorrect. Although an SQS queue can effectively manage the requests, it won't be able to entirely improve the read throughput of the database by itself.
References:
https://aws.amazon.com/rds/details/read-replicas/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
Amazon RDS Overview:
https://youtube.com/watch?v=aZmpLl8K1UU
Check out this Amazon RDS Cheat Sheet:
https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/", "references": "" }, { "question": ": A company is using the AWS Directory Service to integrate their on-premises Microsoft Active Directory (AD) domain with their Amazon EC2 instances via an AD connector. The below identity-based policy is attached to the IAM identities that use the AWS Directory Service:
1. {
2.   \"Version\":\"2012-10-17\",
3.   \"Statement\":[
4.     {
5.       \"Sid\":\"DirectoryTutorialsDojo1234\",
6.       \"Effect\":\"Allow\",
7.       \"Action\":[
8.         \"ds:*\"
9.       ],
10.      \"Resource\":\"arn:aws:ds:us-east-1:987654321012:directory/d-1234567890\"
11.    },
12.    {
13.      \"Effect\":\"Allow\",
14.      \"Action\":[
15.        \"ec2:*\"
16.      ],
17.      \"Resource\":\"*\"
18.    }
19.  ]
20. }", "options": [ "A. Allows all AWS Directory Service (ds) calls as long as the resource contains the", "B. Allows all AWS Directory Service (ds) calls as long as the resource contains the", "C. Allows all AWS Directory Service (ds) calls as long as the resource contains the", "D. Allows all AWS Directory Service (ds) calls as long as the resource contains the" ], "correct": "D. Allows all AWS Directory Service (ds) calls as long as the resource contains the", "explanation": "Explanation
AWS Directory Service provides multiple ways to use Amazon Cloud Directory and Microsoft Active Directory (AD) with other AWS services. Directories store information about users, groups, and devices, and administrators use them to manage access to information and resources. AWS Directory Service provides multiple directory choices for customers who want to use existing Microsoft AD or Lightweight Directory Access Protocol (LDAP)-aware applications in the cloud. It also offers those same choices to developers who need a directory to manage users, groups, devices, and access.
Every AWS resource is owned by an AWS account, and permissions to create or access the resources are governed by permissions policies. An account administrator can attach permissions policies to IAM identities (that is, users, groups, and roles), and some services (such as AWS Lambda) also support attaching permissions policies to resources.
The following resource policy example allows all ds calls as long as the resource contains the directory ID \"d-1234567890\".
{ \"Version\":\"2012-10-17\", \"Statement\":[ { \"Sid\":\"VisualEditor0\", \"Effect\":\"Allow\", 59 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam \"Action\":[ \"ds:*\" ], \"Resource\":\"arn:aws:ds:us-east-1:123456789012:direc tory/d-1234567890\" }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:*\" ], \"Resource\":\"*\" }] } Certkingdom Hence, the correct answer is the option that says: Allows all AWS Directory Service (ds) calls as long as the resource contains the directory ID: d-123456789 0. The option that says: Allows all AWS Directory Serv ice (ds) calls as long as the resource contains the directory ID: DirectoryTutorialsDojo1234 is incorre ct because DirectoryTutorialsDojo1234 is the Statement ID (SID) and not the Directory ID. The option that says: Allows all AWS Directory Serv ice (ds) calls as long as the resource contains the directory ID: 987654321012 is incorrect because the numbers: 987654321012 is the Account ID and not the Directory ID. The option that says: Allows all AWS Directory Serv ice (ds) calls as long as the resource contains the directory name of: DirectoryTutorialsDojo1234 is in correct because DirectoryTutorialsDojo1234 is the Statement ID (SID) and not the Directory name. 60 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam References: https://docs.aws.amazon.com/directoryservice/latest /admin-guide/IAM_Auth_Access_IdentityBased.html https://docs.aws.amazon.com/directoryservice/latest /admin-guide/IAM_Auth_Access_Overview.html AWS Identity Services Overview: https://youtube.com/watch?v=AIdUw0i8rr0 Check out this AWS Identity & Access Management (IA M) Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/", "references": "" }, { "question": ": A company recently launched an e-commerce applicati on that is running in eu-east-2 region, which stric tly requires six EC2 instances running at all times. In that region, there are 3 Availability Zones (AZ) t hat you can use - eu-east-2a, eu-east-2b, and eu-east-2c. Which of the following deployments provide 100% fau lt tolerance if any single AZ in the region becomes unavailable? (Select TWO.)", "options": [ "A. eu-east-2a with four EC2 instances, eu-east-2b with two EC2 instances, and eu-east-2c with", "B. eu-east-2a with two EC2 instances, eu-east-2b with four EC2 instances, and eu-east-2c with", "C. eu-east-2a with six EC2 instances, eu-east-2b with six EC2 instances, and eu-east-2c with no", "D. eu-east-2a with three EC2 instances, eu-east-2 b with three EC2 instances, and eu-east-2c" ], "correct": "", "explanation": "Explanation Fault Tolerance is the ability of a system to remai n in operation even if some of the components used to build the system fail. In AWS, this means that in t he event of server fault or system failures, the nu mber of running EC2 instances should not fall below the min imum number of instances required by the system for 61 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam it to work properly. So if the application requires a minimum of 6 instances, there should be at least 6 instances running in case there is an outage in one of the Availability Zones or if there are server i ssues. Certkingdom 62 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam In this scenario, you have to simulate a situation where one Availability Zone became unavailable for each option and check whether it still has 6 running ins tances. 
Hence, the correct answers are: eu-east-2a with six EC2 instances, eu-east-2b with six EC2 instances, and eu-east-2c with no EC2 instances and eu-east-2a with three EC2 instances, eu-east-2b with three EC2 instances, and eu-east-2c with three EC2 instances because even if one of the Availability Zones were to go down, there would still be 6 active instances.", "references": "https://media.amazonwebservices.com/AWS_Building_Fault_Tolerant_Applications.pdf" }, { "question": ": A newly hired Solutions Architect is checking all of the security groups and network access control list rules of the company's AWS resources. For security purposes, the MS SQL connection via port 1433 of the database tier should be secured. Below is the security group configuration of their Microsoft SQL Server database:
The application tier hosted in an Auto Scaling group of EC2 instances is the only identified resource that needs to connect to the database. The Architect should ensure that the architecture complies with the best practice of granting least privilege. Which of the following changes should be made to the security group configuration?", "options": [ "A. For the MS SQL rule, change the Source to the Network ACL ID attached to the", "B. For the MS SQL rule, change the Source to the security group ID attached to the", "C. For the MS SQL rule, change the Source to the EC2 instance IDs of the underlying", "D. For the MS SQL rule, change the Source to the static AnyCast IP address attached to the" ], "correct": "B. For the MS SQL rule, change the Source to the security group ID attached to the", "explanation": "Explanation
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC can be assigned to a different set of security groups.
If you launch an instance using the Amazon EC2 API or a command line tool and you don't specify a security group, the instance is automatically assigned to the default security group for the VPC. If you launch an instance using the Amazon EC2 console, you have an option to create a new security group for the instance. For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.
Amazon security groups and network ACLs don't filter traffic to or from link-local addresses (169.254.0.0/16) or AWS reserved IPv4 addresses (these are the first four IPv4 addresses of the subnet, including the Amazon DNS server address for the VPC). Similarly, flow logs do not capture IP traffic to or from these addresses.
In the scenario, the security group configuration allows any server (0.0.0.0/0) from anywhere to establish an MS SQL connection to the database via port 1433. The most suitable solution here is to change the Source field to the security group ID attached to the application tier.
Hence, the correct answer is the option that says: For the MS SQL rule, change the Source to the security group ID attached to the application tier.
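As a concrete illustration of that change, the least-privilege inbound rule could be created with boto3 roughly as follows; both security group IDs are placeholders, since the real IDs are not given in the scenario.

import boto3

ec2 = boto3.client("ec2")

# Allow MS SQL (TCP 1433) only from the application tier's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db1234567890abcd",              # database tier security group (placeholder)
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 1433,
            "ToPort": 1433,
            "UserIdGroupPairs": [
                {
                    "GroupId": "sg-0abc1234567890def",   # application tier security group (placeholder)
                    "Description": "MS SQL from the application tier only",
                }
            ],
        }
    ],
)

The overly permissive 0.0.0.0/0 rule would then be removed with revoke_security_group_ingress.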
The option that says: For the MS SQL rule, change the Source to the EC2 instance IDs of the underlying instances of the Auto Scaling group is incorrect because using the EC2 instance IDs of the underlying instances of the Auto Scaling group as the source can cause intermittent issues. New instances will be added and old instances will be removed from the Auto Scaling group over time, which means that you have to manually update the security group setting once again. A better solution is to use the security group ID of the Auto Scaling group of EC2 instances.
The option that says: For the MS SQL rule, change the Source to the static AnyCast IP address attached to the application tier is incorrect because a static AnyCast IP address is primarily used for AWS Global Accelerator and not for security group configurations.
The option that says: For the MS SQL rule, change the Source to the Network ACL ID attached to the application tier is incorrect because you have to use the security group ID instead of the Network ACL ID of the application tier. Take note that the Network ACL covers the entire subnet, which means that other applications that use the same subnet will also be affected.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Security.html", "references": "" }, { "question": ": A company is building an internal application that processes loans, accruals, and interest rates for their clients. They require a storage service that is able to handle future increases in storage capacity of up to 16 TB and can provide the lowest-latency access to their data. The web application will be hosted in a single m5ad.24xlarge Reserved EC2 instance that will process and store data to the storage service. Which of the following storage services would you recommend?", "options": [ "A. EFS", "B. Storage Gateway", "C. EBS", "D. S3" ], "correct": "C. EBS", "explanation": "Explanation
Amazon Web Services (AWS) offers cloud storage services to support a wide range of storage workloads such as Amazon S3, EFS, and EBS. Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances. Amazon S3 is an object storage service. Amazon S3 makes data available through an Internet API that can be accessed anywhere. Amazon EBS is a block-level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance. You can also increase EBS storage for up to 16 TB or add new volumes for additional storage.
In this scenario, the company is looking for a storage service which can provide the lowest-latency access to their data, which will be fetched by a single m5ad.24xlarge Reserved EC2 instance. This type of workload can be supported better by using either EFS or EBS, but in this case, the latter is the most suitable storage service. As mentioned above, EBS provides the lowest-latency access to the data for your EC2 instance since the volume is directly attached to the instance. In addition, the scenario does not require concurrently-accessible storage since they only have one instance.
Hence, the correct answer is EBS.
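For illustration only, provisioning and attaching a 16 TiB EBS volume with boto3 could look roughly like the sketch below; the Availability Zone, volume type, instance ID, and device name are assumptions for the example.

import boto3

ec2 = boto3.client("ec2")

# Create a 16 TiB gp3 volume in the same Availability Zone as the instance (AZ is assumed).
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=16384,           # GiB; 16 TiB is the maximum size of a single gp3 volume
    VolumeType="gp3",
)

# Wait until the volume is available, then attach it to the application instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    Device="/dev/sdf",
)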
Storage Gateway is incorrect since this is primarily used to extend your on-premises storage to your AWS Cloud.
S3 is incorrect because although this is also highly available and highly scalable, it still does not provide the lowest-latency access to the data, unlike EBS. Remember that S3 does not reside within your VPC by default, which means the data will traverse the public Internet and may result in higher latency. You can set up a VPC Endpoint for S3, yet still, its latency is greater than that of EBS.
EFS is incorrect because the scenario does not require concurrently-accessible storage since the internal application is only hosted on one instance. Although EFS can provide low-latency data access to the EC2 instance as compared with S3, the storage service that can provide the lowest-latency access is still EBS.
References:
https://aws.amazon.com/ebs/
https://aws.amazon.com/efs/faq/
Check out this Amazon EBS Cheat Sheet:
https://tutorialsdojo.com/amazon-ebs/", "references": "" }, { "question": ": A company has a set of Linux servers running on multiple On-Demand EC2 Instances. The Audit team wants to collect and process the application log files generated from these servers for their report. Which of the following services is best to use in this case?", "options": [ "A. Amazon S3 Glacier Deep Archive for storing the application log files and AWS ParallelCluster for processing the log files.", "B. Amazon S3 for storing the application log files and Amazon Elastic MapReduce for processing the log files.", "C. A single On-Demand Amazon EC2 instance for both storing and processing the log files", "D. Amazon S3 Glacier for storing the application log files and Spot EC2 Instances for processing them." ], "correct": "B. Amazon S3 for storing the application log files and Amazon Elastic MapReduce for processing the log files.", "explanation": "Explanation
Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.
Hence, the correct answer is: Amazon S3 for storing the application log files and Amazon Elastic MapReduce for processing the log files.
The option that says: Amazon S3 Glacier for storing the application log files and Spot EC2 Instances for processing them is incorrect as Amazon S3 Glacier is used for data archival only.
The option that says: A single On-Demand Amazon EC2 instance for both storing and processing the log files is incorrect as an EC2 instance is not a recommended storage service. In addition, Amazon EC2 does not have a built-in data processing engine to process large amounts of data.
The option that says: Amazon S3 Glacier Deep Archive for storing the application log files and AWS ParallelCluster for processing the log files is incorrect because the long retrieval time of Amazon S3 Glacier Deep Archive makes this option unsuitable.
Moreover, AWS ParallelCluster is just an AWS-supported open-source cluster management tool that makes it easy for you to deploy and manage High-Performance Computing (HPC) clusters on AWS. ParallelCluster uses a simple text file to model and provision all the resources needed for your HPC applications in an automated and secure manner.
References:
http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-what-is-emr.html
https://aws.amazon.com/hpc/parallelcluster/
Check out this Amazon EMR Cheat Sheet:
https://tutorialsdojo.com/amazon-emr/", "references": "" }, { "question": ": A startup launched a new FTP server using an On-Demand EC2 instance in a newly created VPC with default settings. The server should not be accessible publicly but only through the IP address 175.45.116.100 and nowhere else. Which of the following is the most suitable way to implement this requirement?", "options": [ "A. Create a new inbound rule in the security group of the EC2 instance with the following", "B. Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following", "C. Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following", "D. Create a new inbound rule in the security group of the EC2 instance with the following details:" ], "correct": "A. Create a new inbound rule in the security group of the EC2 instance with the following", "explanation": "Explanation
The FTP protocol uses TCP via ports 20 and 21. This should be configured in your security groups or in your Network ACL inbound rules. As required by the scenario, you should only allow the individual IP of the client and not the entire network. Therefore, in the Source, the proper CIDR notation should be used. The /32 denotes one IP address and the /0 refers to the entire network.
It is stated in the scenario that you launched the EC2 instance in a newly created VPC with default settings. Your VPC automatically comes with a modifiable default network ACL. By default, it allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic. Hence, you actually don't need to explicitly add inbound rules to your Network ACL to allow inbound traffic, if your VPC has a default setting.
The below option is incorrect:
Create a new inbound rule in the security group of the EC2 instance with the following details:
Protocol: UDP
Port Range: 20 - 21
Source: 175.45.116.100/32
Although the configuration of the Security Group is valid, the provided Protocol is incorrect. Take note that FTP uses TCP and not UDP.
The below option is also incorrect:
Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details:
Protocol: TCP
Port Range: 20 - 21
Source: 175.45.116.100/0
Allow/Deny: ALLOW
Although setting up an inbound Network ACL is valid, the source is invalid since it must be an IPv4 or IPv6 CIDR block. In the provided IP address, the /0 refers to the entire network and not a specific IP address. In addition, it is stated in the scenario that the newly created VPC has default settings and, by default, the Network ACL allows all traffic. This means that there is actually no need to configure your Network ACL.
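For reference, the correct security group rule described at the start of this explanation could be added with boto3 roughly as follows; the security group ID is a placeholder.

import boto3

ec2 = boto3.client("ec2")

# Allow FTP (TCP 20-21) only from the single permitted client IP address.
ec2.authorize_security_group_ingress(
    GroupId="sg-0f1e2d3c4b5a69788",   # FTP server security group (placeholder)
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 20,
            "ToPort": 21,
            "IpRanges": [
                {"CidrIp": "175.45.116.100/32", "Description": "Only allowed FTP client"}
            ],
        }
    ],
)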
Likewise, the below option is also incorrect:
Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details:
Protocol: UDP
Port Range: 20 - 21
Source: 175.45.116.100/0
Allow/Deny: ALLOW
Just like the above, the source is invalid. Take note that FTP uses TCP and not UDP, which is one of the reasons why this option is wrong. In addition, it is stated in the scenario that the newly created VPC has default settings and, by default, the Network ACL allows all traffic. This means that there is actually no need to configure your Network ACL.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A Solutions Architect is designing a setup for a database that will run on Amazon RDS for MySQL. He needs to ensure that the database can automatically fail over to an RDS instance to continue operating in the event of failure. The architecture should also be as highly available as possible. Which among the following actions should the Solutions Architect do?", "options": [ "A. Create five cross-region read replicas in each region. In the event of an Availability Zone", "B. Create five read replicas across different availability zones. In the event of an Availability", "C. Create a standby replica in another availability zone by enabling Multi-AZ deployment.", "D. Create a read replica in the same region where the DB instance resides. In addition, create a read replica in a different region to survive a region's failure. In the event of an Availability" ], "correct": "C. Create a standby replica in another availability zone by enabling Multi-AZ deployment.", "explanation": "Explanation
You can run an Amazon RDS DB instance in several AZs with a Multi-AZ deployment. Amazon automatically provisions and maintains a secondary standby DB instance in a different AZ. Your primary DB instance is synchronously replicated across AZs to the secondary instance to provide data redundancy, failover support, eliminate I/O freezes, and minimize latency spikes during system backups.
As described in the scenario, the architecture must meet two requirements:
- The database should automatically fail over to an RDS instance in case of failures.
- The architecture should be as highly available as possible.
Hence, the correct answer is: Create a standby replica in another availability zone by enabling Multi-AZ deployment because it meets both of the requirements.
The option that says: Create a read replica in the same region where the DB instance resides. In addition, create a read replica in a different region to survive a region's failure. In the event of an Availability Zone outage, promote any replica to become the primary instance is incorrect. Although this architecture provides higher availability since it can survive a region failure, it still does not meet the first requirement since the process is not automated. The architecture should also support automatic failover to an RDS instance in case of failures.
Both of the following options are incorrect:
- Create five read replicas across different availability zones.
In the event of an Availability Zone outage, promote any replica to become the primary instance
- Create five cross-region read replicas in each region. In the event of an Availability Zone outage, promote any replica to become the primary instance
Although it is possible to achieve high availability with these architectures by promoting a read replica into the primary instance in the event of failure, they do not support automatic failover to an RDS instance, which is also a requirement in the problem.
References:
https://aws.amazon.com/rds/features/multi-az/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
Check out this Amazon RDS Cheat Sheet:
https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/", "references": "" }, { "question": ": A large multinational investment bank has a web application that requires a minimum of 4 EC2 instances to run to ensure that it can cater to its users across the globe. You are instructed to ensure the fault tolerance of this system. Which of the following is the best option?", "options": [ "A. Deploy an Auto Scaling group with 4 instances in one Availability Zone behind an", "B. Deploy an Auto Scaling group with 2 instances in each of 2 Availability Zones behind an", "C. Deploy an Auto Scaling group with 2 instances in each of 3 Availability Zones behind an", "D. Deploy an Auto Scaling group with 1 instance in each of 4 Availability Zones behind an" ], "correct": "C. Deploy an Auto Scaling group with 2 instances in each of 3 Availability Zones behind an", "explanation": "Explanation
Fault Tolerance is the ability of a system to remain in operation even if some of the components used to build the system fail. In AWS, this means that in the event of server fault or system failures, the number of running EC2 instances should not fall below the minimum number of instances required by the system for it to work properly. So if the application requires a minimum of 4 instances, there should be at least 4 instances running in case there is an outage in one of the Availability Zones or if there are server issues.
One of the differences between Fault Tolerance and High Availability is that the former refers to the minimum number of running instances. For example, you have a system that requires a minimum of 4 running instances and currently has 6 running instances deployed in two Availability Zones. There was a component failure in one of the Availability Zones which knocks out 3 instances. In this case, the system can still be regarded as Highly Available since there are still instances running that can accommodate the requests. However, it is not Fault-Tolerant since the required minimum of four instances has not been met.
Hence, the correct answer is: Deploy an Auto Scaling group with 2 instances in each of 3 Availability Zones behind an Application Load Balancer.
The option that says: Deploy an Auto Scaling group with 2 instances in each of 2 Availability Zones behind an Application Load Balancer is incorrect because if one Availability Zone went out, there would only be 2 running instances available out of the required 4 minimum instances. Although the Auto Scaling group can spin up another 2 instances, the fault tolerance of the web application has already been compromised.
76 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam The option that says: Deploy an Auto Scaling group with 4 instances in one Availability Zone behind an Application Load Balancer is incorrect because i f the Availability Zone went out, there will be no running instance available to accommodate the reque st. The option that says: Deploy an Auto Scaling group with 1 instance in each of 4 Availability Zones behind an Application Load Balancer is incorrect be cause if one Availability Zone went out, there will only be 3 instances available to accommodate the re quest. References: https://media.amazonwebservices.com/AWS_Building_Fa ult_Tolerant_Applications.pdf https://d1.awsstatic.com/whitepapers/aws-building-f ault-tolerant-applications.pdf AWS Overview Cheat Sheets: https://tutorialsdojo.com/aws-cheat-sheets-overview / Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", "references": "" }, { "question": ": There is a new compliance rule in your company that audits every Windows and Linux EC2 instances each month to view any performance issues. They have mor e than a hundred EC2 instances running in production, and each must have a logging function t hat collects various system details regarding that instance. The SysOps team will periodically review these logs and analyze their contents using AWS Certkingdom Analytics tools, and the result will need to be ret ained in an S3 bucket. In this scenario, what is the most efficient way to collect and analyze logs from the instances with m inimal effort?", "options": [ "A. Install AWS Inspector Agent in each instance w hich will collect and push data to", "B. Install AWS SDK in each instance and create a custom daemon script that would collect and", "C. Install the AWS Systems Manager Agent (SSM Age nt) in each instance which will", "D. Install the unified CloudWatch Logs agent in e ach instance which will automatically collect" ], "correct": "D. Install the unified CloudWatch Logs agent in e ach instance which will automatically collect", "explanation": "Explanation To collect logs from your Amazon EC2 instances and on-premises servers into CloudWatch Logs, AWS offers both a new unified CloudWatch agent, and an older CloudWatch Logs agent. It is recommended to use the unified CloudWatch agent which has the foll owing advantages: - You can collect both logs and advanced metrics wi th the installation and configuration of just one a gent. - The unified agent enables the collection of logs from servers running Windows Server. - If you are using the agent to collect CloudWatch metrics, the unified agent also enables the collect ion of additional system metrics, for in-guest visibility. - The unified agent provides better performance. Certkingdom CloudWatch Logs Insights enables you to interactive ly search and analyze your log data in Amazon CloudWatch Logs. You can perform queries to help yo u quickly and effectively respond to operational issues. If an issue occurs, you can use CloudWatch Logs Insights to identify potential causes and vali date deployed fixes. CloudWatch Logs Insights includes a purpose-built q uery language with a few simple but powerful commands. CloudWatch Logs Insights provides sample queries, command descriptions, query autocompletion, and log field discovery to help you get started quickly. Sample queries are included f or several types of AWS service logs. 
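As a rough illustration of how such a query could be run programmatically, the boto3 sketch below starts a Logs Insights query and prints the matched log lines; the log group name is a placeholder and assumes the unified CloudWatch agent is already shipping logs to it.

import time
import boto3

logs = boto3.client("logs")

# Query the most recent hour of a hypothetical log group populated by the
# unified CloudWatch agent.
now = int(time.time())
query = logs.start_query(
    logGroupName="/ec2/production/system-logs",   # placeholder log group name
    startTime=now - 3600,
    endTime=now,
    queryString="fields @timestamp, @message | sort @timestamp desc | limit 20",
)

# Poll until the query finishes, then print the results.
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in results.get("results", []):
    print({field["field"]: field["value"] for field in row})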
The option that says: Install AWS SDK in each insta nce and create a custom daemon script that would collect and push data to CloudWatch Logs periodical ly. Enable CloudWatch detailed monitoring and 78 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam use CloudWatch Logs Insights to analyze the log dat a of all instances is incorrect. Although this is a valid solution, this entails a lot of effort to imp lement as you have to allocate time to install the AWS SDK to each instance and develop a custom monitoring so lution. Remember that the question is specifically looking for a solution that can be implemented with minimal effort. In addition, it is unnecessary and not cost-efficient to enable detailed monitoring in Clo udWatch in order to meet the requirements of this scenario since this can be done using CloudWatch Lo gs. The option that says: Install the AWS Systems Manag er Agent (SSM Agent) in each instance which will automatically collect and push data to CloudWa tch Logs. Analyze the log data with CloudWatch Logs Insights is incorrect. Although this is also a valid solution, it is more efficient to use CloudW atch agent than an SSM agent. Manually connecting to an instance to view log files and troubleshoot an issu e with SSM Agent is time-consuming hence, for more ef ficient instance monitoring, you can use the CloudWatch Agent instead to send the log data to Am azon CloudWatch Logs. The option that says: Install AWS Inspector Agent i n each instance which will collect and push data to CloudWatch Logs periodically. Set up a CloudWatch d ashboard to properly analyze the log data of all instances is incorrect because AWS Inspector is simply a security assessments service which only h elps you in checking for unintended network accessibilit y of your EC2 instances and for vulnerabilities on those EC2 instances. Furthermore, setting up an Amazon Cl oudWatch dashboard is not suitable since its primarily used for scenarios where you have to moni tor your resources in a single view, even those resources that are spread across different AWS Regi ons. It is better to use CloudWatch Logs Insights instead since it enables you to interactively searc h and analyze your log data. References: https://docs.aws.amazon.com/AmazonCloudWatch/latest /logs/WhatIsCloudWatchLogs.html Certkingdom https://docs.aws.amazon.com/systems-manager/latest/ userguide/monitoring-ssm-agent.html https://docs.aws.amazon.com/AmazonCloudWatch/latest /logs/AnalyzingLogData.html Amazon CloudWatch Overview: https://youtube.com/watch?v=q0DmxfyGkeU Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/ CloudWatch Agent vs SSM Agent vs Custom Daemon Scri pts: https://tutorialsdojo.com/cloudwatch-agent-vs-ssm-a gent-vs-custom-daemon-scripts/ 79 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam", "references": "" }, { "question": ": A company is using an Auto Scaling group which is c onfigured to launch new t2.micro EC2 instances when there is a significant load increase in the ap plication. To cope with the demand, you now need to replace those instances with a larger t2.2xlarge in stance type. How would you implement this change?", "options": [ "A. Change the instance type of each EC2 instance manually.", "B. Create a new launch configuration with the new instance type and update the Auto Scaling Group.", "C. Just change the instance type to t2.2xlarge in the current launch configuration", "D. 
Create another Auto Scaling Group and attach t he new instance type." ], "correct": "B. Create a new launch configuration with the new instance type and update the Auto Scaling Group.", "explanation": "Explanation You can only specify one launch configuration for a n Auto Scaling group at a time, and you can't modif y a launch configuration after you've created it. There fore, if you want to change the launch configuratio n for an Auto Scaling group, you must create a launch con figuration and then update your Auto Scaling group with the new launch configuration. Certkingdom 80 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Hence, the correct answer is: Create a new launch c onfiguration with the new instance type and update the Auto Scaling Group. The option that says: Just change the instance type to t2.2xlarge in the current launch configuration is incorrect because you can't change your launch conf iguration once it is created. You have to create a new one instead. The option that says: Create another A uto Scaling Group and attach the new instance type Certkingdom is incorrect because you can't directly attach or d eclare the new instance type to your Auto Scaling g roup. You have to create a new launch configuration first , with a new instance type, then attach it to your existing Auto Scaling group. The option that says: Change th e instance type of each EC2 instance manually is incorrect because you can't directly change the ins tance type of your EC2 instance. This should be don e by creating a brand new launch configuration then atta ching it to your existing Auto Scaling group. References: https://docs.aws.amazon.com/autoscaling/ec2/usergui de/LaunchConfiguration.html https://docs.aws.amazon.com/autoscaling/ec2/usergui de/create-asg.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/ 81 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam", "references": "" }, { "question": ": A company has a two-tier environment in its on-prem ises data center which is composed of an applicatio n tier and database tier. You are instructed to migra te their environment to the AWS cloud, and to desig n the subnets in their VPC with the following requirement s: 1. There is an application load balancer that would distribute the incoming traffic among the servers in the application tier. 2. The application tier and the d atabase tier must not be accessible from the public Internet. The application tier should only accept traffic com ing from the load balancer. 3. The database tier co ntains very sensitive data. It must not share the same sub net with other AWS resources and its custom route t able with other instances in the environment. 4. The env ironment must be highly available and scalable to handle a surge of incoming traffic over the Interne t. How many subnets should you create to meet the abov e requirements?", "options": [ "A. 4", "B. 6", "C. 3", "D. 2" ], "correct": "B. 6", "explanation": "Explanation The given scenario indicated 4 requirements that sh ould be met in order to successfully migrate their two- tier environment from their on-premises data center to AWS Cloud. The first requirement means that you have to use an application load balancer (ALB) to d istribute the incoming traffic to your application servers. Certkingdom The second requirement specifies that both your app lication and database tier should not be accessible from the public Internet. 
This means that you could crea te a single private subnet for both of your applica tion and database tier. However, the third requirement m entioned that the database tier should not share th e same subnet with other AWS resources to protect its sensitive data. This means that you should provisi on one private subnet for your application tier and an other private subnet for your database tier. The last requirement alludes to the need for using at least two Availability Zones to achieve high availability. This means that you have to distribut e your application servers to two AZs as well as yo ur database which can be set up with a master-slave co nfiguration to properly replicate the data between two zones. If you have more than one private subnet in the sam e Availability Zone that contains instances that ne ed to be registered with the load balancer, you only need to create one public subnet. You need only one pub lic subnet per Availability Zone; you can add the priva te instances in all the private subnets that reside in that particular Availability Zone. 82 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Since you have a public internet-facing load balanc er that has a group of backend Amazon EC2 instances that are deployed in a private subnet, you must cre ate the corresponding public subnets in the same Availability Zones. This new public subnet is on to p of the private subnet that is used by your privat e EC2 instances. Lastly, you should associate these publi c subnets to the Internet-facing load balancer to c omplete the setup. To summarize, we need to have one private subnet fo r the application tier and another one for the data base Certkingdom tier. We then need to create another public subnet in the same Availability Zone where the private EC2 instances are hosted, in order to properly connect the public Internet-facing load balancer to your in stances. This means that we have to use a total of 3 subnets consisting of 2 private subnets and 1 public subne t. To meet the requirement of high availability, we ha ve to deploy the stack to two Availability Zones. T his means that you have to double the number of subnets you are using. Take note as well that you must cre ate the corresponding public subnet in the same Availab ility Zone of your private EC2 servers in order for it to properly communicate with the load balancer. Hence, the correct answer is 6 subnets. References: https://docs.aws.amazon.com/vpc/latest/userguide/VP C_Scenario2.html 83 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam https://aws.amazon.com/premiumsupport/knowledge-cen ter/public-load-balancer-private-ec2/ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A financial firm is designing an application archit ecture for its online trading platform that must ha ve high availability and fault tolerance. Their Solutions A rchitect configured the application to use an Amazo n S3 bucket located in the us-east-1 region to store lar ge amounts of intraday financial data. The stored f inancial data in the bucket must not be affected even if the re is an outage in one of the Availability Zones or if there's a regional service failure. What should the Architect do to avoid any costly se rvice disruptions and ensure data durability?", "options": [ "A. Create a Lifecycle Policy to regularly backup the S3 bucket to Amazon Glacier.", "B. 
Copy the S3 bucket to an EBS-backed EC2 instan ce.", "C. Create a new S3 bucket in another region and c onfigure Cross-Account Access to the bucket", "D. Enable Cross-Region Replication." ], "correct": "D. Enable Cross-Region Replication.", "explanation": "Explanation In this scenario, you need to enable Cross-Region R eplication to ensure that your S3 bucket would not be Certkingdom affected even if there is an outage in one of the A vailability Zones or a regional service failure in us-east-1. When you upload your data in S3, your objects are r edundantly stored on multiple devices across multip le facilities within the region only, where you create d the bucket. Thus, if there is an outage on the en tire region, your S3 bucket will be unavailable if you d o not enable Cross-Region Replication, which should make your data available to another region. 84 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam certkingdom. Note that an Availability Zone (AZ) is more related with Amazon EC2 instances rather than Amazon S3 so if there is any outage in the AZ, the S3 bucket is usually not affected but only the EC2 instances dep loyed on that zone. certkingdom. Hence, the correct answer is: Enable Cross-Region R eplication. The option that says: Copy the S3 bucket to an EBS- backed EC2 instance is incorrect because EBS is not as durable as Amazon S3. Moreover, if the Availabil ity Zone where the volume is hosted goes down then the data will also be inaccessible. certkingdom. The option that says: Create a Lifecycle Policy to regularly backup the S3 bucket to Amazon Glacier is incorrect because Glacier is primarily used for dat a archival. You also need to replicate your data to another region for better durability. The option that says: Create a new S3 bucket in ano ther region and configure Cross-Account Access to Certkingdom the bucket located in us-east-1 is incorrect becaus e Cross-Account Access in Amazon S3 is primarily us ed if you want to grant access to your objects to anot her AWS account, and not just to another AWS Region . certkingdom. For example, Account MANILA can grant another AWS a ccount (Account CEBU) permission to access its resources such as buckets and objects. S3 Cross-Acc ount Access does not replicate data from one region to another. A better solution is to enable Cross-Regio n Replication (CRR) instead. References: https://aws.amazon.com/s3/faqs/ certkingdom. https://aws.amazon.com/s3/features/replication/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ 85 of 130 certkingdom. Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam", "references": "" }, { "question": ": A Solutions Architect is designing a monitoring app lication which generates audit logs of all operatio nal activities of the company's cloud infrastructure. T heir IT Security and Compliance team mandates that the application retain the logs for 5 years before the data can be deleted. How can the Architect meet the above requirement?", "options": [ "A. Store the audit logs in a Glacier vault and us e the Vault Lock feature.", "B. Store the audit logs in an Amazon S3 bucket an d enable Multi-Factor Authentication Delete", "C. Store the audit logs in an EBS volume and then take EBS snapshots every month.", "D. Store the audit logs in an EFS volume and use Network File System version 4 (NFSv4) file-" ], "correct": "A. 
Store the audit logs in a Glacier vault and us e the Vault Lock feature.", "explanation": "Explanation An Amazon S3 Glacier (Glacier) vault can have one r esource-based vault access policy and one Vault Lock policy attached to it. A Vault Lock policy is a vault access policy that you can lock. Using a Va ult Lock policy can help you enforce regulatory and com pliance requirements. Amazon S3 Glacier provides a set of API operations for you to manage the Vault L ock policies. certkingdom. Certkingdom certkingdom. certkingdom. 86 of 130 certkingdom. Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam As an example of a Vault Lock policy, suppose that you are required to retain archives for one year be fore you can delete them. To implement this requirement, you can create a Vault Lock policy that denies use rs permissions to delete an archive until the archive has existed for one year. You can test this policy before locking it down. After you lock the policy, the pol icy becomes immutable. For more information about t he locking process, see Amazon S3 Glacier Vault Lock. If you want to manage other user permissions that c an be changed, you can use the vault access policy Amazon S3 Glacier supports the following archive op erations: Upload, Download, and Delete. Archives are immutable and cannot be modified. Hence, the co rrect answer is to store the audit logs in a Glacie r vault and use the Vault Lock feature. certkingdom. Storing the audit logs in an EBS volume and then ta king EBS snapshots every month is incorrect because this is not a suitable and secure solution. Anyone who has access to the EBS Volume can simply delete and modify the audit logs. Snapshots can be deleted too. Storing the audit logs in an Amazon S3 bucket and e nabling Multi-Factor Authentication Delete (MFA Delete) on the S3 bucket is incorrect because this would still not meet the requirement. If someo ne certkingdom. has access to the S3 bucket and also has the proper MFA privileges then the audit logs can be edited. Storing the audit logs in an EFS volume and using N etwork File System version 4 (NFSv4) file- locking mechanism is incorrect because the data int egrity of the audit logs can still be compromised i f it is stored in an EFS volume with Network File System ve rsion 4 (NFSv4) file-locking mechanism and hence, not suitable as storage for the files. Although it will provide some sort of security, the file lock c an still be overridden and the audit logs might be edited by so meone else. certkingdom. References: https://docs.aws.amazon.com/amazonglacier/latest/de v/vault-lock.html Certkingdom https://docs.aws.amazon.com/amazonglacier/latest/de v/vault-lock-policy.html https://aws.amazon.com/blogs/aws/glacier-vault-lock / certkingdom. Amazon S3 and S3 Glacier Overview: https://youtube.com/watch?v=1ymyeN2tki4 Check out this Amazon S3 Glacier Cheat Sheet: certkingdom. https://tutorialsdojo.com/amazon-glacier/", "references": "" }, { "question": ": 87 of 130 certkingdom. Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam A web application is hosted in an Auto Scaling grou p of EC2 instances deployed across multiple Availability Zones behind an Application Load Balan cer. You need to implement an SSL solution for your system to improve its security which is why you req uested an SSL/TLS certificate from a third-party certificate authority (CA). Where can you safely import the SSL/TLS certificate of your application? (Select TWO.)", "options": [ "A. 
An S3 bucket configured with server-side encryption with customer-provided encryption", "B. AWS Certificate Manager", "C. A private S3 bucket with versioning enabled", "D. CloudFront" ], "correct": "", "explanation": "Explanation If you got your certificate from a third-party CA, import the certificate into ACM or upload it to the IAM certificate store. Hence, AWS Certificate Manager and IAM certificate store are the correct answers. ACM lets you import third-party certificates from the ACM console, as well as programmatically. If ACM is not available in your region, use the AWS CLI to upload your third-party certificate to the IAM certificate store. A private S3 bucket with versioning enabled and an S3 bucket configured with server-side encryption with customer-provided encryption keys (SSE-C) are both incorrect because S3 is not a suitable service to store an SSL certificate. CloudFront is incorrect. Although you can upload certificates to CloudFront, it does not mean that you can import SSL certificates on it. You would not be able to export the certificate that you have loaded in CloudFront nor assign it to your EC2 or ELB instances, as it would be tied to a single CloudFront distribution.", "references": "https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cnames-and-https-procedures.html#cnames-and-https-uploading-certificates Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/ AWS Security Services Overview - Secrets Manager, ACM, Macie: https://youtube.com/watch?v=ogVamzF2Dzk Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate-saa-c02/" }, { "question": ": A company has a web application hosted in an On-Demand EC2 instance. You are creating a shell script that needs the instance's public and private IP addresses. What is the best way to get the instance's associated IP addresses that your shell script can use?", "options": [ "A. By using a Curl or Get Command to get the latest metadata information from", "B. By using a CloudWatch metric.", "C. By using a Curl or Get Command to get the latest user data information from", "D. By using IAM." ], "correct": "A. By using a Curl or Get Command to get the latest metadata information from", "explanation": "Explanation Instance metadata is data about your EC2 instance that you can use to configure or manage the running instance. Because your instance metadata is available from your running instance, you do not need to use the Amazon EC2 console or the AWS CLI. This can be helpful when you're writing scripts to run from your instance. For example, you can access the local IP address of your instance from instance metadata to manage a connection to an external application.
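To tie this to the scenario, the following Python sketch retrieves both addresses from the instance metadata endpoint described below when run on the instance itself. It uses the IMDSv2 token flow; the same meta-data paths also work with a plain GET where IMDSv1 is still enabled, and the public-ipv4 path only exists if the instance actually has a public IP.

import urllib.request

METADATA = "http://169.254.169.254/latest"

# IMDSv2: request a short-lived session token first, then pass it on each call.
token_req = urllib.request.Request(
    f"{METADATA}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
)
token = urllib.request.urlopen(token_req).read().decode()

def metadata(path: str) -> str:
    req = urllib.request.Request(
        f"{METADATA}/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req).read().decode()

private_ip = metadata("local-ipv4")
public_ip = metadata("public-ipv4")   # only present if the instance has a public IP
print(private_ip, public_ip)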
Certkingdom To view the private IPv4 address, public IPv4 addre ss, and all other categories of instance metadata f rom within a running instance, use the following URL: http://169.254.169.254/latest/meta-data/", "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ ec2-instance-metadata.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/ 90 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/" }, { "question": ": A company needs to integrate the Lightweight Direct ory Access Protocol (LDAP) directory service from the on-premises data center to the AWS VPC using IA M. The identity store which is currently being used is not compatible with SAML. Which of the following provides the most valid appr oach to implement the integration?", "options": [ "A. Use an IAM policy that references the LDAP ide ntifiers and AWS credentials.", "B. Use AWS Single Sign-On (SSO) service to enable single sign-on between AWS and your LDAP.", "C. Develop an on-premises custom identity broker application and use STS to issue short-lived", "D. Use IAM roles to rotate the IAM credentials wh enever LDAP credentials are updated." ], "correct": "C. Develop an on-premises custom identity broker application and use STS to issue short-lived", "explanation": "Explanation If your identity store is not compatible with SAML 2.0 then you can build a custom identity broker application to perform a similar function. The brok er application authenticates users, requests tempor ary credentials for users from AWS, and then provides t hem to the user to access AWS resources. Certkingdom The application verifies that employees are signed into the existing corporate network's identity and authentication system, which might use LDAP, Active Directory, or another system. The identity broker application then obtains temporary security credent ials for the employees. To get temporary security credentials, the identity broker application calls either AssumeRole or GetFederationToken to obtain temporary security cre dentials, depending on how you want to manage the policies for users and when the temporary credentia ls should expire. The call returns temporary securi ty credentials consisting of an AWS access key ID, a s ecret access key, and a session token. The identity broker application makes these temporary security c redentials available to the internal company applic ation. The app can then use the temporary credentials to m ake calls to AWS directly. The app caches the credentials until they expire, and then requests a new set of temporary credentials. 91 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Using an IAM policy that references the LDAP identi fiers and AWS credentials is incorrect because using an IAM policy is not enough to integrate your LDAP service to IAM. You need to use SAML, STS, or a custom identity broker. Using AWS Single Sign-On (SSO) service to enable si ngle sign-on between AWS and your LDAP is incorrect because the scenario did not require SSO and in addition, the identity store that you are us ing is not SAML-compatible. 
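For illustration, the sketch below shows the STS call a custom identity broker could make once it has already authenticated a user against LDAP. The role ARN and account ID are placeholders, and GetFederationToken could be used instead of AssumeRole depending on how policies and credential expiry need to be managed.

import boto3

sts = boto3.client("sts")

def issue_temporary_credentials(ldap_username: str) -> dict:
    # The broker is assumed to have already authenticated this user against
    # the on-premises LDAP directory before this function is called.
    response = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/LdapFederatedAccess",  # placeholder role
        RoleSessionName=ldap_username,  # must satisfy STS session-name character rules
        DurationSeconds=3600,
    )
    # Short-lived access key, secret key, and session token handed back
    # to the internal application.
    return response["Credentials"]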
Using IAM roles to rotate the IAM credentials whene ver LDAP credentials are updated is incorrect because manually rotating the IAM credentials is no t an optimal solution to integrate your on-premises and Certkingdom VPC network. You need to use SAML, STS, or a custom identity broker. References: https://docs.aws.amazon.com/IAM/latest/UserGuide/id _roles_common-scenarios_federated-users.html https://aws.amazon.com/blogs/aws/aws-identity-and-a ccess-management-now-with-identity-federation/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate/", "references": "" }, { "question": ": 92 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam A company plans to deploy an application in an Amaz on EC2 instance. The application will perform the following tasks: - Read large datasets from an Amazon S3 bucket. - Execute multi-stage analysis on the datasets. - Save the results to Amazon RDS. During multi-stage analysis, the application will s tore a large number of temporary files in the insta nce storage. As the Solutions Architect, you need to re commend the fastest storage option with high I/O performance for the temporary files. Which of the following options fulfills this requir ement?", "options": [ "A. Configure RAID 1 in multiple instance store vo lumes.", "B. Attach multiple Provisioned IOPS SSD volumes i n the instance.", "C. Configure RAID 0 in multiple instance store vo lumes.", "D. Enable Transfer Acceleration in Amazon S3." ], "correct": "C. Configure RAID 0 in multiple instance store vo lumes.", "explanation": "Explanation Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) Cloud. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or Certkingdom down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic . 93 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam RAID 0 configuration enables you to improve your st orage volumes' performance by distributing the I/O across the volumes in a stripe. Therefore, if you a dd a storage volume, you get the straight addition of throughput and IOPS. This configuration can be impl emented on both EBS or instance store volumes. Since the main requirement in the scenario is stora ge performance, you need to use an instance store volume. It uses NVMe or SATA-based SSD to deliver h igh random I/O performance. This type of storage is a good option when you need storage with very lo w latency, and you don't need the data to persist w hen the instance terminates. Hence, the correct answer is: Configure RAID 0 in m ultiple instance store volumes. Certkingdom The option that says: Enable Transfer Acceleration in Amazon S3 is incorrect because S3 Transfer Acceleration is mainly used to speed up the transfe r of gigabytes or terabytes of data between clients and an S3 bucket. The option that says: Configure RAID 1 in multiple instance volumes is incorrect because RAID 1 configuration is used for data mirroring. You need to configure RAID 0 to improve the performance of your storage volumes. The option that says: Attach multiple Provisioned I OPS SSD volumes in the instance is incorrect because persistent storage is not needed in the sce nario. 
Also, instance store volumes have greater I/ O performance than EBS volumes. References: 94 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /InstanceStorage.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /raid-config.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", "references": "" }, { "question": ": A Solutions Architect is working for a large global media company with multiple office locations all around the world. The Architect is instructed to bu ild a system to distribute training videos to all e mployees. Using CloudFront, what method would be used to serv e content that is stored in S3, but not publicly accessible from S3 directly?", "options": [ "A. Create an Origin Access Identity (OAI) for Clo udFront and grant access to the objects in", "B. Create a web ACL in AWS WAF to block any publi c S3 access and attach it to the Amazon", "C. Create an Identity and Access Management (IAM) user for CloudFront and grant access to", "D. Create an S3 bucket policy that lists the Clou dFront distribution ID as the principal and the" ], "correct": "A. Create an Origin Access Identity (OAI) for Clo udFront and grant access to the objects in", "explanation": "Explanation Certkingdom When you create or update a distribution in CloudFr ont, you can add an origin access identity (OAI) an d automatically update the bucket policy to give the origin access identity permission to access your bu cket. Alternatively, you can choose to manually change th e bucket policy or change ACLs, which control permissions on individual objects in your bucket. 95 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam You can update the Amazon S3 bucket policy using ei ther the AWS Management Console or the Amazon S3 API: - Grant the CloudFront origin access identity the a pplicable permissions on the bucket. - Deny access to anyone that you don't want to have access using Amazon S3 URLs. Hence, the correct answer is: Create an Origin Acce ss Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI. The option that says: Create an Identity and Access Management (IAM) user for CloudFront and Certkingdom grant access to the objects in your S3 bucket to th at IAM user is incorrect because you cannot directl y create an IAM User for a specific Amazon CloudFront distribution. You have to use an origin access identity (OAI) instead. The option that says: Create an S3 bucket policy th at lists the CloudFront distribution ID as the principal and the target bucket as the Amazon Resou rce Name (ARN) is incorrect because setting up an Amazon S3 bucket policy won't suffice. You have to first create an OAI in C loudFront and use that OAI as an authorized user in your Amazon S3 bucket. The option that says: Create a web ACL in AWS WAF t o block any public S3 access and attach it to the Amazon CloudFront distribution is incorrect bec ause AWS WAF is primarily used to protect your applications from common web vulnerabilities and no t for ensuring exclusive access to CloudFront. 
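As a rough sketch of the correct approach, the code below creates an origin access identity and then restricts the bucket policy so that only this OAI can read the objects. The bucket name and caller reference are placeholders; in a real setup the OAI must also be attached to the S3 origin of the CloudFront distribution.

import json
import boto3

cloudfront = boto3.client("cloudfront")
s3 = boto3.client("s3")

BUCKET = "training-videos-bucket"   # placeholder bucket name

# Create the origin access identity that the distribution will use.
oai = cloudfront.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": "training-videos-oai-1",   # placeholder unique reference
        "Comment": "OAI for the training video bucket",
    }
)["CloudFrontOriginAccessIdentity"]

# Allow only that OAI (via its canonical user ID) to read objects in the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"CanonicalUser": oai["S3CanonicalUserId"]},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))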
96 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam", "references": "https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/private-content-restricting- access-to-s3.html#private-content-granting-permissi ons-to-oai Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/ S3 Pre-signed URLs vs CloudFront Signed URLs vs Ori gin Access Identity (OAI) https://tutorialsdojo.com/s3-pre-signed-urls-vs-clo udfront-signed-urls-vs-origin-access-identity-oai/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/" }, { "question": ": A company is looking to store their confidential fi nancial files in AWS which are accessed every week. The Architect was instructed to set up the storage system which uses envelope encryption and automates key rotation. It should also provide an audit trail that shows who used the encryption key and by whom for security purposes. Which combination of actions should the Architect i mplement to satisfy the requirement in the most cos t- effective way? (Select TWO.)", "options": [ "A. Use Amazon S3 Glacier Deep Archive to store th e data.", "B. Use Amazon S3 to store the data.", "C. Amazon Certificate Manager", "D. Configure Server-Side Encryption with AWS KMS- Managed Keys (SSE-KMS)." ], "correct": "", "explanation": "Explanation Server-side encryption is the encryption of data at its destination by the application or service that receives it. AWS Key Management Service (AWS KMS) is a servi ce that combines secure, highly available hardware and software to provide a key management s ystem scaled for the cloud. Amazon S3 uses AWS KMS customer master keys (CMKs) to encrypt your Ama zon S3 objects. SSE-KMS encrypts only the 97 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam object data. Any object metadata is not encrypted. If you use customer-managed CMKs, you use AWS KMS via the AWS Management Console or AWS KMS APIs to centrally create encryption keys, define the policies that control how keys can be used, and audit key usage to prove that they are being used correctly. You can use these keys to protect your d ata in Amazon S3 buckets. A customer master key (CMK) is a logical representa tion of a master key. The CMK includes metadata, such as the key ID, creation date, description, and key state. The CMK also contains the key material used to encrypt and decrypt data. You can use a CMK to e ncrypt and decrypt up to 4 KB (4096 bytes) of data. Typically, you use CMKs to generate, encrypt, and d ecrypt the data keys that you use outside of AWS KMS to encrypt your data. This strategy is known as envelope encryption. You have three mutually exclusive options depending on how you choose to manage the encryption keys: Use Server-Side Encryption with Amazon S3-Managed K eys (SSE-S3) Each object is encrypted with Certkingdom a unique key. As an additional safeguard, it encryp ts the key itself with a master key that it regular ly rotates. Amazon S3 server-side encryption uses one of the st rongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data . Use Server-Side Encryption with Customer Master Key s (CMKs) Stored in AWS Key Management Service (SSE-KMS) Similar to SSE-S3, but with some additional benefits and charges for using this service. There are separate permissions for the use of a CMK that provides added protection against unauthorized access of your objects in Amazon S3. 
S SE-KMS also provides you with an audit trail that shows when your CMK was used and by whom. Additiona lly, you can create and manage customer- managed CMKs or use AWS managed CMKs that are uniqu e to you, your service, and your Region. Use Server-Side Encryption with Customer-Provided K eys (SSE-C) You manage the encryption keys and Amazon S3 manages the encryption, as it writes to disks, and decryption when you access your objec ts. 98 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam In the scenario, the company needs to store financi al files in AWS which are accessed every week and t he solution should use envelope encryption. This requi rement can be fulfilled by using an Amazon S3 configured with Server-Side Encryption with AWS KMS -Managed Keys (SSE-KMS). Hence, using Amazon S3 to store the data and configuring Server- Side Encryption with AWS KMS-Managed Keys (SSE-KMS) are the correct answers. Using Amazon S3 Glacier Deep Archive to store the d ata is incorrect. Although this provides the most cost-effective storage solution, it is not the appr opriate service to use if the files being stored ar e frequently accessed every week. Configuring Server-Side Encryption with Customer-Pr ovided Keys (SSE-C) and configuring Server- Side Encryption with Amazon S3-Managed Keys (SSE-S3 ) are both incorrect. Although you can configure automatic key rotation, these two do not provide you with an audit trail that shows when you r CMK was used and by whom, unlike Server-Side Encryp tion with AWS KMS-Managed Keys (SSE-KMS). References: https://docs.aws.amazon.com/AmazonS3/latest/dev/ser v-side-encryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Usi ngKMSEncryption.html https://docs.aws.amazon.com/kms/latest/developergui de/services-s3.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": Certkingdom A tech startup has recently received a Series A rou nd of funding to continue building their mobile for ex trading application. You are hired to set up their cloud architecture in AWS and to implement a highly available, fault tolerant system. For their databas e, they are using DynamoDB and for authentication, they have chosen to use Cognito. Since the mobile applic ation contains confidential financial transactions, there is a requirement to add a second authentication met hod that doesn't rely solely on user name and passw ord. How can you implement this in AWS?", "options": [ "A. Add a new IAM policy to a user pool in Cognito .", "B. Add multi-factor authentication (MFA) to a use r pool in Cognito to protect the identity of", "C. Develop a custom application that integrates w ith Cognito that implements a second layer of", "D. Integrate Cognito with Amazon SNS Mobile Push to allow additional authentication via" ], "correct": "B. Add multi-factor authentication (MFA) to a use r pool in Cognito to protect the identity of", "explanation": "Explanation You can add multi-factor authentication (MFA) to a user pool to protect the identity of your users. MF A adds a second authentication method that doesn't re ly solely on user name and password. You can choose to use SMS text messages, or time-based one-time (TOTP ) passwords as second factors in signing in your certkingdom. users. You can also use adaptive authentication wit h its risk-based model to predict when you might ne ed another authentication factor. 
It's part of the use r pool advanced security features, which also inclu de protections against compromised credentials.", "references": "https://docs.aws.amazon.com/cognito/latest/develope rguide/managing-security.html certkingdom." }, { "question": ": A company has an OLTP (Online Transactional Process ing) application that is hosted in an Amazon ECS cluster using the Fargate launch type. It has an Am azon RDS database that stores data of its productio n website. The Data Analytics team needs to run queri es against the database to track and audit all user transactions. These query operations against the pr oduction database must not impact application performance in any way. certkingdom. Which of the following is the MOST suitable and cos t-effective solution that you should implement? Certkingdom", "options": [ "A. Upgrade the instance type of the RDS database to a large instance.", "B. Set up a new Amazon Redshift database cluster. Migrate the product database into Redshift", "C. Set up a new Amazon RDS Read Replica of the pr oduction database. Direct the Data", "D. Set up a Multi-AZ deployments configuration of your production database in RDS. Direct" ], "correct": "C. Set up a new Amazon RDS Read Replica of the pr oduction database. Direct the Data", "explanation": "Explanation 100 of 130 certkingdom. Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Amazon RDS Read Replicas provide enhanced performan ce and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB ins tance for read-heavy database workloads. You can create one or more replicas of a given sour ce DB Instance and serve high-volume application re ad traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone D B instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, Oracle and PostgreSQL, as w ell as Amazon Aurora. certkingdom. certkingdom. You can reduce the load on your source DB instance by routing read queries from your applications to t he read replica. These replicas allow you to elastical ly scale out beyond the capacity constraints of a s ingle DB instance for read-heavy database workloads. Because read replicas can be promoted to master sta tus, they are useful as part of a sharding certkingdom. implementation. To shard your database, add a read replica and promote it to master status, then, from each Certkingdom of the resulting DB Instances, delete the data that belongs to the other shard. Hence, the correct answer is: Set up a new Amazon R DS Read Replica of the production database. Direct the Data Analytics team to query the product ion data from the replica. The option that says: Set up a new Amazon Redshift database cluster. Migrate the product database certkingdom. into Redshift and allow the Data Analytics team to fetch data from it is incorrect because Redshift is primarily used for OLAP (Online Analytical Processi ng) applications and not for OLTP. The option that says: Set up a Multi-AZ deployments configuration of your production database in RDS. Direct the Data Analytics team to query the pr oduction data from the standby instance is incorrect because you can't directly connect to the standby instance. This is only used in the event o f a database failover when your primary instance encoun tered an outage. certkingdom. 101 of 130 certkingdom. 
Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam The option that says: Upgrade the instance type of the RDS database to a large instance is incorrect because this entails a significant amount of cost. Moreover, the production database could still be af fected by the queries done by the Data Analytics team. A b etter solution for this scenario is to use a Read R eplica instead. References: https://aws.amazon.com/caching/database-caching/ https://aws.amazon.com/rds/details/read-replicas/ https://aws.amazon.com/elasticache/ certkingdom. Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/", "references": "" }, { "question": ": A company deployed an online enrollment system data base on a prestigious university, which is hosted i n certkingdom. RDS. The Solutions Architect is required to monitor the database metrics in Amazon CloudWatch to ensur e the availability of the enrollment system. What are the enhanced monitoring metrics that Amazo n CloudWatch gathers from Amazon RDS DB instances which provide more accurate information? (Select TWO.)", "options": [ "A. Database Connections", "B. CPU Utilization", "C. RDS child processes.", "D. Freeable Memory" ], "correct": "", "explanation": "Explanation certkingdom. Amazon RDS provides metrics in real time for the op erating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console, or consume the Enhanced Monitoring JSON output from CloudWatch Logs in a monitoring sy stem of your choice. CloudWatch gathers metrics about CPU utilization fr om the hypervisor for a DB instance, and Enhanced certkingdom. Monitoring gathers its metrics from an agent on the instance. As a result, you might find differences 102 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam between the measurements, because the hypervisor la yer performs a small amount of work. The difference s can be greater if your DB instances use smaller ins tance classes, because then there are likely more v irtual machines (VMs) that are managed by the hypervisor l ayer on a single physical instance. Enhanced Monitoring metrics are useful when you want to see how different processes or threads on a DB instance use the CPU. certkingdom. certkingdom. Certkingdom certkingdom. In RDS, the Enhanced Monitoring metrics shown in th e Process List view are organized as follows: RDS child processes Shows a summary of the RDS pro cesses that support the DB instance, for example aurora for Amazon Aurora DB clusters and mysqld for MySQL DB instances. Process threads appear nested beneath the parent process. Process threads show CPU utilization only as other metrics are the same for all threads for the process. The console displa ys a maximum of 100 processes and threads. The resu lts are a combination of the top CPU consuming and memo ry consuming processes and threads. If there are certkingdom. more than 50 processes and more than 50 threads, th e console displays the top 50 consumers in each category. This display helps you identify which pro cesses are having the greatest impact on performanc e. RDS processes Shows a summary of the resources use d by the RDS management agent, diagnostics monitoring processes, and other AWS processes that are required to support RDS DB instances. certkingdom. 
103 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam OS processes Shows a summary of the kernel and sys tem processes, which generally have minimal impact on performance. CPU Utilization, Database Connections, and Freeable Memory are incorrect because these are just the regular items provided by Amazon RDS Metrics in Clo udWatch. Remember that the scenario is asking for the Enhanced Monitoring metrics. References: https://docs.aws.amazon.com/AmazonCloudWatch/latest /monitoring/rds-metricscollected.html https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/USER_Monitoring.OS.html#USER_Monitori ng.OS.CloudWatchLogs certkingdom. Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/ Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/ certkingdom.", "references": "" }, { "question": ": An organization plans to use an AWS Direct Connect connection to establish a dedicated connection between its on-premises network and AWS. The organi zation needs to launch a fully managed solution tha t will automate and accelerate the replication of dat a to and from various AWS storage services. Certkingdom Which of the following solutions would you recommen d? certkingdom.", "options": [ "A. Use an AWS Storage Gateway file gateway to sto re and retrieve files directly using the SMB", "B. Use an AWS DataSync agent to rapidly move the data over the Internet.", "C. Use an AWS DataSync agent to rapidly move the data over a service endpoint.", "D. Use an AWS Storage Gateway tape gateway to sto re data on virtual tape cartridges and" ], "correct": "C. Use an AWS DataSync agent to rapidly move the data over a service endpoint.", "explanation": "Explanation certkingdom. 104 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam AWS DataSync allows you to copy large datasets with millions of files, without having to build custom solutions with open source tools or license and man age expensive commercial network acceleration software. You can use DataSync to migrate active da ta to AWS, transfer data to the cloud for analysis and processing, archive data to free up on-premises sto rage capacity, or replicate data to AWS for busines s continuity. AWS DataSync simplifies, automates, and accelerates copying large amounts of data to and from AWS storage services over the internet or AWS Direct Co nnect. DataSync can copy data between Network File System (NFS), Server Message Block (SMB) file serve rs, self-managed object storage, or AWS Snowcone, and Amazon Simple Storage Service (Amazon S3) bucke ts, Amazon EFS file systems, and Amazon FSx for Windows File Server file systems. certkingdom. certkingdom. You deploy an AWS DataSync agent to your on-premise s hypervisor or in Amazon EC2. To copy data to or from an on-premises file server, you download th e agent virtual machine image from the AWS Console Certkingdom and deploy to your on-premises VMware ESXi, Linux K ernel-based Virtual Machine (KVM), or Microsoft Hyper-V hypervisor. To copy data to or from an in-c loud file server, you create an Amazon EC2 instance certkingdom. using a DataSync agent AMI. In both cases the agent must be deployed so that it can access your file s erver using the NFS, SMB protocol, or the Amazon S3 API. To set up transfers between your AWS Snowcone device and AWS storage, use the DataSync agent AMI that comes pre-installed on your device. 
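To sketch how such a transfer could be wired up once the agent is activated, the example below defines an on-premises NFS source, an S3 destination, and a task that copies between them. The hostname, agent and role ARNs, and bucket name are all placeholder values.

import boto3

datasync = boto3.client("datasync")

# On-premises NFS share reached through the deployed DataSync agent.
source = datasync.create_location_nfs(
    ServerHostname="fileserver.corp.example.com",      # placeholder on-premises host
    Subdirectory="/exports/data",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc1234567890def"]},  # placeholder agent
)

# S3 destination, with a role that lets DataSync write to the bucket.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::replicated-datasets",     # placeholder bucket
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Access"},
)

task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="onprem-to-s3-replication",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])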
Since the scenario plans to use AWS Direct Connect for private connectivity between on-premises and AWS, you can use DataSync to automate and accelerate online data transfers to AWS storage services. The AWS DataSync agent will be deployed in your on-premises network to accelerate data transfer to AWS. To connect programmatically to an AWS service, you will need to use an AWS Direct Connect service endpoint. Hence, the correct answer is: Use an AWS DataSync agent to rapidly move the data over a service endpoint. The option that says: Use an AWS DataSync agent to rapidly move the data over the Internet is incorrect because the organization will be using an AWS Direct Connect connection for private connectivity. This means that the connection will not pass through the public Internet. The options that say: Use an AWS Storage Gateway tape gateway to store data on virtual tape cartridges and asynchronously copy your backups to AWS and Use an AWS Storage Gateway file gateway to store and retrieve files directly using the SMB file system protocol are both incorrect because, in the scenario, you need to accelerate the replication of data, not establish a hybrid cloud storage architecture. AWS Storage Gateway only supports a few AWS storage services as a target, based on the type of gateway that you launched. AWS DataSync is more suitable for automating and accelerating online data transfers to a variety of AWS storage services. References: https://aws.amazon.com/datasync/faqs/ https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html https://docs.aws.amazon.com/general/latest/gr/dc.html AWS DataSync Overview: https://youtube.com/watch?v=uQDVZfj_VEA Check out this AWS DataSync Cheat Sheet: https://tutorialsdojo.com/aws-datasync/", "references": "" }, { "question": ": A large electronics company is using Amazon Simple Storage Service to store important documents. For reporting purposes, they want to track and log every access request to their S3 buckets, including the requester, bucket name, request time, request action, referrer, turnaround time, and error code information. The solution should also provide more visibility into the object-level operations of the bucket. Which is the best solution among the following options that can satisfy the requirement?", "options": [ "A. Enable AWS CloudTrail to audit all Amazon S3 bucket access.", "B. Enable server access logging for all required Amazon S3 buckets.", "C. Enable the Requester Pays option to track access via AWS Billing.", "D. Enable Amazon S3 Event Notifications for PUT and POST." ], "correct": "B. Enable server access logging for all required Amazon S3 buckets.", "explanation": "Explanation Amazon S3 is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Amazon S3. CloudTrail captures a subset of API calls for Amazon S3 as events, including calls from the Amazon S3 console and code calls to the Amazon S3 APIs. AWS CloudTrail logs provide a record of those API actions, while Amazon S3 server access logs provide detailed records for the requests that are made to an S3 bucket. For this scenario, you can use CloudTrail and the Server Access Logging feature of Amazon S3.
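As a minimal sketch of what enabling server access logging looks like with boto3 (both bucket names here are placeholders, and the target bucket must grant the S3 logging service permission to write to it):

import boto3

s3 = boto3.client("s3")

# Deliver access logs for the source bucket into a separate logging bucket.
s3.put_bucket_logging(
    Bucket="important-documents-bucket",                # placeholder source bucket
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "important-documents-access-logs",  # placeholder log bucket
            "TargetPrefix": "logs/",
        }
    },
)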
However, it is mentioned in the scenario that they need deta iled information about every access request sent to the S3 certkingdom. bucket including the referrer and turn-around time information. These two records are not available in CloudTrail. Hence, the correct answer is: Enable server access logging for all required Amazon S3 buckets. 107 of 130 certkingdom. Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam The option that says: Enable AWS CloudTrail to audi t all Amazon S3 bucket access is incorrect because enabling AWS CloudTrail alone won't give de tailed logging information for object-level access. The option that says: Enabling the Requester Pays o ption to track access via AWS Billing is incorrect because this action refers to AWS billing and not f or logging. The option that says: Enabling Amazon S3 Event Noti fications for PUT and POST is incorrect because we are looking for a logging solution and not an ev ent notification. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/clo udtrail-logging.html#cloudtrail-logging-vs-server- logs https://docs.aws.amazon.com/AmazonS3/latest/dev/Log Format.html https://docs.aws.amazon.com/AmazonS3/latest/dev/Ser verLogs.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A data analytics company has been building its new generation big data and analytics platform on their AWS cloud infrastructure. They need a storage servi ce that provides the scale and performance that the ir big data applications require such as high throughp ut to compute nodes coupled with read-after-write consistency and low-latency file operations. In add ition, their data needs to be stored redundantly ac ross Certkingdom multiple AZs and allows concurrent connections from multiple EC2 instances hosted on multiple AZs. Which of the following AWS storage services will yo u use to meet this requirement?", "options": [ "A. Glacier", "B. S3", "C. EBS", "D. EFS" ], "correct": "D. EFS", "explanation": "Explanation 108 of 130 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam In this question, you should take note of the two k eywords/phrases: \"file operation\" and \"allows concu rrent connections from multiple EC2 instances\". There are various AWS storage options that you can choose bu t whenever these criteria show up, always consider us ing EFS instead of using EBS Volumes which is mainly used as a \"block\" storage and can only have one connection to one EC2 instance at a time. Amazo n EFS provides the scale and performance required for big data applications that require high throughput to compute nodes coupled with read-after-write consist ency and low-latency file operations. Amazon EFS is a fully-managed service that makes it easy to set up and scale file storage in the Amazo n Cloud. With a few clicks in the AWS Management Cons ole, you can create file systems that are accessibl e to Amazon EC2 instances via a file system interface (using standard operating system file I/O APIs) an d supports full file system access semantics (such as strong consistency and file locking). Amazon EFS file systems can automatically scale fro m gigabytes to petabytes of data without needing to provision storage. Tens, hundreds, or even thousand s of Amazon EC2 instances can access an Amazon EFS file system at the same time, and Amazon EFS provid es consistent performance to each Amazon EC2 instance. 
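A rough boto3 sketch of that setup is shown below: one file system, plus a mount target in each Availability Zone that hosts compute nodes, so instances in several AZs can mount it concurrently over NFS. The creation token, subnet IDs, and security group are placeholders.

import time
import boto3

efs = boto3.client("efs")

# Create the shared file system once.
fs = efs.create_file_system(
    CreationToken="bigdata-shared-fs",   # placeholder idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# Wait for the file system to become available before adding mount targets.
while efs.describe_file_systems(FileSystemId=fs["FileSystemId"])["FileSystems"][0]["LifeCycleState"] != "available":
    time.sleep(5)

# One mount target per AZ that hosts compute nodes.
for subnet_id in ["subnet-az1a", "subnet-az1b", "subnet-az1c"]:   # placeholder subnets
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],   # placeholder security group allowing NFS (TCP 2049)
    )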
Amazon EFS is designed to be highly durable and highly available. EBS is incorrect because it does not allow concurrent connections from multiple EC2 instances hosted on multiple AZs, and it does not store data redundantly across multiple AZs by default, unlike EFS. S3 is incorrect because although it can handle concurrent connections from multiple EC2 instances, it does not provide low-latency file operations, which is required in this scenario. Glacier is incorrect because this is an archiving storage solution and is not applicable in this scenario. References: https://docs.aws.amazon.com/efs/latest/ug/performance.html https://aws.amazon.com/efs/faq/ Check out this Amazon EFS Cheat Sheet: https://tutorialsdojo.com/amazon-efs/ Check out this Amazon S3 vs EBS vs EFS Cheat Sheet: https://tutorialsdojo.com/amazon-s3-vs-ebs-vs-efs/ Here's a short video tutorial on Amazon EFS: https://youtu.be/AvgAozsfCrY", "references": "" }, { "question": ": A company launched an EC2 instance in a newly created VPC. They noticed that the instance does not have an associated DNS hostname. Which of the following options could be a valid reason for this issue?", "options": [ "A. The newly created VPC has an invalid CIDR block.", "B. The DNS resolution and DNS hostname of the VPC configuration should be enabled.", "C. Amazon Route53 is not enabled.", "D. The security group of the EC2 instance needs to be modified." ], "correct": "B. The DNS resolution and DNS hostname of the VPC configuration should be enabled.", "explanation": "Explanation When you launch an EC2 instance into a default VPC, AWS provides it with public and private DNS hostnames that correspond to the public IPv4 and private IPv4 addresses of the instance. However, when you launch an instance into a non-default VPC, AWS provides the instance with a private DNS hostname only. New instances are only assigned a public DNS hostname if the two DNS attributes of the VPC (DNS resolution and DNS hostnames) are enabled and the instance has a public IPv4 address. In this case, the new EC2 instance does not automatically get a DNS hostname because the DNS resolution and DNS hostnames attributes are disabled in the newly created VPC. Hence, the correct answer is: The DNS resolution and DNS hostname of the VPC configuration should be enabled. The option that says: The newly created VPC has an invalid CIDR block is incorrect since it's very unlikely that a VPC has an invalid CIDR block because of AWS validation schemes. The option that says: Amazon Route 53 is not enabled is incorrect since Route 53 does not need to be enabled. Route 53 is the DNS service of AWS, but it is the VPC that enables the assignment of instance hostnames. The option that says: The security group of the EC2 instance needs to be modified is incorrect since security groups are just firewalls for your instances. They filter traffic based on a set of security group rules.
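For reference, a minimal boto3 sketch of enabling those two attributes on a VPC (the VPC ID is a placeholder); note that the API only accepts one attribute per call:

```python
import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"  # placeholder VPC ID

# Each DNS attribute must be enabled in its own ModifyVpcAttribute call.
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsHostnames={"Value": True})
```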
References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html https://aws.amazon.com/vpc/ Amazon VPC Overview: https://youtube.com/watch?v=oIDHKeNxvQQ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A company has a global news website hosted on a fleet of EC2 instances. Lately, the load on the website has increased, which has resulted in slower response times for the site visitors. This issue impacts the revenue of the company as some readers tend to leave the site if it does not load after 10 seconds. Which of the below services in AWS can be used to solve this problem? (Select TWO.)", "options": [ "A. Use Amazon ElastiCache for the website's in-memory data store or cache.", "B. Deploy the website to all regions in different VPCs for faster processing.", "C. Use Amazon CloudFront with website as the custom origin.", "D. For better read throughput, use AWS Storage Gateway to distribute the content across" ], "correct": "", "explanation": "Explanation The global news website has a problem with latency considering that there are a lot of readers of the site from all parts of the globe. In this scenario, you can use a content delivery network (CDN), which is a geographically distributed group of servers that work together to provide fast delivery of Internet content. And since this is a news website, most of its data is read-only, which can be cached to improve the read throughput and avoid repetitive requests to the server. In AWS, Amazon CloudFront is the global content delivery network (CDN) service that you can use, and for web caching, Amazon ElastiCache is the suitable service. Hence, the correct answers are: - Use Amazon CloudFront with website as the custom origin. - Use Amazon ElastiCache for the website's in-memory data store or cache. The option that says: For better read throughput, use AWS Storage Gateway to distribute the content across multiple regions is incorrect as AWS Storage Gateway is used for storage. Deploying the website to all regions in different VPCs for faster processing is incorrect as this would be costly and totally unnecessary considering that you can use Amazon CloudFront and ElastiCache to improve the performance of the website. References: https://aws.amazon.com/elasticache/ http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/", "references": "" }, { "question": ": A leading IT consulting company has an application that processes a large stream of financial data on an Amazon ECS cluster and then stores the result in a DynamoDB table. You have to design a solution to detect new entries in the DynamoDB table and then automatically trigger a Lambda function to run some tests to verify the processed data. What solution can be easily implemented to alert the Lambda function of new entries while requiring minimal configuration change to your architecture?", "options": [ "A. Enable DynamoDB Streams to capture table activity and automatically trigger the Lambda", "B. Use CloudWatch Alarms to trigger the Lambda function whenever a new entry is created in", "C. Use Systems Manager Automation to detect new entries in the DynamoDB table then", "D.
Invoke the Lambda functions using SNS each time that the ECS Cluster successfully" ], "correct": "A. Enable DynamoDB Streams to capture table activity and automatically trigger the Lambda", "explanation": "Explanation Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers -- pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build applications that react to data modifications in DynamoDB tables. If you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. You can create a Lambda function that performs a specific action that you specify, such as sending a notification or initiating a workflow. For instance, you can set up a Lambda function to simply copy each stream record to persistent storage, such as EFS or S3, to create a permanent audit trail of write activity in your table. Suppose you have a mobile gaming app that writes to a TutorialsDojoCourses table. Whenever the TopCourse attribute of the TutorialsDojoCourses table is updated, a corresponding stream record is written to the table's stream. This event could then trigger a Lambda function that posts a congratulatory message on a social media network. (The function would simply ignore any stream records that are not updates to TutorialsDojoCourses or that do not modify the TopCourse attribute.) Hence, enabling DynamoDB Streams to capture table activity and automatically trigger the Lambda function is the correct answer because the requirement can be met with minimal configuration change using DynamoDB Streams, which can automatically trigger Lambda functions whenever there is a new entry. Using CloudWatch Alarms to trigger the Lambda function whenever a new entry is created in the DynamoDB table is incorrect because CloudWatch Alarms only monitor service metrics, not changes in DynamoDB table data. Invoking the Lambda functions using SNS each time that the ECS Cluster successfully processed financial data is incorrect because you don't need to create an SNS topic just to invoke Lambda functions. You can enable DynamoDB Streams instead to meet the requirement with less configuration. Using Systems Manager Automation to detect new entries in the DynamoDB table then automatically invoking the Lambda function for processing is incorrect because the Systems Manager Automation service is primarily used to simplify common maintenance and deployment tasks of Amazon EC2 instances and other AWS resources. It does not have the capability to detect new entries in a DynamoDB table. References: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html Check out this Amazon DynamoDB cheat sheet: https://tutorialsdojo.com/amazon-dynamodb/", "references": "" }, { "question": ": A tech company currently has an on-premises infrastructure.
They are currently running low on storage and want to have the ability to extend their storage using the AWS cloud. Which AWS service can help them achieve this requirement?", "options": [ "A. Amazon Storage Gateway", "B. Amazon Elastic Block Storage", "C. Amazon SQS", "D. Amazon EC2" ], "correct": "A. Amazon Storage Gateway", "explanation": "Explanation AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure. You can use the service to store data in the AWS Cloud for scalable and cost-effective storage that helps maintain data security. Amazon EC2 is incorrect since this is a compute service, not a storage service. Amazon Elastic Block Storage is incorrect since EBS is primarily used as block storage for your EC2 instances. Amazon SQS is incorrect since this is a message queuing service and does not extend your on-premises storage capacity.", "references": "http://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html AWS Storage Gateway Overview: https://youtu.be/pNb7xOBJjHE Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate-saa-c02/" }, { "question": ": There are a few easily reproducible but confidential files that your client wants to store in AWS without worrying about storage capacity. For the first month, all of these files will be accessed frequently, but after that, they will rarely be accessed at all. The old files will only be accessed by developers, so there is no set retrieval time requirement. However, the files under a specific tdojo-finance prefix in the S3 bucket will be used for post-processing that requires millisecond retrieval time. Given these conditions, which of the following options would be the most cost-effective solution for your client's storage needs?", "options": [ "A. Store the files in S3 then after a month, change the storage class of the tdojo-finance prefix", "B. Store the files in S3 then after a month, change the storage class of the bucket to S3-IA using", "C. Store the files in S3 then after a month, change the storage class of the tdojo-finance prefix", "D. Store the files in S3 then after a month, change the storage class of the bucket to Intelligent-" ], "correct": "C. Store the files in S3 then after a month, change the storage class of the tdojo-finance prefix", "explanation": "Explanation Initially, the files will be accessed frequently, and S3 is a durable and highly available storage solution for that. After a month has passed, the files won't be accessed frequently anymore, so it is a good idea to use lifecycle policies to move them to a storage class with a lower storage cost. Since the files are easily reproducible and some of them need to be retrieved quickly based on a specific prefix filter (tdojo-finance), S3 One Zone-IA would be a good choice for storing them. The other files that do not have this prefix would then be moved to Glacier for low-cost archival. This setup would also be the most cost-effective for the client.
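To make the approach concrete, here is a minimal boto3 sketch of such a lifecycle configuration, assuming a placeholder bucket name and a hypothetical other-files/ prefix standing in for the non-finance data (the 30-day value reflects the minimum object age for the IA transitions discussed below):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="client-confidential-files",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                # Data under the tdojo-finance prefix stays quickly retrievable.
                "ID": "tdojo-finance-to-onezone-ia",
                "Filter": {"Prefix": "tdojo-finance/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
            },
            {
                # Hypothetical prefix for the remaining files, archived to Glacier.
                "ID": "other-files-to-glacier",
                "Filter": {"Prefix": "other-files/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            },
        ]
    },
)
```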
Hence, the correct answer is: Store the files in S3 then after a month, change the storage class of the tdojo-finance prefix to One Zone-IA while the remaining go to Glacier using lifecycle policy. The option that says: Storing the files in S3 then after a month, changing the storage class of the bucket to S3-IA using lifecycle policy is incorrect. Although it is valid to move the files to S3-IA, this solution still costs more compared with using a combination of S3 One Zone-IA and Glacier. The option that says: Storing the files in S3 then after a month, changing the storage class of the bucket to Intelligent-Tiering using lifecycle policy is incorrect. While S3 Intelligent-Tiering can automatically move data between two access tiers (frequent access and infrequent access) when access patterns change, it is more suitable for scenarios where you don't know the access patterns of your data. It may take some time for S3 Intelligent-Tiering to analyze the access patterns before it moves the data to a cheaper storage class like S3-IA, which means you may still end up paying more in the beginning. In addition, you already know the access patterns of the files, which means you can directly change the storage class immediately and save on cost right away. The option that says: Storing the files in S3 then after a month, changing the storage class of the tdojo-finance prefix to S3-IA while the remaining go to Glacier using lifecycle policy is incorrect. Even though S3-IA costs less than the S3 Standard storage class, it is still more expensive than S3 One Zone-IA. Remember that the files are easily reproducible, so you can safely move the data to S3 One Zone-IA and, in case there is an outage, simply generate the missing data again. References: https://aws.amazon.com/blogs/compute/amazon-s3-adds-prefix-and-suffix-filters-for-lambda-function-triggering https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-configuration-examples.html https://aws.amazon.com/s3/pricing Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": To save costs, your manager instructed you to analyze and review the setup of your AWS cloud infrastructure. You should also provide an estimate of how much your company will pay for all of the AWS resources that they are using. In this scenario, which of the following will incur costs? (Select TWO.)", "options": [ "A. A stopped On-Demand EC2 Instance", "B. Public Data Set", "C. EBS Volumes attached to stopped EC2 Instances", "D. A running EC2 Instance" ], "correct": "", "explanation": "Explanation Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance. Billing ends when the instance terminates, which could occur through a web services command, by running \"shutdown -h\", or through instance failure. When you stop an instance, AWS shuts it down but doesn't charge hourly usage for a stopped instance or data transfer fees. However, AWS does charge for the storage of any Amazon EBS volumes.
Hence, a running EC2 Instance and EBS Volumes attached to stopped EC2 Instances are the right answers and, conversely, a stopped On-Demand EC2 Instance is incorrect as there is no charge for a stopped EC2 instance that you have shut down. Using Amazon VPC is incorrect because there are no additional charges for creating and using the VPC itself. Usage charges for other Amazon Web Services, including Amazon EC2, still apply at published rates for those resources, including data transfer charges. Public Data Set is incorrect due to the fact that Amazon stores the data sets at no charge to the community and, as with all AWS services, you pay only for the compute and storage you use for your own applications. References: https://aws.amazon.com/cloudtrail/ https://aws.amazon.com/vpc/faqs https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-public-data-sets.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/", "references": "" }, { "question": ": An automotive company is working on an autonomous vehicle development and deployment project using AWS. The solution requires High Performance Computing (HPC) in order to collect, store, and manage massive amounts of data as well as to support deep learning frameworks. The Linux EC2 instances that will be used should have lower latency and higher throughput than the TCP transport traditionally used in cloud-based HPC systems. The solution should also enhance the performance of inter-instance communication and must include an OS-bypass functionality to allow the HPC to communicate directly with the network interface hardware to provide low-latency, reliable transport functionality. Which of the following is the MOST suitable solution that you should implement to achieve the above requirements?", "options": [ "A. Attach an Elastic Network Interface (ENI) on each Amazon EC2 instance to accelerate High", "B. Attach an Elastic Network Adapter (ENA) on each Amazon EC2 instance to accelerate High", "C. Attach a Private Virtual Interface (VIF) on each Amazon EC2 instance to accelerate High", "D. Attach an Elastic Fabric Adapter (EFA) on each Amazon EC2 instance to accelerate High" ], "correct": "D. Attach an Elastic Fabric Adapter (EFA) on each Amazon EC2 instance to accelerate High", "explanation": "Explanation An Elastic Fabric Adapter (EFA) is a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications. EFA enables you to achieve the application performance of an on-premises HPC cluster, with the scalability, flexibility, and elasticity provided by the AWS Cloud. EFA provides lower and more consistent latency and higher throughput than the TCP transport traditionally used in cloud-based HPC systems. It enhances the performance of inter-instance communication that is critical for scaling HPC and machine learning applications. It is optimized to work on the existing AWS network infrastructure, and it can scale depending on application requirements. EFA integrates with Libfabric 1.9.0 and supports Open MPI 4.0.2 and Intel MPI 2019 Update 6 for HPC applications, and the Nvidia Collective Communications Library (NCCL) for machine learning applications. The OS-bypass capabilities of EFAs are not supported on Windows instances.
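As a sketch of how an EFA can be requested at launch (all IDs below are placeholder assumptions; EFA is only available on supported instance types, typically inside a cluster placement group):

```python
import boto3

ec2 = boto3.client("ec2")

# Launch an HPC node with an EFA attached at instance launch.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder AMI
    InstanceType="c5n.18xlarge",            # an EFA-capable instance type
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "hpc-cluster-pg"},  # placeholder placement group
    NetworkInterfaces=[
        {
            "DeviceIndex": 0,
            "SubnetId": "subnet-0123456789abcdef0",
            "Groups": ["sg-0123456789abcdef0"],
            "InterfaceType": "efa",  # request an EFA instead of a standard ENA
        }
    ],
)
```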
If you attach an EFA to a Windows instance, the instance functions as an Elastic Network Adapter, without the added EFA capabilities. Elastic Network Adapters (ENAs) provide traditional IP networking features that are required to support VPC networking. EFAs provide all of the same traditional IP networking features as ENAs, and they also support OS-bypass capabilities. OS-bypass enables HPC and machine learning applications to bypass the operating system kernel and to communicate directly with the EFA device. Hence, the correct answer is to attach an Elastic Fabric Adapter (EFA) on each Amazon EC2 instance to accelerate High Performance Computing (HPC). Attaching an Elastic Network Adapter (ENA) on each Amazon EC2 instance to accelerate High Performance Computing (HPC) is incorrect because an Elastic Network Adapter (ENA) doesn't have OS-bypass capabilities, unlike EFA. Attaching an Elastic Network Interface (ENI) on each Amazon EC2 instance to accelerate High Performance Computing (HPC) is incorrect because an Elastic Network Interface (ENI) is simply a logical networking component in a VPC that represents a virtual network card. It doesn't have OS-bypass capabilities that allow the HPC to communicate directly with the network interface hardware to provide low-latency, reliable transport functionality. Attaching a Private Virtual Interface (VIF) on each Amazon EC2 instance to accelerate High Performance Computing (HPC) is incorrect because a Private Virtual Interface just allows you to connect to your VPC resources on your private IP address or endpoint. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena Check out this Elastic Fabric Adapter (EFA) Cheat Sheet: https://tutorialsdojo.com/elastic-fabric-adapter-efa/", "references": "" }, { "question": "A financial company instructed you to automate the recurring tasks in your department, such as patch management, infrastructure selection, and data synchronization, to improve their current processes. You need a service that can coordinate multiple AWS services into serverless workflows. Which of the following is the most cost-effective service to use in this scenario?", "options": [ "A. AWS Step Functions", "B. AWS Lambda", "C. SWF", "D. AWS Batch" ], "correct": "A. AWS Step Functions", "explanation": "Explanation AWS Step Functions provides serverless orchestration for modern applications. Orchestration centrally manages a workflow by breaking it into multiple steps, adding flow logic, and tracking the inputs and outputs between the steps. As your applications execute, Step Functions maintains the application state, tracking exactly which workflow step your application is in, and stores an event log of data that is passed between application components. That means that if networks fail or components hang, your application can pick up right where it left off. Application development is faster and more intuitive with Step Functions because you can define and manage the workflow of your application independently from its business logic. Making changes to one does not affect the other. You can easily update and modify workflows in one place, without having to struggle with managing, monitoring, and maintaining multiple point-to-point integrations.
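For illustration, a minimal sketch of a Step Functions state machine that chains two of the recurring tasks; the Lambda ARNs, IAM role, and names are assumptions, not values from the scenario:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# A two-step workflow in Amazon States Language: run a patching task,
# then a data-synchronization task.
definition = {
    "StartAt": "RunPatching",
    "States": {
        "RunPatching": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:run-patching",
            "Next": "SyncData",
        },
        "SyncData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:sync-data",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="recurring-ops-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsExecutionRole",
)
```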
Step Functions frees your functions and containers from excess code, so your applications are faster to write, more resilient, and easier to maintain. SWF is incorrect because this is a fully-managed state tracker and task coordinator service. It does not provide serverless orchestration of multiple AWS resources. AWS Lambda is incorrect because although Lambda is used for serverless computing, it does not provide a direct way to coordinate multiple AWS services into serverless workflows. AWS Batch is incorrect because this is primarily used to efficiently run hundreds of thousands of batch computing jobs in AWS.", "references": "https://aws.amazon.com/step-functions/features/ Check out this AWS Step Functions Cheat Sheet: https://tutorialsdojo.com/aws-step-functions/ Amazon Simple Workflow (SWF) vs AWS Step Functions vs Amazon SQS: https://tutorialsdojo.com/amazon-simple-workflow-swf-vs-aws-step-functions-vs-amazon-sqs/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/" }, { "question": ": A media company is using Amazon EC2, ELB, and S3 for its video-sharing portal for filmmakers. They are using a standard S3 storage class to store all high-quality videos that are frequently accessed only during the first three months of posting. As a Solutions Architect, what should you do if the company needs to automatically transfer or archive media data from an S3 bucket to Glacier?", "options": [ "A. Use Amazon SQS", "B. Use Amazon SWF", "C. Use a custom shell script that transfers data from the S3 bucket to Glacier", "D. Use Lifecycle Policies" ], "correct": "D. Use Lifecycle Policies", "explanation": "Explanation You can create a lifecycle policy in S3 to automatically transfer your data to Glacier. Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: Transition actions, in which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. Expiration actions, in which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.", "references": "https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/" }, { "question": ": A company recently migrated their applications to AWS. The Solutions Architect must ensure that the applications are highly available and safe from common web security vulnerabilities. Which is the most suitable AWS service to use to mitigate Distributed Denial of Service (DDoS) attacks from hitting your back-end EC2 instances?", "options": [ "A. AWS WAF", "B. AWS Firewall Manager", "C. AWS Shield", "D. Amazon GuardDuty" ], "correct": "C. AWS Shield", "explanation": "Explanation AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS.
AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield - Standard and Advanced. All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your website or applications. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks. AWS WAF is incorrect because this is a web application firewall service that helps protect your web apps from common exploits that could affect app availability, compromise security, or consume excessive resources. Although this can help you against DDoS attacks, AWS WAF alone is not enough to fully protect your VPC. You still need to use AWS Shield in this scenario. AWS Firewall Manager is incorrect because this just simplifies your AWS WAF administration and maintenance tasks across multiple accounts and resources. Amazon GuardDuty is incorrect because this is just an intelligent threat detection service to protect your AWS accounts and workloads. Using this alone will not fully protect your AWS resources against DDoS attacks. References: https://docs.aws.amazon.com/waf/latest/developerguide/waf-which-to-choose.html https://aws.amazon.com/answers/networking/aws-ddos-attack-mitigation/ Check out this AWS Shield Cheat Sheet: https://tutorialsdojo.com/aws-shield/ AWS Security Services Overview - WAF, Shield, CloudHSM, KMS: https://youtube.com/watch?v=-1S-RdeAmMo", "references": "" }, { "question": ": A customer is transitioning their ActiveMQ messaging broker service onto the AWS cloud, in which they require an alternative asynchronous service that supports the NMS and MQTT messaging protocols. The customer does not have the time and resources needed to recreate their messaging service in the cloud. The service has to be highly available and should require almost no management overhead. Which of the following is the most suitable service to use to meet the above requirement?", "options": [ "A. Amazon MQ", "B. Amazon SNS", "C. AWS Step Functions", "D. Amazon SWF", "A. Set up an Amazon SNS Topic to handle the messages.", "B. Set up a default Amazon SQS queue to handle the messages.", "C. Create an Amazon Kinesis Data Stream to collect the messages.", "D. Create a pipeline using AWS Data Pipeline to handle the messages." ], "correct": "C. Create an Amazon Kinesis Data Stream to collect the messages.", "explanation": "Explanation The chosen AWS service must ensure that data does not go missing, is durable, and is streamed in the sequence of arrival. Kinesis can do the job just fine because of its architecture. A Kinesis data stream is a set of shards, each holding a sequence of data records, and each data record has a sequence number that is assigned by Kinesis Data Streams. Kinesis can also easily handle the high volume of messages being sent to the service. Amazon Kinesis Data Streams enables real-time processing of streaming big data.
It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting, aggregation, and filtering). Setting up a default Amazon SQS queue to handle the messages is incorrect because although SQS is a valid messaging service, it is not suitable for scenarios where you need to process the data in the order it was received. Take note that a default queue in SQS is just a standard queue and not a FIFO (First-In-First-Out) queue. In addition, SQS does not guarantee that no duplicates will be sent. Setting up an Amazon SNS Topic to handle the messages is incorrect because SNS is a pub-sub messaging service in AWS. SNS might not be capable of handling such a large volume of messages being received and sent at a time. It also does not guarantee that the data will be transmitted in the same order it was received. Creating a pipeline using AWS Data Pipeline to handle the messages is incorrect because this is primarily used as a cloud-based data workflow service that helps you process and move data between different AWS services and on-premises data sources. It is not suitable for collecting data from distributed sources such as users, IoT devices, or clickstreams. References: https://docs.aws.amazon.com/streams/latest/dev/introduction.html For additional information, read the When should I use Amazon Kinesis Data Streams, and when should I use Amazon SQS? section of the Kinesis Data Streams FAQ: https://aws.amazon.com/kinesis/data-streams/faqs/ Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/", "references": "" }, { "question": ": A Solutions Architect is migrating several Windows-based applications to AWS that require scalable file system storage for high-performance computing (HPC). The storage service must have full support for the SMB protocol and Windows NTFS, Active Directory (AD) integration, and Distributed File System (DFS). Which of the following is the MOST suitable storage service that the Architect should use to fulfill this scenario?", "options": [ "A. Amazon S3 Glacier Deep Archive", "B. Amazon FSx for Windows File Server", "C. AWS DataSync", "D. Amazon FSx for Lustre" ], "correct": "B. Amazon FSx for Windows File Server", "explanation": "Explanation Amazon FSx provides fully managed third-party file systems. Amazon FSx provides you with the native compatibility of third-party file systems with feature sets for workloads such as Windows-based storage, high-performance computing (HPC), machine learning, and electronic design automation (EDA). You don't have to worry about managing file servers and storage, as Amazon FSx automates time-consuming administration tasks such as hardware provisioning, software configuration, patching, and backups. Amazon FSx integrates the file systems with cloud-native AWS services, making them even more useful for a broader set of workloads.
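A minimal boto3 sketch of provisioning an FSx for Windows File Server file system joined to a directory (all IDs and sizes below are placeholder assumptions):

```python
import boto3

fsx = boto3.client("fsx")

# Create a Windows file system that integrates with Active Directory and
# is accessible over SMB.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                      # GiB (placeholder size)
    StorageType="SSD",
    SubnetIds=["subnet-0123456789abcdef0"],   # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",  # placeholder AWS Managed Microsoft AD
        "ThroughputCapacity": 32,             # MB/s
        "DeploymentType": "SINGLE_AZ_2",
    },
)
```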
Amazon FSx provides you with two file systems to choose from: Amazon FSx for Windows File Server for Windows-based applications and Amazon FSx for Lustre for compute-intensive workloads. For Windows-based applications, Amazon FSx provides fully managed Windows file servers with features and performance optimized for \"lift-and-shift\" business-critical application workloads, including home directories (user shares), media workflows, and ERP applications. It is accessible from Windows and Linux instances via the SMB protocol. If you have Linux-based applications, Amazon EFS is a cloud-native, fully managed file system that provides simple, scalable, elastic file storage accessible from Linux instances via the NFS protocol. For compute-intensive and fast processing workloads, like high-performance computing (HPC), machine learning, EDA, and media processing, Amazon FSx for Lustre provides a file system that's optimized for performance, with input and output stored on Amazon S3. Hence, the correct answer is: Amazon FSx for Windows File Server. Amazon S3 Glacier Deep Archive is incorrect because this service is primarily used as secure, durable, and extremely low-cost cloud storage for data archiving and long-term backup. AWS DataSync is incorrect because this service simply provides a fast way to move large amounts of data online between on-premises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS). Amazon FSx for Lustre is incorrect because this service doesn't support Windows-based applications or Windows servers. References: https://aws.amazon.com/fsx/ https://aws.amazon.com/getting-started/use-cases/hpc/3/ Check out this Amazon FSx Cheat Sheet: https://tutorialsdojo.com/amazon-fsx/", "references": "" }, { "question": ": A Solutions Architect is setting up configuration management in an existing cloud architecture. The Architect needs to deploy and manage EC2 instances and other AWS resources using Chef and Puppet. Which of the following is the most suitable service to use in this scenario?", "options": [ "A. AWS OpsWorks", "B. AWS Elastic Beanstalk", "C. AWS CodeDeploy", "D. AWS CloudFormation" ], "correct": "A. AWS OpsWorks", "explanation": "Explanation AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. Reference: https://aws.amazon.com/opsworks/ Check out this AWS OpsWorks Cheat Sheet: https://tutorialsdojo.com/aws-opsworks/ Elastic Beanstalk vs CloudFormation vs OpsWorks vs CodeDeploy: https://tutorialsdojo.com/elastic-beanstalk-vs-cloudformation-vs-opsworks-vs-codedeploy/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/", "references": "" }, { "question": ": The start-up company that you are working for has a batch job application that is currently hosted on an EC2 instance. It is set to process messages from a queue created in SQS with default settings. You configured the application to process the messages once a week.
After 2 weeks, you noticed that not all messages are being processed by the application. What is the root cause of this issue?", "options": [ "A. The SQS queue is set to short-polling.", "B. Missing permissions in SQS.", "C. Amazon SQS has automatically deleted the messages that have been in a queue for more than the", "D. The batch job application is configured to long polling." ], "correct": "C. Amazon SQS has automatically deleted the messages that have been in a queue for more than the", "explanation": "Explanation Amazon SQS automatically deletes messages that have been in a queue for more than the maximum message retention period. The default message retention period is 4 days. Since the queue is configured with the default settings and the batch job application only processes the messages once a week, the messages that are in the queue for more than 4 days are deleted. This is the root cause of the issue. To fix this, you can increase the message retention period to a maximum of 14 days using the SetQueueAttributes action. References: https://aws.amazon.com/sqs/faqs/ https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-lifecycle.html Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/", "references": "" }, { "question": ": An organization plans to run an application on a dedicated physical server that doesn't use virtualization. The application data will be stored in a storage solution that uses the NFS protocol. To prevent data loss, you need to use a durable cloud storage service to store a copy of your data. Which of the following is the most suitable solution to meet the requirement?", "options": [ "A. Use an AWS Storage Gateway hardware appliance for your compute resources. Configure", "B. Use AWS Storage Gateway with a gateway VM appliance for your compute resources.", "D. Use an AWS Storage Gateway hardware appliance for your compute resources. Configure" ], "correct": "", "explanation": "Explanation AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage by linking it to S3. Storage Gateway provides three types of storage solutions for your on-premises applications: file, volume, and tape gateways. The AWS Storage Gateway Hardware Appliance is a physical, standalone, validated server configuration for on-premises deployments, with the Storage Gateway software preinstalled. The hardware appliance is a high-performance 1U server that you can deploy in your data center, or on-premises inside your corporate firewall. When you buy and activate your hardware appliance, the activation process associates it with your AWS account. After activation, your hardware appliance appears in the console as a gateway on the Hardware page. You can configure your hardware appliance as a file gateway, tape gateway, or volume gateway type. The procedure that you use to deploy and activate these gateway types on a hardware appliance is the same as on a virtual platform. Since the company needs to run a dedicated physical appliance, you can use an AWS Storage Gateway Hardware Appliance.
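As a rough illustration of the file gateway configuration described here, the sketch below creates an NFS file share backed by an S3 bucket once a gateway has been activated; every ARN and the client CIDR are placeholder assumptions:

```python
import uuid
import boto3

sgw = boto3.client("storagegateway")

# After the appliance is activated as a file gateway, create an NFS file
# share that stores and retrieves objects in an S3 bucket.
sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),  # idempotency token
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-ABCD1234",
    Role="arn:aws:iam::111122223333:role/StorageGatewayS3AccessRole",
    LocationARN="arn:aws:s3:::on-prem-backup-bucket",
    ClientList=["10.0.0.0/16"],  # on-premises clients allowed to mount the share
)
```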
The appliance comes pre-loaded with Storage Gateway software and provides all the required resources to create a file gateway. A file gateway can be configured to store and retrieve objects in Amazon S3 using the NFS and SMB protocols. Hence, the correct answer in this scenario is: Use an AWS Storage Gateway hardware appliance for your compute resources. Configure File Gateway to store the application data and create an Amazon S3 bucket to store a backup of your data. The option that says: Use AWS Storage Gateway with a gateway VM appliance for your compute resources. Configure File Gateway to store the application data and backup data is incorrect because, as per the scenario, the company needs to use an on-premises hardware appliance and not just a virtual machine (VM). The options that say: Use an AWS Storage Gateway hardware appliance for your compute resources. Configure Volume Gateway to store the application data and backup data and Use an AWS Storage Gateway hardware appliance for your compute resources. Configure Volume Gateway to store the application data and create an Amazon S3 bucket to store a backup of your data are both incorrect. As per the scenario, the requirement is a file system that uses the NFS protocol and not iSCSI devices. Among the AWS Storage Gateway storage solutions, only the file gateway can store and retrieve objects in Amazon S3 using the NFS and SMB protocols. References: https://docs.aws.amazon.com/storagegateway/latest/userguide/hardware-appliance.html https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html AWS Storage Gateway Overview: https://www.youtube.com/watch?v=pNb7xOBJjHE Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/", "references": "" }, { "question": ": A leading media company has recently adopted a hybrid cloud architecture which requires them to migrate their application servers and databases to AWS. One of their applications requires a heterogeneous database migration in which you need to transform your on-premises Oracle database to PostgreSQL in AWS. This entails a schema and code transformation before the proper data migration starts. Which of the following options is the most suitable approach to migrate the database to AWS?", "options": [ "A. Use Amazon Neptune to convert the source schema and code to match that of the target", "B. First, use the AWS Schema Conversion Tool to convert the source schema and application", "C. Heterogeneous database migration is not supported in AWS. You have to transform your", "D. Configure a Launch Template that automatically converts the source schema and code to" ], "correct": "", "explanation": "Explanation AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases. It supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora.
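To sketch how the data-movement step looks in practice (the schema conversion itself is done beforehand with the AWS Schema Conversion Tool), here is a hedged boto3 example of creating a DMS replication task; all ARNs and identifiers are placeholder assumptions:

```python
import json
import boto3

dms = boto3.client("dms")

# Move the data once the converted schema is in place on the target database.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-postgresql-migration",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCEORACLE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGETPG",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:MIGRATIONINSTANCE",
    MigrationType="full-load-and-cdc",  # initial load plus ongoing replication
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```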
Migrations can be from on-premises databases to Amazon RDS or Amazon EC2, from databases running on EC2 to RDS, or vice versa, as well as from one RDS database to another RDS database. It can also move data between SQL, NoSQL, and text-based targets. In heterogeneous database migrations, the source and target database engines are different, as in the case of Oracle to Amazon Aurora, Oracle to PostgreSQL, or Microsoft SQL Server to MySQL migrations. In this case, the schema structure, data types, and database code of the source and target databases can be quite different, requiring a schema and code transformation before the data migration starts. That makes heterogeneous migrations a two-step process. First, use the AWS Schema Conversion Tool to convert the source schema and code to match that of the target database, and then use the AWS Database Migration Service to migrate the data from the source database to the target database. All the required data type conversions will automatically be done by the AWS Database Migration Service during the migration. The source database can be located on your own premises outside of AWS, running on an Amazon EC2 instance, or it can be an Amazon RDS database. The target can be a database in Amazon EC2 or Amazon RDS. The option that says: Configure a Launch Template that automatically converts the source schema and code to match that of the target database. Then, use the AWS Database Migration Service to migrate data from the source database to the target database is incorrect because Launch Templates are primarily used in EC2 to enable you to store launch parameters so that you do not have to specify them every time you launch an instance. The option that says: Use Amazon Neptune to convert the source schema and code to match that of the target database in RDS. Use AWS Batch to effectively migrate the data from the source database to the target database in a batch process is incorrect because Amazon Neptune is a fully-managed graph database service and not a suitable service to use to convert the source schema. AWS Batch is not a database migration service and hence, it is not suitable to be used in this scenario. You should use the AWS Schema Conversion Tool and AWS Database Migration Service instead. The option that says: Heterogeneous database migration is not supported in AWS. You have to transform your database first to PostgreSQL and then migrate it to RDS is incorrect because heterogeneous database migration is supported in AWS using the Database Migration Service. References: https://aws.amazon.com/dms/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html https://aws.amazon.com/batch/ Check out this AWS Database Migration Service Cheat Sheet: https://tutorialsdojo.com/aws-database-migration-service/ AWS Migration Services Overview: https://www.youtube.com/watch?v=yqNBkFMnsL8", "references": "" }, { "question": ": A company has both an on-premises data center and AWS cloud infrastructure. They store their graphics, audio, video, and other multimedia assets primarily in their on-premises storage server and use an S3 Standard storage class bucket as a backup. Their data is heavily used for only a week (7 days), but after that period, it will only be infrequently used by their customers.
The Solutions Architect is instructed to save storage costs in AWS yet maintain the ability to fetch a subset of their media assets in a matter of minutes for a surprise annual data audit, which will be conducted on their cloud storage. Which of the following are valid options that the Solutions Architect can implement to meet the above requirement? (Select TWO.)", "options": [ "A. Set a lifecycle policy in the bucket to transition the data to S3 Glacier Deep Archive storage", "B. Set a lifecycle policy in the bucket to transition the data to S3 - Standard IA storage class", "C. Set a lifecycle policy in the bucket to transition the data to Glacier after one week (7 days).", "D. Set a lifecycle policy in the bucket to transition to S3 - Standard IA after 30 days" ], "correct": "", "explanation": "Explanation You can add rules in a lifecycle configuration to tell Amazon S3 to transition objects to another Amazon S3 storage class. For example, when you know that objects are infrequently accessed, you might transition them to the STANDARD_IA storage class, or transition your data to the GLACIER storage class in case you want to archive objects that you don't need to access in real time. In a lifecycle configuration, you can define rules to transition objects from one storage class to another to save on storage costs. When you don't know the access patterns of your objects, or your access patterns are changing over time, you can transition the objects to the INTELLIGENT_TIERING storage class for automatic cost savings. The lifecycle storage class transitions have a constraint when you want to transition from the STANDARD storage classes to either STANDARD_IA or ONEZONE_IA. The following constraints apply: - For larger objects, there is a cost benefit for transitioning to STANDARD_IA or ONEZONE_IA. Amazon S3 does not transition objects that are smaller than 128 KB to the STANDARD_IA or ONEZONE_IA storage classes because it's not cost effective. - Objects must be stored at least 30 days in the current storage class before you can transition them to STANDARD_IA or ONEZONE_IA. For example, you cannot create a lifecycle rule to transition objects to the STANDARD_IA storage class one day after you create them. Amazon S3 doesn't transition objects within the first 30 days because newer objects are often accessed more frequently or deleted sooner than is suitable for STANDARD_IA or ONEZONE_IA storage. - If you are transitioning noncurrent objects (in versioned buckets), you can transition only objects that are at least 30 days noncurrent to STANDARD_IA or ONEZONE_IA storage. Since there is a time constraint in transitioning objects in S3, you can only change the storage class of your objects from the S3 Standard storage class to STANDARD_IA or ONEZONE_IA storage after 30 days. This limitation does not apply to the INTELLIGENT_TIERING, GLACIER, and DEEP_ARCHIVE storage classes. In addition, the requirement says that the media assets should be fetched in a matter of minutes for a surprise annual data audit. This means that the retrieval will only happen once a year. You can use expedited retrievals in Glacier, which allow you to quickly access your data (typically within 1-5 minutes) when occasional urgent requests for a subset of archives are required.
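For illustration, a minimal boto3 sketch of requesting an expedited retrieval for one archived object during the audit; the bucket and key names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Restore an archived (Glacier-class) object with an expedited retrieval.
s3.restore_object(
    Bucket="media-assets-backup",
    Key="videos/clip-0001.mp4",
    RestoreRequest={
        "Days": 2,  # how long the restored copy stays available
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```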
In this scenario, you can set a lifecycle policy in the bucket to transition to S3 - Standard IA after 30 days or, alternatively, you can directly transition your data to Glacier after one week (7 days). Hence, the following are the correct answers: - Set a lifecycle policy in the bucket to transition the data from the Standard storage class to Glacier after one week (7 days). - Set a lifecycle policy in the bucket to transition to S3 - Standard IA after 30 days. Setting a lifecycle policy in the bucket to transition the data to S3 - Standard IA storage class after one week (7 days) and setting a lifecycle policy in the bucket to transition the data to S3 - One Zone-Infrequent Access storage class after one week (7 days) are both incorrect because there is a constraint in S3 that objects must be stored at least 30 days in the current storage class before you can transition them to STANDARD_IA or ONEZONE_IA. You cannot create a lifecycle rule to transition objects to either the STANDARD_IA or ONEZONE_IA storage class 7 days after you create them because you can only do this after the 30-day period has elapsed. Hence, these options are incorrect. Setting a lifecycle policy in the bucket to transition the data to the S3 Glacier Deep Archive storage class after one week (7 days) is incorrect because although the DEEP_ARCHIVE storage class provides the most cost-effective storage option, it does not have the ability to do expedited retrievals, unlike Glacier. In the event that the surprise annual data audit happens, it may take several hours before you can retrieve your data. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-general-considerations.html https://docs.aws.amazon.com/AmazonS3/latest/dev/restoring-objects.html https://aws.amazon.com/s3/storage-classes/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A Solutions Architect is working for a fast-growing startup that just started operations during the past 3 months. They currently have an on-premises Active Directory and 10 computers. To save costs in procuring physical workstations, they decided to deploy virtual desktops for their new employees in a virtual private cloud in AWS. The new cloud infrastructure should leverage the existing security controls in AWS but still be able to communicate with their on-premises network. Which set of AWS services will the Architect use to meet these requirements?", "options": [ "A. AWS Directory Services, VPN connection, and Amazon S3", "B. AWS Directory Services, VPN connection, and AWS Identity and Access Management", "C. AWS Directory Services, VPN connection, and ClassicLink", "D. AWS Directory Services, VPN connection, and Amazon Workspaces" ], "correct": "D. AWS Directory Services, VPN connection, and Amazon Workspaces", "explanation": "Explanation For this scenario, the best answer is: AWS Directory Services, VPN connection, and Amazon Workspaces. First, you need a VPN connection to connect the VPC and your on-premises network. Second, you need AWS Directory Services to integrate with your on-premises Active Directory, and lastly, you need to use Amazon WorkSpaces to create the needed virtual desktops in your VPC.
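As a rough sketch of the WorkSpaces piece (the directory, bundle, and user values are placeholder assumptions; the directory is assumed to already be registered through AWS Directory Services over the VPN):

```python
import boto3

workspaces = boto3.client("workspaces")

# Provision a virtual desktop for a new employee in the registered directory.
workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-1234567890",       # placeholder directory ID
            "UserName": "new.employee",           # existing Active Directory user
            "BundleId": "wsb-0123456789abcdef",   # placeholder WorkSpaces bundle
        }
    ]
)
```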
References: https://aws.amazon.com/directoryservice/ https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpn-connections.html https://aws.amazon.com/workspaces/ AWS Identity Services Overview: https://www.youtube.com/watch?v=AIdUw0i8rr0 Check out these cheat sheets on AWS Directory Service, Amazon VPC, and Amazon WorkSpaces: https://tutorialsdojo.com/aws-directory-service/ https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A health organization is using a large Dedicated EC2 instance with multiple EBS volumes to host its health records web application. The EBS volumes must be encrypted due to the confidentiality of the data that they are handling and also to comply with the HIPAA (Health Insurance Portability and Accountability Act) standard. In EBS encryption, what service does AWS use to secure the volume's data at rest? (Select TWO.)", "options": [ "A. By using your own keys in AWS Key Management Service (KMS).", "B. By using S3 Server-Side Encryption.", "C. By using the SSL certificates provided by the AWS Certificate Manager (ACM).", "D. By using a password stored in CloudHSM." ], "correct": "", "explanation": "Explanation Amazon EBS encryption offers seamless encryption of EBS data volumes, boot volumes, and snapshots, eliminating the need to build and maintain a secure key management infrastructure. EBS encryption enables data-at-rest security by encrypting your data using Amazon-managed keys, or keys you create and manage using the AWS Key Management Service (KMS). The encryption occurs on the servers that host EC2 instances, providing encryption of data as it moves between EC2 instances and EBS storage. Hence, the correct answers are: using your own keys in AWS Key Management Service (KMS) and using Amazon-managed keys in AWS Key Management Service (KMS). Using S3 Server-Side Encryption and using S3 Client-Side Encryption are both incorrect as these relate only to S3. Using a password stored in CloudHSM is incorrect as you only store keys in CloudHSM and not passwords. Using the SSL certificates provided by the AWS Certificate Manager (ACM) is incorrect as ACM only provides SSL certificates and not data encryption of EBS volumes.", "references": "https://aws.amazon.com/ebs/faqs/ Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/" }, { "question": ": A multimedia company needs to deploy web services to an AWS region that they have never used before. The company currently has an IAM role for its Amazon EC2 instance that permits the instance to access Amazon DynamoDB. They want their EC2 instances in the new region to have the exact same privileges. What should be done to accomplish this?", "options": [ "A. Assign the existing IAM role to instances in the new region.", "B. Duplicate the IAM role and associated policies to the new region and attach it to the", "C. In the new Region, create a new IAM role and associated policies then assign it to the new", "D. Create an Amazon Machine Image (AMI) of the instance and copy it to the new region." ], "correct": "A.
Assign the existing IAM role to instances in t he new region.", "explanation": "Explanation In this scenario, the company has an existing IAM r ole hence you don't need to create a new one. IAM roles are global services that are available to all regions hence, all you have to do is assign the ex isting IAM role to the instance in the new region. certkingdom. . 15 of 128 certkingdom. Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam certkingdom. The option that says: In the new Region, create a n ew IAM role and associated policies then assign it to the new instance is incorrect because you don't need to create another IAM role - there is already an existing one. certkingdom. Duplicating the IAM role and associated policies to the new region and attaching it to the instances i s incorrect as you don't need duplicate IAM roles for each region. One IAM role suffices for the instanc es on two regions. Creating an Amazon Machine Image (AMI) of the insta nce and copying it to the new region is incorrect because creating an AMI image does not af fect the IAM role of the instance. certkingdom.", "references": "Certkingdom https://docs.aws.amazon.com/AmazonS3/latest/dev/Not ificationHowTo.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ certkingdom." }, { "question": ": An On-Demand EC2 instance is launched into a VPC su bnet with the Network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance's security group has an inbound rule to al low SSH from any IP address and does not have any outbo und rules. In this scenario, what are the changes needed to al low SSH connection to the instance? certkingdom.", "options": [ "A. Both the outbound security group and outbound network ACL need to be modified to allow", "B. No action needed. It can already be accessed f rom any IP address using SSH.", "C. The network ACL needs to be modified to allow outbound traffic.", "D. The outbound security group needs to be modifi ed to allow outbound traffic." ], "correct": "C. The network ACL needs to be modified to allow outbound traffic.", "explanation": "Explanation In order for you to establish an SSH connection fro m your home computer to your EC2 instance, you need to do the following: certkingdom. - On the Security Group, add an Inbound Rule to all ow SSH traffic to your EC2 instance. - On the NACL, add both an Inbound and Outbound Rul e to allow SSH traffic to your EC2 instance. The reason why you have to add both Inbound and Out bound SSH rule is due to the fact that Network ACLs are stateless which means that responses to al low inbound traffic are subject to the rules for outbound traffic (and vice versa). In other words, if you only enabled an Inbound rule in NACL, the tr affic certkingdom. can only go in but the SSH response will not go out since there is no Outbound rule. Security groups are stateful which means that if an incoming request is granted, then the outgoing tra ffic will be automatically granted as well, regardless o f the outbound rules. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/VPC_ACLs.html certkingdom. Certkingdom https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /authorizing-access-to-an-instance.html", "references": "" }, { "question": ": A company has a web-based ticketing service that ut ilizes Amazon SQS and a fleet of EC2 instances. The EC2 instances that consume messages from the SQS qu eue are configured to poll the queue as often as certkingdom. 
possible to keep end-to-end throughput as high as p ossible. The Solutions Architect noticed that polli ng the queue in tight loops is using unnecessary CPU cycle s, resulting in increased operational costs due to empty responses. In this scenario, what should the Solutions Archite ct do to make the system more cost-effective?", "options": [ "A. Configure Amazon SQS to use long polling by se tting the ReceiveMessageWaitTimeSeconds to", "B. Configure Amazon SQS to use short polling by s etting the", "D. Configure Amazon SQS to use long polling by se tting the ReceiveMessageWaitTimeSeconds" ], "correct": "D. Configure Amazon SQS to use long polling by se tting the ReceiveMessageWaitTimeSeconds", "explanation": "Explanation In this scenario, the application is deployed in a fleet of EC2 instances that are polling messages fr om a certkingdom. single SQS queue. Amazon SQS uses short polling by default, querying only a subset of the servers (bas ed on a weighted random distribution) to determine whe ther any messages are available for inclusion in th e response. Short polling works for scenarios that re quire higher throughput. However, you can also configure the queue to use Long polling instead, to reduce cost. The ReceiveMessageWaitTimeSeconds is the queue attr ibute that determines whether you are using Short or Long polling. By default, its value is zero whic h means it is using Short polling. If it is set to a value certkingdom. greater than zero, then it is Long polling. Hence, configuring Amazon SQS to use long polling b y setting the ReceiveMessageWaitTimeSeconds to a number greater than zero is the correct answer . Quick facts about SQS Long Polling: certkingdom. - Long polling helps reduce your cost of using Amaz on SQS by reducing the number of empty responses Certkingdom when there are no messages available to return in r eply to a ReceiveMessage request sent to an Amazon SQS queue and eliminating false empty responses whe n messages are available in the queue but aren't included in the response. - Long polling reduces the number of empty response s by allowing Amazon SQS to wait until a message is available in the queue before sending a response. U nless the connection times out, the response to the ReceiveMessage request contains at least one of the available messages, up to the maximum number of certkingdom. messages specified in the ReceiveMessage action. - Long polling eliminates false empty responses by querying all (rather than a limited number) of the servers. Long polling returns messages as soon any message becomes available.", "references": "certkingdom. https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/sqs-long-polling.html . 18 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/" }, { "question": ": A data analytics company keeps a massive volume of data that they store in their on-premises data cent er. To scale their storage systems, they are looking fo r cloud-backed storage volumes that they can mount using Internet Small Computer System Interface (iSC SI) devices from their on-premises application serv ers. They have an on-site data analytics application tha t frequently accesses the latest data subsets local ly while the older data are rarely accessed. You are require d to minimize the need to scale the on-premises sto rage infrastructure while still providing their web appl ication with low-latency access to the data. certkingdom. 
Which type of AWS Storage Gateway service will you use to meet the above requirements?", "options": [ "A. Volume Gateway in cached mode", "B. Volume Gateway in stored mode", "C. Tape Gateway", "D. File Gateway" ], "correct": "A. Volume Gateway in cached mode", "explanation": "Explanation In this scenario, the technology company is looking for a storage service that will enable their analy tics application to frequently access the latest data su bsets and not the entire data set (as it was mentio ned that the old data are rarely being used). This requireme nt can be fulfilled by setting up a Cached Volume Certkingdom certkingdom. Gateway in AWS Storage Gateway. By using cached volumes, you can use Amazon S3 as y our primary data storage, while retaining frequentl y accessed data locally in your storage gateway. Cach ed volumes minimize the need to scale your on- premises storage infrastructure, while still provid ing your applications with low-latency access to frequently accessed data. You can create storage vo lumes up to 32 TiB in size and afterward, attach th ese volumes as iSCSI devices to your on-premises applic ation servers. When you write to these volumes, you r certkingdom. gateway stores the data in Amazon S3. It retains th e recently read data in your on-premises storage gateway's cache and uploads buffer storage. Cached volumes can range from 1 GiB to 32 TiB in si ze and must be rounded to the nearest GiB. Each gateway configured for cached volumes can support u p to 32 volumes for a total maximum storage volume of 1,024 TiB (1 PiB). certkingdom. . 19 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam In the cached volumes solution, AWS Storage Gateway stores all your on-premises application data in a storage volume in Amazon S3. Hence, the correct ans wer is: Volume Gateway in cached mode. Volume Gateway in stored mode is incorrect because the requirement is to provide low latency access to the frequently accessed data subsets locally. Store d Volumes are used if you need low-latency access t o your entire dataset. Tape Gateway is incorrect because this is just a co st-effective, durable, long-term offsite alternativ e for data archiving, which is not needed in this scenari o. File Gateway is incorrect because the scenario requ ires you to mount volumes as iSCSI devices. File Gateway is used to store and retrieve Amazon S3 obj ects through NFS and SMB protocols. References: certkingdom. https://docs.aws.amazon.com/storagegateway/latest/u serguide/StorageGatewayConcepts.html#volume- gateway-concepts https://docs.aws.amazon.com/storagegateway/latest/u serguide/WhatIsStorageGateway.html AWS Storage Gateway Overview: certkingdom. https://www.youtube.com/watch?v=pNb7xOBJjHE Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/ Certkingdom Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: certkingdom. https://tutorialsdojo.com/aws-certified-solutions-a rchitect-associate-saa-c02/", "references": "" }, { "question": ": An application is hosted in an Auto Scaling group o f EC2 instances. To improve the monitoring process, you have to configure the current capacity to incre ase or decrease based on a set of scaling adjustmen ts. certkingdom. This should be done by specifying the scaling metri cs and threshold values for the CloudWatch alarms t hat trigger the scaling process. Which of the following is the most suitable type of scaling policy that you should use?", "options": [ "A. 
Step scaling", "B. Simple scaling", "C. Target tracking scaling", "D. Scheduled Scaling" ], "correct": "A. Step scaling", "explanation": "Explanation With step scaling, you choose scaling metrics and t hreshold values for the CloudWatch alarms that trig ger the scaling process as well as define how your scal able target should be scaled when a threshold is in breach for a specified number of evaluation periods . Step scaling policies increase or decrease the cu rrent capacity of a scalable target based on a set of sca ling adjustments, known as step adjustments. The adjustments vary based on the size of the alarm bre ach. After a scaling activity is started, the polic y continues to respond to additional alarms, even whi le a scaling activity is in progress. Therefore, al l alarms certkingdom. that are breached are evaluated by Application Auto Scaling as it receives the alarm messages. certkingdom. When you configure dynamic scaling, you must define how to scale in response to changing demand. For example, you have a web application that currently runs on two instances and you want the CPU utilizat ion of the Auto Scaling group to stay at around 50 perc ent when the load on the application changes. This gives you extra capacity to handle traffic spikes without maintaining an excessive amount of idle resources. You can configure your Auto Scaling group to scale auto matically to meet this need. The policy type determ ines certkingdom. how the scaling action is performed. certkingdom. Certkingdom certkingdom. certkingdom. certkingdom. certkingdom. Amazon EC2 Auto Scaling supports the following type s of scaling policies: . 21 of 128 certkingdom. certkingdom. Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Target tracking scaling - Increase or decrease the current capacity of the group based on a target val ue for a specific metric. This is similar to the way that your thermostat maintains the temperature of your h ome you select a temperature and the thermostat does th e rest. Step scaling - Increase or decrease the current cap acity of the group based on a set of scaling adjust ments, known as step adjustments, that vary based on the s ize of the alarm breach. Simple scaling - Increase or decrease the current c apacity of the group based on a single scaling adju stment. If you are scaling based on a utilization metric th at increases or decreases proportionally to the num ber of instances in an Auto Scaling group, then it is reco mmended that you use target tracking scaling polici es. Otherwise, it is better to use step scaling policie s instead. Hence, the correct answer in this scenario is Step Scaling. Target tracking scaling is incorrect because the ta rget tracking scaling policy increases or decreases the current capacity of the group based on a target val ue for a specific metric, instead of a set of scali ng certkingdom. adjustments. Simple scaling is incorrect because the simple scal ing policy increases or decreases the current capac ity of the group based on a single scaling adjustment, ins tead of a set of scaling adjustments. Scheduled Scaling is incorrect because the schedule d scaling policy is based on a schedule that allows you to set your own scaling schedule for predictable lo ad changes. This is not considered as one of the ty pes of certkingdom. dynamic scaling. 
References: Certkingdom https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-scale-based-on-demand.html https://docs.aws.amazon.com/autoscaling/application /userguide/application-auto-scaling-step-scaling- certkingdom. policies.html", "references": "" }, { "question": ": A company troubleshoots the operational issues of t heir cloud architecture by logging the AWS API call history of all AWS resources. The Solutions Archite ct must implement a solution to quickly identify th e most recent changes made to resources in their envi ronment, including creation, modification, and dele tion of AWS resources. One of the requirements is that t he generated log files should be encrypted to avoid any certkingdom. security issues. Which of the following is the most suitable approac h to implement the encryption? . 22 of 128 certkingdom. Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam", "options": [ "A. Use CloudTrail and configure the destination S 3 bucket to use Server Side Encryption (SSE)", "B. Use CloudTrail with its default settings", "C. Use CloudTrail and configure the destination A mazon Glacier archive to use Server-Side", "D. Use CloudTrail and configure the destination S 3 bucket to use Server-Side Encryption" ], "correct": "B. Use CloudTrail with its default settings", "explanation": "Explanation By default, CloudTrail event log files are encrypte d using Amazon S3 server-side encryption (SSE). You can also choose to encrypt your log files with an A WS Key Management Service (AWS KMS) key. You can store your log files in your bucket for as long as you want. You can also define Amazon S3 lifecyc le rules to archive or delete log files automatically. If you want notifications about log file delivery and certkingdom. validation, you can set up Amazon SNS notifications . certkingdom. Certkingdom certkingdom. certkingdom. . 23 of 128 certkingdom. Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Certkingdom Using CloudTrail and configuring the destination Am azon Glacier archive to use Server-Side Encryption (SSE) is incorrect because CloudTrail st ores the log files to S3 and not in Glacier. Take n ote that by default, CloudTrail event log files are alr eady encrypted using Amazon S3 server-side encrypti on (SSE). Using CloudTrail and configuring the destination S3 bucket to use Server-Side Encryption (SSE) is incorrect because CloudTrail event log files are al ready encrypted using the Amazon S3 server-side encryption (SSE) which is why you do not have to do this anymore. . 24 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Use CloudTrail and configure the destination S3 buc ket to use Server Side Encryption (SSE) with AES-128 encryption algorithm is incorrect because C loudtrail event log files are already encrypted usi ng the Amazon S3 server-side encryption (SSE) by defau lt. Additionally, SSE-S3 only uses the AES-256 encryption algorithm and not the AES-128. References: https://docs.aws.amazon.com/awscloudtrail/latest/us erguide/how-cloudtrail-works.html https://aws.amazon.com/blogs/aws/category/cloud-tra il/ Check out this AWS CloudTrail Cheat Sheet: https://tutorialsdojo.com/aws-cloudtrail/", "references": "" }, { "question": ": A company has an infrastructure that allows EC2 ins tances from a private subnet to fetch objects from Amazon S3 via a NAT Instance. The company's Solutio ns Architect was instructed to lower down the cost incurred by the current solution. 
How should the Solutions Architect redesign the arc hitecture in the most cost-efficient manner?", "options": [ "A. Remove the NAT instance and create an S3 gatew ay endpoint to access S3 objects.", "B. Remove the NAT instance and create an S3 inter face endpoint to access S3 objects.", "C. Replace the NAT instance with NAT Gateway to a ccess S3 objects.", "D. Use a smaller instance type for the NAT instan ce." ], "correct": "A. Remove the NAT instance and create an S3 gatew ay endpoint to access S3 objects.", "explanation": "Explanation A VPC endpoint enables you to privately connect you r VPC to supported AWS services and VPC endpoint services powered by PrivateLink without re quiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Insta nces in your VPC do not require public IP addresses to communicate with resources in the service. Traff ic between your VPC and the other services does not leave the Amazon network. Endpoints are virtual devices. They are horizontall y scaled, redundant, and highly available VPC components that allow communication between instanc es in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic. . 25 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam There are two types of VPC endpoints: interface end points and gateway endpoints. You should create the type of VPC endpoint required by the supported serv ice. As a rule of thumb, most AWS services use VPC Interface Endpoint except for S3 and DynamoDB, whic h use VPC Gateway Endpoint. There is no additional charge for using gateway end points. However, standard charges for data transfer and resource usage still apply. Let's assume you created a NAT gateway and you have an EC2 instance routing to the Internet through th e NAT gateway. Your EC2 instance behind the NAT gatew ay sends a 1 GB file to one of your S3 buckets. Certkingdom The EC2 instance, NAT gateway, and S3 Bucket are in the same region US East (Ohio), and the NAT gateway and EC2 instance are in the same availabili ty zone. Your cost will be calculated as follows: - NAT Gateway Hourly Charge: NAT Gateway is charged on an hourly basis. For example, the rate is $0.045 per hour in this region. - NAT Gateway Data Processing Charge: 1 GB data wen t through NAT gateway. The NAT Gateway Data Processing charge is applied and will result i n a charge of $0.045. - Data Transfer Charge: This is the standard EC2 Da ta Transfer charge. 1 GB data was transferred from the EC2 instance to S3 via the NAT gateway. There w as no charge for the data transfer from the EC2 instance to S3 as it is Data Transfer Out to Amazon EC2 to S3 in the same region. There was also no charge for the data transfer between the NAT Gatewa y and the EC2 instance since the traffic stays in t he . 26 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam same availability zone using private IP addresses. There will be a data transfer charge between your N AT Gateway and EC2 instance if they are in the differe nt availability zone. In summary, your charge will be $0.045 for 1 GB of data processed by the NAT gateway and a charge of $0.045 per hour will always apply once the NAT gate way is provisioned and available. The data transfer has no charge in this example. However, if you send the file to a non-AWS Internet location instead, t here will be a data transfer charge as it is data transf er out from Amazon EC2 to the Internet. 
To avoid the NAT Gateway Data Processing charge in this example, you could set up a Gateway Type VPC endpoint and route the traffic to/from S3 throu gh the VPC endpoint instead of going through the NA T Gateway. There is no data processing or hourly charges for u sing Gateway Type VPC endpoints. Hence, the correct answer is the option that says: Remove the NAT instance and create an S3 gateway endpoint to access S3 objects. The option that says: Replace the NAT instance with NAT Gateway to access S3 objects is incorrect. A NAT Gateway is just a NAT instance that is managed for you by AWS. It provides less operational management and you pay for the hour that your NAT G ateway is running. This is not the most effective solution since you will still pay for the idle time . The option that says: Use a smaller instance type f or the NAT instance is incorrect. Although this mig ht reduce the cost, it still is not the most cost-effi cient solution. An S3 Gateway endpoint is still the best solution because it comes with no additional charge . The option that says: Remove the NAT instance and c reate an S3 interface endpoint to access S3 objects is incorrect. An interface endpoint is an e lastic network interface with a private IP address from the Certkingdom IP address range of your subnet. Unlike a Gateway e ndpoint, you still get billed for the time your int erface endpoint is running and the GB data it has processe d. From a cost standpoint, using the S3 Gateway endpoint is the most favorable solution. References: https://docs.aws.amazon.com/vpc/latest/privatelink/ vpce-gateway.html https://aws.amazon.com/blogs/architecture/reduce-co st-and-increase-security-with-amazon-vpc-endpoints/ https://aws.amazon.com/vpc/pricing/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ . 27 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam", "references": "" }, { "question": ": An application is hosted on an EC2 instance with mu ltiple EBS Volumes attached and uses Amazon Neptune as its database. To improve data security, you encrypted all of the EBS volumes attached to th e instance to protect the confidential data stored in the volumes. Which of the following statements are true about en crypted Amazon Elastic Block Store volumes? (Select TWO.)", "options": [ "A. Snapshots are automatically encrypted.", "B. All data moving between the volume and the ins tance are encrypted.", "C. Snapshots are not automatically encrypted.", "D. The volumes created from the encrypted snapsho t are not encrypted." ], "correct": "", "explanation": "Explanation Amazon Elastic Block Store (Amazon EBS) provides bl ock level storage volumes for use with EC2 instances. EBS volumes are highly available and rel iable storage volumes that can be attached to any running instance that is in the same Availability Zone. EBS volumes that are attached to an EC2 instance are exposed as stor age volumes that persist independently from the life of the instance. Certkingdom . 
28 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam When you create an encrypted EBS volume and attach it to a supported instance type, the following type s of data are encrypted: - Data at rest inside the volume - All data moving between the volume and the instan ce - All snapshots created from the volume Certkingdom - All volumes created from those snapshots Encryption operations occur on the servers that hos t EC2 instances, ensuring the security of both data -at- rest and data-in-transit between an instance and it s attached EBS storage. You can encrypt both the bo ot and data volumes of an EC2 instance. References: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ AmazonEBS.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /EBSEncryption.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/ . 29 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam", "references": "" }, { "question": ": A Solutions Architect is working for a multinationa l telecommunications company. The IT Manager wants to consolidate their log streams including the acce ss, application, and security logs in one single sy stem. Once consolidated, the company will analyze these logs i n real-time based on heuristics. There will be some time in the future where the company will need to valida te heuristics, which requires going back to data sa mples extracted from the last 12 hours. What is the best approach to meet this requirement?", "options": [ "A. First, configure Amazon Cloud Trail to receive custom logs and then use EMR to apply", "B. First, send all the log events to Amazon SQS t hen set up an Auto Scaling group of EC2", "C. First, set up an Auto Scaling group of EC2 ser vers then store the logs on Amazon S3 then", "D. First, send all of the log events to Amazon Ki nesis then afterwards, develop a client process" ], "correct": "D. First, send all of the log events to Amazon Ki nesis then afterwards, develop a client process", "explanation": "Explanation/Reference: Explanation In this scenario, you need a service that can colle ct, process, and analyze data in real-time hence, t he right service to use here is Amazon Kinesis. Certkingdom Amazon Kinesis makes it easy to collect, process, a nd analyze real-time, streaming data so you can get timely insights and react quickly to new informatio n. Amazon Kinesis offers key capabilities to cost- effectively process streaming data at any scale, al ong with the flexibility to choose the tools that b est suit the requirements of your application. . 30 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine le arning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before the process ing can begin. All other options are incorrect since these service s do not have real-time processing capability, unli ke Amazon Kinesis.", "references": "https://aws.amazon.com/kinesis/ Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/" }, { "question": ": A company plans to deploy a Docker-based batch appl ication in AWS. The application will be used to process both mission-critical data as well as non-e ssential batch jobs. 
Which of the following is the most cost-effective o ption to use in implementing this architecture?", "options": [ "A. Use ECS as the container management service th en set up a combination of Reserved and", "B. Use ECS as the container management service th en set up Reserved EC2 Instances for", "C. Use ECS as the container management service th en set up On-Demand EC2 Instances for", "D. Use ECS as the container management service th en set up Spot EC2 Instances for" ], "correct": "A. Use ECS as the container management service th en set up a combination of Reserved and", "explanation": "Explanation Amazon ECS lets you run batch workloads with manage d or custom schedulers on Amazon EC2 On- Demand Instances, Reserved Instances, or Spot Insta nces. You can launch a combination of EC2 instances to set up a cost-effective architecture depending o n your workload. You can launch Reserved EC2 instances to process the mission-critical data and Spot EC2 instances for processing non-essential bat ch jobs. . 31 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam There are two different charge models for Amazon El astic Container Service (ECS): Fargate Launch Type Model and EC2 Launch Type Model. With Fargate, you pay for the amount of vCPU and memory resources that your containerized application reque sts while for EC2 launch type model, there is no additional charge. You pay for AWS resources (e.g. EC2 instances or EBS volumes) you create to store a nd run your application. You only pay for what you use , as you use it; there are no minimum fees and no upfront commitments. Certkingdom In this scenario, the most cost-effective solution is to use ECS as the container management service t hen set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively. You can use Scheduled Rese rved Instances (Scheduled Instances) which enables you to purchase capacity reservations that recur on a daily, weekly , or monthly basis, with a specified start time and duration, for a one-year term. This will ensure tha t you have an uninterrupted compute capacity to pro cess your mission-critical batch jobs. Hence, the correct answer is the option that says: Use ECS as the container management service then se t up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non- essential batch jobs respectively. Using ECS as the container management service then setting up Reserved EC2 Instances for processing both mission-critical and non-essential batch jobs is incorrect because processing the non- . 32 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam essential batch jobs can be handled much cheaper by using Spot EC2 instances instead of Reserved Instances. Using ECS as the container management service then setting up On-Demand EC2 Instances for processing both mission-critical and non-essential batch jobs is incorrect because an On-Demand instance costs more compared to Reserved and Spot E C2 instances. Processing the non-essential batch jo bs can be handled much cheaper by using Spot EC2 insta nces instead of On-Demand instances. Using ECS as the container management service then setting up Spot EC2 Instances for processing both mission-critical and non-essential batch jobs is incorrect because although this set up provides the cheapest solution among other options, it will not be able to meet the required workload. 
Using Spot instances to process mission-critical workloads is not suitable since these types of instances can be terminated by AWS at any time, which can affect cri tical processing. References: https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/Welcome.html https://aws.amazon.com/ec2/spot/containers-for-less /get-started/ Check out this Amazon ECS Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-container- service-amazon-ecs/ AWS Container Services Overview: https://www.youtube.com/watch?v=5QBgDX7O7pw Certkingdom", "references": "" }, { "question": ": A financial analytics application that collects, pr ocesses and analyzes stock data in real-time is usi ng Kinesis Data Streams. The producers continually pus h data to Kinesis Data Streams while the consumers process the data in real time. In Amazon Kinesis, w here can the consumers store their results? (Select TWO.)", "options": [ "A. Glacier Select", "B. Amazon Athena", "C. Amazon Redshift", "D. Amazon S3" ], "correct": "", "explanation": "Explanation In Amazon Kinesis, the producers continually push d ata to Kinesis Data Streams and the consumers process the data in real time. Consumers (such as a custom application running on Amazon EC2, or an Amazon Kinesis Data Firehose delivery stream) can s tore their results using an AWS service such as Amazon DynamoDB, Amazon Redshift, or Amazon S3. Hence, Amazon S3 and Amazon Redshift are the correc t answers. The following diagram illustrates the high-level architecture of Kinesis Data Streams: Glacier Select is incorrect because this is not a s torage service. It is primarily used to run queries directly Certkingdom on data stored in Amazon Glacier, retrieving only t he data you need out of your archives to use for analytics. AWS Glue is incorrect because this is not a storage service. It is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. Amazon Athena is incorrect because this is just an interactive query service that makes it easy to ana lyze data in Amazon S3 using standard SQL. It is not a s torage service where you can store the results proc essed by the consumers.", "references": "http://docs.aws.amazon.com/streams/latest/dev/key-c oncepts.html Amazon Redshift Overview: . 34 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam https://youtu.be/jlLERNzhHOg Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/" }, { "question": ": A client is hosting their company website on a clus ter of web servers that are behind a public-facing load balancer. The client also uses Amazon Route 53 to m anage their public DNS. How should the client configure the DNS zone apex r ecord to point to the load balancer?", "options": [ "A. Create an alias for CNAME record to the load b alancer DNS name.", "B. Create a CNAME record pointing to the load bal ancer DNS name.", "C. Create an A record aliased to the load balance r DNS name.", "D. Create an A record pointing to the IP address of the load balancer." ], "correct": "C. Create an A record aliased to the load balance r DNS name.", "explanation": "Explanation Route 53's DNS implementation connects user request s to infrastructure running inside (and outside) of Amazon Web Services (AWS). For example, if you have multiple web servers running on EC2 instances behind an Elastic Load Balancing load balancer, Rou te 53 will route all traffic addressed to your webs ite (e.g. 
www.tutorialsdojo.com) to the load balancer DNS name (e.g. elbtutorialsdojo123.elb.amazonaws.com).
Additionally, Route 53 supports the alias resource record set, which lets you map your zone apex (e.g. tutorialsdojo.com) DNS name to your load balancer DNS name. IP addresses associated with Elastic Load Balancing can change at any time due to scaling or software updates. Route 53 responds to each request for an Alias resource record set with one IP address for the load balancer.
Creating an A record pointing to the IP address of the load balancer is incorrect. You should be using an Alias record pointing to the DNS name of the load balancer since the IP address of the load balancer can change at any time.
Creating a CNAME record pointing to the load balancer DNS name and creating an alias for CNAME record to the load balancer DNS name are incorrect because CNAME records cannot be created for your zone apex. You should create an alias record at the top node of a DNS namespace, which is also known as the zone apex. For example, if you register the DNS name tutorialsdojo.com, the zone apex is tutorialsdojo.com. You can't create a CNAME record directly for tutorialsdojo.com, but you can create an alias record for tutorialsdojo.com that routes traffic to www.tutorialsdojo.com.
References:
http://docs.aws.amazon.com/govcloud-us/latest/UserGuide/setting-up-route53-zoneapex-elb.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html
Check out this Amazon Route 53 Cheat Sheet:
https://tutorialsdojo.com/amazon-route-53/", "references": "" }, { "question": ": A company plans to use Route 53 instead of an ELB to load balance the incoming requests to the web application. The system is deployed to two EC2 instances to which the traffic needs to be distributed. You want to set a specific percentage of traffic to go to each instance. Which routing policy would you use?", "options": [ "A. Failover", "B. Weighted", "C. Geolocation", "D. Latency" ], "correct": "B. Weighted", "explanation": "Explanation
Weighted routing lets you associate multiple resources with a single domain name (tutorialsdojo.com) or subdomain name (portal.tutorialsdojo.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software. You can set a specific percentage of traffic to be allocated to each resource by specifying the weights.
For example, if you want to send a tiny portion of your traffic to one resource and the rest to another resource, you might specify weights of 1 and 255. The resource with a weight of 1 gets 1/(1+255) = 1/256th of the traffic, and the other resource gets 255/(1+255) = 255/256ths.
You can gradually change the balance by changing the weights. If you want to stop sending traffic to a resource, you can change the weight for that record to 0.
Hence, the correct answer is Weighted.
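A weighted record set for the two instances could be created along the following lines. This is a minimal boto3 sketch; the hosted zone ID, record name, and IP addresses are hypothetical placeholders, and weights of 70 and 30 give roughly a 70%/30% split.

```python
import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, ip_address, weight):
    # One weighted A record per EC2 instance; the records share the same name and type
    # and are distinguished by SetIdentifier.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.tutorialsdojo.com",
            "Type": "A",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip_address}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",
    ChangeBatch={
        "Changes": [
            weighted_record("instance-a", "203.0.113.10", 70),
            weighted_record("instance-b", "203.0.113.20", 30),
        ]
    },
)
```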
Latency is incorrect because you cannot set a specific percentage of traffic for the two EC2 instances with this routing policy. A latency routing policy is primarily used when you have resources in multiple AWS Regions and you need to automatically route traffic to the AWS Region that provides the best latency (the lowest round-trip time).
Failover is incorrect because this type is commonly used if you want to set up an active-passive failover configuration for your web application.
Geolocation is incorrect because it is more suitable for routing traffic based on the location of your users, and not for distributing a specific percentage of traffic to two AWS resources.", "references": "http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
Amazon Route 53 Overview:
https://youtu.be/Su308t19ubY
Check out this Amazon Route 53 Cheat Sheet:
https://tutorialsdojo.com/amazon-route-53/" }, { "question": ": A web application that processes sensitive financial information is hosted on an EC2 instance launched in a private subnet. All of the data are stored in an Amazon S3 bucket. The financial information is accessed by users over the Internet. The security team of the company is concerned that the Internet connectivity to Amazon S3 is a security risk. In this scenario, what will you do to resolve this security vulnerability in the most cost-effective manner?", "options": [ "A. Change the web architecture to access the financial data in S3 through an interface VPC", "B. Change the web architecture to access the financial data hosted in your S3 bucket by", "C. Change the web architecture to access the financial data through a Gateway VPC Endpoint.", "D. Change the web architecture to access the financial data in your S3 bucket through a VPN" ], "correct": "C. Change the web architecture to access the financial data through a Gateway VPC Endpoint.", "explanation": "Explanation
Take note that your VPC lives within a larger AWS network, and services such as S3, DynamoDB, RDS, and many others are located outside of your VPC but still within the AWS network. By default, the connection that your VPC uses to reach your S3 bucket or any other service traverses the public Internet via your Internet Gateway.
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
There are two types of VPC endpoints: interface endpoints and gateway endpoints. You have to create the type of VPC endpoint required by the supported service. An interface endpoint is an elastic network interface with a private IP address that serves as an entry point for traffic destined to a supported service. A gateway endpoint is a gateway that is a target for a specified route in your route table, used for traffic destined to a supported AWS service.
Hence, the correct answer is: Change the web architecture to access the financial data through a Gateway VPC Endpoint.
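Provisioning the gateway endpoint itself is a single API call; here is a minimal boto3 sketch in which the region, VPC ID, and route table ID are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3. Associating it with the private subnet's route table adds
# a route so that S3-bound traffic stays on the AWS network instead of the Internet.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    VpcEndpointType="Gateway",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0aaa1111bbb22233"],
)
```

Because gateway endpoints carry no hourly or data processing charge, this also keeps the solution cost-effective.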
The option that says: Changing the web architecture to access the financial data in your S3 bucket through a VPN connection is incorrect because a VPN connection still goes through the public Internet. You have to use a VPC Endpoint in this scenario and not VPN, to privately connect your VPC to supporte d AWS services such as S3. . 40 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam The option that says: Changing the web architecture to access the financial data hosted in your S3 bucket by creating a custom VPC endpoint service is incorrect because a \"VPC endpoint service\" is quite different from a \"VPC endpoint\". With the VPC endpoint service, you are the service provider whe re you can create your own application in your VPC and configure it as an AWS PrivateLink-powered service (referred to as an endpoint service). Other AWS pri ncipals can create a connection from their VPC to y our endpoint service using an interface VPC endpoint. The option that says: Changing the web architecture to access the financial data in S3 through an interface VPC endpoint, which is powered by AWS Pri vateLink is incorrect. Although you can use an Interface VPC Endpoint to satisfy the requirement, this type entails an associated cost, unlike a Gate way VPC Endpoint. Remember that you won't get billed if you use a Gateway VPC endpoint for your Amazon S3 bucket, unlike an Interface VPC endpoint that is billed for hourly usage and data processing charge s. Take note that the scenario explicitly asks for the most cost-effective solution. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/vpc-endpoints.html https://docs.aws.amazon.com/vpc/latest/userguide/vp ce-gateway.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A news company is planning to use a Hardware Securi ty Module (CloudHSM) in AWS for secure key storage of their web applications. You have launche d the CloudHSM cluster but after just a few hours, a Certkingdom support staff mistakenly attempted to log in as the administrator three times using an invalid passwor d in the Hardware Security Module. This has caused the HSM to be zero ized, which means that the encryption keys on it ha ve been wiped. Unfortunately, you did not have a copy of the keys stored anywhere else. How can you obtain a new copy of the keys that you have stored on Hardware Security Module?", "options": [ "A. Contact AWS Support and they will provide you a copy of the keys.", "B. Restore a snapshot of the Hardware Security Mo dule.", "C. Use the Amazon CLI to get a copy of the keys.", "D. The keys are lost permanently if you did not h ave a copy." ], "correct": "D. The keys are lost permanently if you did not h ave a copy.", "explanation": "Explanation Attempting to log in as the administrator more than twice with the wrong password zeroizes your HSM appliance. When an HSM is zeroized, all keys, certi ficates, and other data on the HSM is destroyed. Yo u can use your cluster's security group to prevent an unauthenticated user from zeroizing your HSM. Amazon does not have access to your keys nor to the credentials of your Hardware Security Module (HSM) and therefore has no way to recover your keys if yo u lose your credentials. Amazon strongly recommends that you use two or more HSMs in separate Availabil ity Zones in any production CloudHSM Cluster to avoid loss of cryptographic keys. 
Certkingdom Refer to the CloudHSM FAQs for reference: Q: Could I lose my keys if a single HSM instance fa ils? Yes. It is possible to lose keys that were created since the most recent daily backup if the CloudHSM cluster that you are using fails and you are not us ing two or more HSMs. Amazon strongly recommends that you use two or more HSMs, in separate Availabi lity Zones, in any production CloudHSM Cluster to avoid loss of cryptographic keys. Q: Can Amazon recover my keys if I lose my credenti als to my HSM? No. Amazon does not have access to your keys or cre dentials and therefore has no way to recover your keys if you lose your credentials. References: . 42 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam https://aws.amazon.com/premiumsupport/knowledge-cen ter/stop-cloudhsm/ https://aws.amazon.com/cloudhsm/faqs/ https://d1.awsstatic.com/whitepapers/Security/secur ity-of-aws-cloudhsm-backups.pdf", "references": "" }, { "question": ": A company deployed a web application that stores st atic assets in an Amazon Simple Storage Service (S3 ) bucket. The Solutions Architect expects the S3 buck et to immediately receive over 2000 PUT requests an d 3500 GET requests per second at peak hour. What should the Solutions Architect do to ensure op timal performance?", "options": [ "A. Do nothing. Amazon S3 will automatically manag e performance at this scale.", "B. Use Byte-Range Fetches to retrieve multiple ra nges of an object data per GET request.", "C. Add a random prefix to the key names.", "D. Use a predictable naming scheme in the key nam es such as sequential numbers or date time" ], "correct": "A. Do nothing. Amazon S3 will automatically manag e performance at this scale.", "explanation": "Explanation Amazon S3 now provides increased performance to sup port at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, whi ch can save significant processing time for no addi tional charge. Each S3 prefix can support these request ra tes, making it simple to increase performance Certkingdom significantly. Applications running on Amazon S3 today will enjoy this performance improvement with no changes, and customers building new applications on S3 do not ha ve to make any application customizations to achiev e this performance. Amazon S3's support for parallel requests means you can scale your S3 performance by the factor of your compute cluster, without making any customizations to your application. Performance scales per prefix, so you can use as many prefixes as you need in parallel to achieve the required throughput. There are no limits to the number of pr efixes. . 43 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam This S3 request rate performance increase removes a ny previous guidance to randomize object prefixes t o achieve faster performance. That means you can now use logical or sequential naming patterns in S3 obj ect naming without any performance implications. This i mprovement is now available in all AWS Regions. Using Byte-Range Fetches to retrieve multiple range s of an object data per GET request is incorrect because although a Byte-Range Fetch helps you achie ve higher aggregate throughput, Amazon S3 does not support retrieving multiple ranges of data per GET request. Using the Range HTTP header in a GET Objec t request, you can fetch a byte-range from an object, transferring only the specified portion. 
You can u se concurrent connections to Amazon S3 to fetch differ ent byte ranges from within the same object. Fetchi ng smaller ranges of a large object also allows your a pplication to improve retry times when requests are interrupted. Certkingdom Adding a random prefix to the key names is incorrec t. Adding a random prefix is not required in this scenario because S3 can now scale automatically to adjust perfomance. You do not need to add a random prefix anymore for this purpose since S3 has increa sed performance to support at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which covers the workload in the scenario. Using a predictable naming scheme in the key names such as sequential numbers or date time sequences is incorrect because Amazon S3 already ma intains an index of object key names in each AWS region. S3 stores key names in alphabetical order. The key name dictates which partition the key is st ored in. Using a sequential prefix increases the likelih ood that Amazon S3 will target a specific partition for a large number of your keys, overwhelming the I/O cap acity of the partition. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/req uest-rate-perf-considerations.html . 44 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam https://d1.awsstatic.com/whitepapers/AmazonS3BestPr actices.pdf https://docs.aws.amazon.com/AmazonS3/latest/dev/Get tingObjectsUsingAPIs.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A financial company wants to store their data in Am azon S3 but at the same time, they want to store th eir frequently accessed data locally on their on-premis es server. This is due to the fact that they do not have the option to extend their on-premises storage, which i s why they are looking for a durable and scalable s torage service to use in AWS. What is the best solution fo r this scenario?", "options": [ "A. Use the Amazon Storage Gateway - Cached Volume s.", "B. Use both Elasticache and S3 for frequently acc essed data.", "C. Use Amazon Glacier.", "D. Use a fleet of EC2 instance with EBS volumes t o store the commonly used data.", "A. Upload the data to S3 then use a lifecycle pol icy to transfer data to S3 One Zone-IA.", "B. Upload the data to Amazon FSx for Windows File Server using the Server Message Block", "C. Upload the data to S3 then use a lifecycle pol icy to transfer data to S3-IA.", "D. Upload the data to S3 and set a lifecycle poli cy to transition data to Glacier after 0 days." ], "correct": "D. Upload the data to S3 and set a lifecycle poli cy to transition data to Glacier after 0 days.", "explanation": "Explanation Glacier is a cost-effective archival solution for l arge amounts of data. Bulk retrievals are S3 Glacie r's lowest-cost retrieval option, enabling you to retri eve large amounts, even petabytes, of data inexpens ively in a day. Bulk retrievals typically complete within 5 12 hours. You can specify an absolute or relati ve time period (including 0 days) after which the spec ified Amazon S3 objects should be transitioned to Amazon Glacier. Certkingdom Hence, the correct answer is the option that says: Upload the data to S3 and set a lifecycle policy to transition data to Glacier after 0 days. Glacier has a management console that you can use t o create and delete vaults. However, you cannot directly upload archives to Glacier by using the ma nagement console. 
To upload data such as photos, videos, and other documents, you must either use th e AWS CLI or write code to make requests by using either the REST API directly or by using the AWS SD Ks. Take note that uploading data to the S3 Console and setting its storage class of \"Glacier\" is a differ ent story as the proper way to upload data to Glacier is stil l via its API or CLI. In this way, you can set up y our vaults and configure your retrieval options. If you uploaded your data using the S3 console then it wi ll be managed via S3 even though it is internally using a Glacier storage class. . 46 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Uploading the data to S3 then using a lifecycle pol icy to transfer data to S3-IA is incorrect because using Glacier would be a more cost-effective soluti on than using S3-IA. Since the required retrieval p eriod should not exceed more than a day, Glacier would be the best choice. Uploading the data to Amazon FSx for Windows File S erver using the Server Message Block (SMB) protocol is incorrect because this option costs mor e than Amazon Glacier, which is more suitable for storing infrequently accessed data. Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessi ble over the industry-standard Server Message Block (SMB) protocol. Uploading the data to S3 then using a lifecycle pol icy to transfer data to S3 One Zone-IA is incorrect because with S3 One Zone-IA, the data will only be stored in a single availability zone and thus, this storage solution is not durable. It also costs more compared to Glacier. References: https://aws.amazon.com/glacier/faqs/ https://docs.aws.amazon.com/AmazonS3/latest/dev/obj ect-lifecycle-mgmt.html https://docs.aws.amazon.com/amazonglacier/latest/de v/uploading-an-archive.html Amazon S3 and S3 Glacier Overview: https://www.youtube.com/watch?v=1ymyeN2tki4 Check out this Amazon S3 Glacier Cheat Sheet: https://tutorialsdojo.com/amazon-glacier/ Certkingdom", "references": "https://aws.amazon.com/storagegateway/faqs/ . 45 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/ QUESTION 26 : A company has 10 TB of infrequently accessed financ ial data files that would need to be stored in AWS. These data would be accessed infrequently during sp ecific weeks when they are retrieved for auditing purposes. The retrieval time is not strict as long as it does not exceed 24 hours. Which of the following would be a secure, durable, and cost-effective solution for this scenario?" }, { "question": "A company has an On-Demand EC2 instance with an att ached EBS volume. There is a scheduled job that creates a snapshot of this EBS volume every midnigh t at 12 AM when the instance is not used. One night , there has been a production incident where you need to perform a c hange on both the instance and on the EBS volume at the same time when the snapshot is currently taking place. Which of the following scenario is true when it com es to the usage of an EBS volume while the snapshot is in progress?", "options": [ "A. The EBS volume can be used in read-only mode w hile the snapshot is in progress.", "B. The EBS volume cannot be used until the snapsh ot completes.", "C. The EBS volume cannot be detached or attached to an EC2 instance until the snapshot", "D. The EBS volume can be used while the snapshot is in progress." 
], "correct": "D. The EBS volume can be used while the snapshot is in progress.", "explanation": "Explanation certkingdom. Snapshots occur asynchronously; the point-in-time s napshot is created immediately, but the status of t he snapshot is pending until the snapshot is complete (when all of the modified blocks have been transfer red to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where man y blocks have changed. certkingdom. certkingdom. certkingdom. Certkingdom certkingdom. certkingdom. certkingdom. certkingdom. . 48 of 128 certkingdom. certkingdom. Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the vol ume hence, you can still use the EBS volume normally. When you create an EBS volume based on a snapshot, the new volume begins as an exact replica of the original volume that was used to create the snapsho t. The replicated volume loads data lazily in the background so that you can begin using it immediate ly. If you access data that hasn't been loaded yet, the volume immediately downloads the requested data fro m Amazon S3, and then continues loading the rest of the volume's data in the background. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ebs-creating-snapshot.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /EBSSnapshots.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/", "references": "" }, { "question": ": In a startup company you are working for, you are a sked to design a web application that requires a No SQL database that has no limit on the storage size for a given table. The startup is still new in the mark et and it has very limited human resources who can take care of the database infrastructure. Which is the most suitable service that you can imp lement that provides a fully managed, scalable and highly available NoSQL service? Certkingdom", "options": [ "A. SimpleDB", "B. Amazon Neptune", "C. DynamoDB", "D. Amazon Aurora" ], "correct": "C. DynamoDB", "explanation": "Explanation The term \"fully managed\" means that Amazon will man age the underlying infrastructure of the service hence, you don't need an additional human resource to support or maintain the service. Therefore, Amaz on DynamoDB is the right answer. Remember that Amazon RDS is a managed service but not \"fully managed\" as you still have the option to maintain a nd configure the underlying server of the database. . 49 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Amazon DynamoDB is a fast and flexible NoSQL databa se service for all applications that need consisten t, single-digit millisecond latency at any scale. It i s a fully managed cloud database and supports both document and key-value store models. Its flexible d ata model, reliable performance, and automatic scal ing of throughput capacity make it a great fit for mobi le, web, gaming, ad tech, IoT, and many other applications. Amazon Neptune is incorrect because this is primari ly used as a graph database. Amazon Aurora is incorrect because this is a relati onal database and not a NoSQL database. certkingdom. SimpleDB is incorrect. Although SimpleDB is also a highly available and scalable NoSQL database, it ha s a limit on the request capacity or storage size for a given table, unlike DynamoDB.", "references": "https://aws.amazon.com/dynamodb/ certkingdom. 
"references": "https://aws.amazon.com/dynamodb/
Check out this Amazon DynamoDB Cheat Sheet: https://tutorialsdojo.com/amazon-dynamodb/
Amazon DynamoDB Overview: https://www.youtube.com/watch?v=3ZOyUNIeorU" }, { "question": ": A leading e-commerce company is in need of a storage solution that can be simultaneously accessed by 1000 Linux servers in multiple Availability Zones. The servers are hosted in EC2 instances that use a hierarchical directory structure via the NFSv4 protocol. The service should be able to handle the rapidly changing data at scale while still maintaining high performance. It should also be highly durable and highly available whenever the servers pull data from it, with little need for management.
As the Solutions Architect, which of the following services is the most cost-effective choice that you should use to meet the above requirement?", "options": [ "A. EFS", "B. S3", "C. EBS", "D. Storage Gateway" ], "correct": "A. EFS", "explanation": "Explanation
Amazon Web Services (AWS) offers cloud storage services to support a wide range of storage workloads such as EFS, S3, and EBS. You have to understand when you should use Amazon EFS, Amazon S3, and Amazon Elastic Block Store (EBS) based on the specific workloads. In this scenario, the keywords are rapidly changing data and 1000 Linux servers.
Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances. EFS provides the same level of high availability and high scalability as S3; however, this service is more suitable for scenarios where it is required to have a POSIX-compatible file system or if you are storing rapidly changing data.
Data that must be updated very frequently might be better served by storage solutions that take into account read and write latencies, such as Amazon EBS volumes, Amazon RDS, Amazon DynamoDB, Amazon EFS, or relational databases running on Amazon EC2.
Amazon EBS is a block-level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance.
Amazon S3 is an object storage service. Amazon S3 makes data available through an Internet API that can be accessed anywhere.
In this scenario, EFS is the best answer. As stated above, Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances. EFS provides the performance, durability, high availability, and storage capacity needed by the 1000 Linux servers in the scenario.
S3 is incorrect because although this provides the same level of high availability and high scalability as EFS, this service is not suitable for storing data that is rapidly changing, as mentioned in the explanation above. It is still more effective to use EFS as it offers strong consistency and file locking, which the S3 service lacks.
EBS is incorrect because an EBS Volume cannot be shared by multiple instances.
Storage Gateway is incorrect because this is primarily used to extend the storage of your on-premises data center to your AWS Cloud.
References:
https://docs.aws.amazon.com/efs/latest/ug/how-it-works.html
https://aws.amazon.com/efs/features/
https://d1.awsstatic.com/whitepapers/AWS%20Storage%20Services%20Whitepaper-v9.pdf#page=9
Check out this Amazon EFS Cheat Sheet: https://tutorialsdojo.com/amazon-efs/", "references": "" }, { "question": ": A company has an application hosted in an Amazon ECS Cluster behind an Application Load Balancer. The Solutions Architect is building a sophisticated web filtering solution that allows or blocks web requests based on the country that the requests originate from. However, the solution should still allow specific IP addresses from that country.
Which combination of steps should the Architect implement to satisfy this requirement? (Select TWO.)", "options": [ "A. In the Application Load Balancer, create a listener rule that explicitly allows requests from", "B. Add another rule in the AWS WAF web ACL with a geo match condition that blocks", "C. Place a Transit Gateway in front of the VPC where the application is hosted and set up", "D. Using AWS WAF, create a web ACL with a rule that explicitly allows requests from" ], "correct": "", "explanation": "Explanation
If you want to allow or block web requests based on the country that the requests originate from, create one or more geo match conditions. A geo match condition lists countries that your requests originate from. Later in the process, when you create a web ACL, you specify whether to allow or block requests from those countries.
You can use geo match conditions with other AWS WAF Classic conditions or rules to build sophisticated filtering. For example, if you want to block certain countries but still allow specific IP addresses from that country, you could create a rule containing a geo match condition and an IP match condition. Configure the rule to block requests that originate from that country and do not match the approved IP addresses. As another example, if you want to prioritize resources for users in a particular country, you could include a geo match condition in two different rate-based rules. Set a higher rate limit for users in the preferred country and set a lower rate limit for all other users.
If you are using the CloudFront geo restriction feature to block a country from accessing your content, any request from that country is blocked and is not forwarded to AWS WAF Classic. So if you want to allow or block requests based on geography plus other AWS WAF Classic conditions, you should not use the CloudFront geo restriction feature. Instead, you should use an AWS WAF Classic geo match condition.
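The explanation above describes AWS WAF Classic. As a rough, non-authoritative sketch of the same idea on the current AWS WAFv2 API with boto3, the rule below blocks a country's traffic unless it originates from an approved IP set; the country code, IP set ARN, and all names are hypothetical.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Hypothetical IP set of approved addresses; the ARN would come from create_ip_set().
allowed_ip_set_arn = "arn:aws:wafv2:us-east-1:111122223333:regional/ipset/approved/EXAMPLE"

# Block traffic from one country unless it comes from the approved IP set.
block_country_rule = {
    "Name": "BlockCountryExceptApprovedIPs",
    "Priority": 0,
    "Statement": {
        "AndStatement": {
            "Statements": [
                {"GeoMatchStatement": {"CountryCodes": ["CN"]}},
                {"NotStatement": {
                    "Statement": {"IPSetReferenceStatement": {"ARN": allowed_ip_set_arn}}
                }},
            ]
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockCountryExceptApprovedIPs",
    },
}

wafv2.create_web_acl(
    Name="geo-filter-acl",
    Scope="REGIONAL",  # REGIONAL scope is required for an ALB association
    DefaultAction={"Allow": {}},
    Rules=[block_country_rule],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "geo-filter-acl",
    },
)
```

The resulting web ACL would then be associated with the Application Load Balancer (for example, via the wafv2 associate_web_acl call), which is why the REGIONAL scope is used in this sketch.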
Hence, the correct answers are:
- Using AWS WAF, create a web ACL with a rule that explicitly allows requests from approved IP addresses declared in an IP Set.
- Add another rule in the AWS WAF web ACL with a geo match condition that blocks requests that originate from a specific country.
The option that says: In the Application Load Balancer, create a listener rule that explicitly allows requests from approved IP addresses is incorrect because a listener rule just checks for connection requests using the protocol and port that you configure. It only determines how the load balancer routes the requests to its registered targets.
The option that says: Set up a geo match condition in the Application Load Balancer that blocks requests that originate from a specific country is incorrect because you can't configure a geo match condition in an Application Load Balancer. You have to use AWS WAF instead.
The option that says: Place a Transit Gateway in front of the VPC where the application is hosted and set up Network ACLs that block requests that originate from a specific country is incorrect because AWS Transit Gateway is simply a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway. Using this type of gateway is not warranted in this scenario. Moreover, Network ACLs are not suitable for blocking requests from a specific country. You have to use AWS WAF instead.
References:
https://docs.aws.amazon.com/waf/latest/developerguide/classic-web-acl-geo-conditions.html
https://docs.aws.amazon.com/waf/latest/developerguide/how-aws-waf-works.html
Check out this AWS WAF Cheat Sheet: https://tutorialsdojo.com/aws-waf/
AWS Security Services Overview - WAF, Shield, CloudHSM, KMS: https://www.youtube.com/watch?v=-1S-RdeAmMo", "references": "" }, { "question": ": A company plans to migrate a MySQL database from an on-premises data center to the AWS Cloud. This database will be used by a legacy batch application that has steady-state workloads in the morning but has its peak load at night for the end-of-day processing. You need to choose an EBS volume that can handle a maximum of 450 GB of data and can also be used as the system boot volume for your EC2 instance.
Which of the following is the most cost-effective storage type to use in this scenario?", "options": [ "A. Amazon EBS Throughput Optimized HDD (st1)", "B. Amazon EBS Provisioned IOPS SSD (io1)", "C. Amazon EBS General Purpose SSD (gp2)", "D. Amazon EBS Cold HDD (sc1)" ], "correct": "C. Amazon EBS General Purpose SSD (gp2)", "explanation": "Explanation
In this scenario, a legacy batch application with steady-state workloads requires a relational MySQL database. The EBS volume that you should use has to handle a maximum of 450 GB of data and can also be used as the system boot volume for your EC2 instance. Since HDD volumes cannot be used as a bootable volume, we can narrow down our options by selecting SSD volumes. In addition, SSD volumes are more suitable for transactional database workloads than HDD volumes.
General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time. AWS designs gp2 volumes to deliver the provisioned performance 99% of the time. A gp2 volume can range in size from 1 GiB to 16 TiB.
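For reference, a gp2 volume of the size described in the scenario could be provisioned with a call along these lines. This is a sketch only; the Availability Zone is a placeholder, and a boot volume would normally be created from the AMI's block device mapping rather than attached afterwards.

```python
import boto3

ec2 = boto3.client("ec2")

# 450 GiB General Purpose SSD (gp2) volume in a placeholder Availability Zone.
volume = ec2.create_volume(
    AvailabilityZone="us-west-2a",
    Size=450,
    VolumeType="gp2",
)
print(volume["VolumeId"])
```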
Amazon EBS Provisioned IOPS SSD (io1) is incorrect because this is not the most cost-effective EBS type and is primarily used for critical business applications that require sustained IOPS performance.
Amazon EBS Throughput Optimized HDD (st1) is incorrect because this is primarily used for frequently accessed, throughput-intensive workloads. Although it is a low-cost HDD volume, it cannot be used as a system boot volume.
Amazon EBS Cold HDD (sc1) is incorrect. Although Amazon EBS Cold HDD provides a lower-cost HDD volume compared to General Purpose SSD, it cannot be used as a system boot volume.", "references": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#EBSVolumeTypes_gp2
Amazon EBS Overview - SSD vs HDD: https://www.youtube.com/watch?v=LW7x8wyLFvw
Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/" }, { "question": ": A loan processing application is hosted in a single On-Demand EC2 instance in your VPC. To improve the scalability of your application, you have to use Auto Scaling to automatically add new EC2 instances to handle a surge of incoming requests.
Which of the following items should be done in order to add an existing EC2 instance to an Auto Scaling group? (Select TWO.)", "options": [ "A. You have to ensure that the instance is launched in one of the Availability Zones defined in", "B. You must stop the instance first.", "C. You have to ensure that the AMI used to launch the instance still exists.", "D. You have to ensure that the instance is in a different Availability Zone as the Auto Scaling" ], "correct": "", "explanation": "Explanation
Amazon EC2 Auto Scaling provides you with an option to enable automatic scaling for one or more EC2 instances by attaching them to your existing Auto Scaling group. After the instances are attached, they become a part of the Auto Scaling group.
The instance that you want to attach must meet the following criteria:
- The instance is in the running state.
- The AMI used to launch the instance must still exist.
- The instance is not a member of another Auto Scaling group.
- The instance is launched into one of the Availability Zones defined in your Auto Scaling group.
- If the Auto Scaling group has an attached load balancer, the instance and the load balancer must both be in EC2-Classic or the same VPC. If the Auto Scaling group has an attached target group, the instance and the load balancer must both be in the same VPC.
Based on the above criteria, the following are the correct answers among the given options:
- You have to ensure that the AMI used to launch the instance still exists.
- You have to ensure that the instance is launched in one of the Availability Zones defined in your Auto Scaling group.
The option that says: You must stop the instance first is incorrect because you can directly add a running EC2 instance to an Auto Scaling group without stopping it.
The option that says: You have to ensure that the AMI used to launch the instance no longer exists is incorrect because it should be the other way around. The AMI used to launch the instance should still exist.
The option that says: You have to ensure that the instance is in a different Availability Zone as the Auto Scaling group is incorrect because the instance should be launched in one of the Availability Zones defined in your Auto Scaling group.
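Once the criteria above are met, attaching a running instance is a single API call. A minimal boto3 sketch with hypothetical identifiers:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Both the instance ID and the group name are placeholders.
autoscaling.attach_instances(
    InstanceIds=["i-0123456789abcdef0"],
    AutoScalingGroupName="loan-processing-asg",
)
```

When an instance is attached this way, the group's desired capacity increases by the number of instances attached.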
References:
http://docs.aws.amazon.com/autoscaling/latest/userguide/attach-instance-asg.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/scaling_plan.html
Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", "references": "" }, { "question": ": An e-commerce application is using a fanout messaging pattern for its order management system. For every order, it sends an Amazon SNS message to an SNS topic, and the message is replicated and pushed to multiple Amazon SQS queues for parallel asynchronous processing. A Spot EC2 instance retrieves the message from each SQS queue and processes the message. There was an incident where, while an EC2 instance was processing a message, the instance was abruptly terminated and the processing was not completed in time.
In this scenario, what happens to the SQS message?", "options": [ "A. The message will be sent to a Dead Letter Queue in AWS DataSync.", "B. The message is deleted and becomes duplicated in the SQS when the EC2 instance comes", "C. When the message visibility timeout expires, the message becomes available for processing", "D. The message will automatically be assigned to the same EC2 instance when it comes back" ], "correct": "C. When the message visibility timeout expires, the message becomes available for processing", "explanation": "Explanation
A "fanout" pattern is when an Amazon SNS message is sent to a topic and then replicated and pushed to multiple Amazon SQS queues, HTTP endpoints, or email addresses. This allows for parallel asynchronous processing. For example, you could develop an application that sends an Amazon SNS message to a topic whenever an order is placed for a product. Then, the Amazon SQS queues that are subscribed to that topic would receive identical notifications for the new order. The Amazon EC2 server instance attached to one of the queues could handle the processing or fulfillment of the order, while the other server instance could be attached to a data warehouse for analysis of all orders received.
When a consumer receives and processes a message from a queue, the message remains in the queue. Amazon SQS doesn't automatically delete the message. Because Amazon SQS is a distributed system, there's no guarantee that the consumer actually receives the message (for example, due to a connectivity issue, or due to an issue in the consumer application). Thus, the consumer must delete the message from the queue after receiving and processing it.
Immediately after the message is received, it remains in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The maximum is 12 hours.
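A short consumer sketch in boto3 illustrates this contract: the message stays hidden for the visibility timeout and is only removed when the consumer explicitly deletes it. The queue URL and timeout value are placeholders.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders-queue"  # placeholder

# Receive one message and hide it from other consumers for 60 seconds.
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    VisibilityTimeout=60,
)

for message in response.get("Messages", []):
    print("processing:", message["Body"])
    # Delete only after successful processing; if the consumer dies first,
    # the message reappears once the visibility timeout expires.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```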
The option that says: The message will automatically be assigned to the same EC2 instance when it comes back online within or after the visibility timeout is incorrect because the message will not be automatically assigned to the same EC2 instance once it is abruptly terminated. When the message visibility timeout expires, the message becomes available for processing by other EC2 instances.
The option that says: The message is deleted and becomes duplicated in the SQS when the EC2 instance comes online is incorrect because the message will not be deleted and won't be duplicated in the SQS queue when the EC2 instance comes online.
The option that says: The message will be sent to a Dead Letter Queue in AWS DataSync is incorrect because although the message could be programmatically sent to a Dead Letter Queue (DLQ), it won't be handled by AWS DataSync but by Amazon SQS instead. AWS DataSync is primarily used to simplify your migration with AWS. It makes it simple and fast to move large amounts of data online between on-premises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS).
References:
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
https://docs.aws.amazon.com/sns/latest/dg/sns-common-scenarios.html
Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/", "references": "" }, { "question": ": A company needs to use Amazon S3 to store irreproducible financial documents. For their quarterly reporting, the files are required to be retrieved after a period of 3 months. There will be some occasions when a surprise audit will be held, which requires access to the archived data that they need to present immediately.
What will you do to satisfy this requirement in a cost-effective way?", "options": [ "A. Use Amazon S3 Standard", "B. Use Amazon S3 Standard - Infrequent Access", "C. Use Amazon S3 Intelligent-Tiering", "D. Use Amazon Glacier Deep Archive" ], "correct": "B. Use Amazon S3 Standard - Infrequent Access", "explanation": "Explanation
In this scenario, the requirement is to have a storage option that is cost-effective and has the ability to access or retrieve the archived data immediately. The cost-effective options are Amazon Glacier Deep Archive and Amazon S3 Standard-Infrequent Access (Standard-IA). However, the former option is not designed for rapid retrieval of data, which is required for the surprise audit.
Hence, using Amazon Glacier Deep Archive is incorrect, and the best answer is to use Amazon S3 Standard - Infrequent Access.
Using Amazon S3 Standard is incorrect because the Standard storage class is not cost-efficient in this scenario. It costs more than Glacier Deep Archive and S3 Standard - Infrequent Access.
Using Amazon S3 Intelligent-Tiering is incorrect because the Intelligent-Tiering storage class entails an additional fee for monitoring and automation of each object in your S3 bucket vs. the Standard storage class and S3 Standard - Infrequent Access.
Amazon S3 Standard - Infrequent Access is an Amazon S3 storage class for data that is accessed less frequently but requires rapid access when needed. Standard-IA offers the high durability, throughput, and low latency of Amazon S3 Standard, with a low per-GB storage price and per-GB retrieval fee. This combination of low cost and high performance makes Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery. The Standard-IA storage class is set at the object level and can exist in the same bucket as Standard, allowing you to use lifecycle policies to automatically transition objects between storage classes without any application changes.
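Objects can be written straight into the Standard-IA storage class, or transitioned into it later with a lifecycle rule. A minimal sketch with boto3; the bucket name and key are illustrative only.

```python
import boto3

s3 = boto3.client("s3")

# Upload directly into S3 Standard-IA (bucket name and key are placeholders).
s3.put_object(
    Bucket="example-financial-documents",
    Key="audits/2023/q1-report.pdf",
    Body=b"quarterly report contents",
    StorageClass="STANDARD_IA",
)
```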
References:
https://aws.amazon.com/s3/storage-classes/
https://aws.amazon.com/s3/faqs/
Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/
S3 Standard vs S3 Standard-IA vs S3 One Zone-IA vs S3 Intelligent-Tiering: https://tutorialsdojo.com/s3-standard-vs-s3-standard-ia-vs-s3-one-zone-ia/", "references": "" }, { "question": ": A company has a running m5ad.large EC2 instance with a default attached 75 GB SSD instance store-backed volume. You shut it down and then start the instance. You noticed that the data which you have saved earlier on the attached volume is no longer available.
What might be the cause of this?", "options": [ "A. The EC2 instance was using EBS backed root volumes, which are ephemeral and only live", "B. The EC2 instance was using instance store volumes, which are ephemeral and only live for", "C. The volume of the instance was not big enough to handle all of the processing data.", "D. The instance was hit by a virus that wipes out all data." ], "correct": "B. The EC2 instance was using instance store volumes, which are ephemeral and only live for", "explanation": "Explanation
An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
An instance store consists of one or more instance store volumes exposed as block devices. The size of an instance store as well as the number of devices available varies by instance type. While an instance store is dedicated to a particular instance, the disk subsystem is shared among instances on a host computer.
The data in an instance store persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data in the instance store is lost under the following circumstances:
- The underlying disk drive fails
- The instance stops
- The instance terminates", "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
Amazon EC2 Overview: https://www.youtube.com/watch?v=7VsGIHT_jQE
Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/" }, { "question": ": A company has several microservices that send messages to an Amazon SQS queue and a backend application that polls the queue to process the messages.
The company also has a Service Level Agreement (SLA) which defines the acceptable amount of time that can elapse from the point when the messages are received until a response is sent. The backend operations are I/O-intensive as the number of messages is constantly growing, causing the company to miss its SLA. The Solutions Architect must implement a new architecture that improves the application's processing time and load management.
Which of the following is the MOST effective solution that can satisfy the given requirement?", "options": [ "A. Create an AMI of the backend application's EC2 instance and launch it to a cluster", "B. Create an AMI of the backend application's EC2 instance. Use the image to set up an Auto", "C. Create an AMI of the backend application's EC2 instance and replace it with a larger", "D. Create an AMI of the backend application's EC2 instance. Use the image to set up an Auto" ], "correct": "D. Create an AMI of the backend application's EC2 instance. Use the image to set up an Auto", "explanation": "Explanation
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.
The ApproximateAgeOfOldestMessage metric is useful when applications have time-sensitive messages and you need to ensure that messages are processed within a specific time period. You can use this metric to set Amazon CloudWatch alarms that issue alerts when messages remain in the queue for extended periods of time. You can also use alerts to take action, such as increasing the number of consumers to process messages more quickly.
With a target tracking scaling policy, you can scale (increase or decrease capacity) a resource based on a target value for a specific CloudWatch metric. To create a custom metric for this policy, you need to use the AWS CLI or AWS SDKs. Take note that you need to create an AMI from the instance first before you can create an Auto Scaling group to scale the instances based on the ApproximateAgeOfOldestMessage metric.
Hence, the correct answer is: Create an AMI of the backend application's EC2 instance. Use the image to set up an Auto Scaling Group and configure a target tracking scaling policy based on the ApproximateAgeOfOldestMessage metric.
The option that says: Create an AMI of the backend application's EC2 instance. Use the image to set up an Auto Scaling Group and configure a target tracking scaling policy based on the CPUUtilization metric with a target value of 80% is incorrect. Although this will improve the backend processing, a scaling policy based on the CPUUtilization metric is not meant for time-sensitive messages where you need to ensure that the messages are processed within a specific time period. It will only trigger the scale-out activities based on the CPU utilization of the current instances, and not based on the age of the message, which is a crucial factor in meeting the SLA. To satisfy the requirement in the scenario, you should use the ApproximateAgeOfOldestMessage metric.
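A sketch of such a target tracking policy is shown below using boto3. The group name, queue dimension, and the 300-second target are illustrative values, not figures from the scenario.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking on a custom CloudWatch metric published by SQS.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-backend-asg",
    PolicyName="keep-oldest-message-under-5-minutes",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateAgeOfOldestMessage",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "orders-queue"}],
            "Statistic": "Maximum",
        },
        # Scale out whenever the oldest message is older than ~300 seconds.
        "TargetValue": 300.0,
    },
)
```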
The option that says: Create an AMI of the backend application's EC2 instance and replace it with a larger instance size is incorrect because replacing the instance with a larger size won't be enough to dynamically handle workloads at any level. You need to implement an Auto Scaling group to automatically adjust the capacity of your computing resources.
The option that says: Create an AMI of the backend application's EC2 instance and launch it to a cluster placement group is incorrect because a cluster placement group is just a logical grouping of EC2 instances. Instead of launching the instance in a placement group, you must set up an Auto Scaling group for your EC2 instances and configure a target tracking scaling policy based on the ApproximateAgeOfOldestMessage metric.
References:
https://aws.amazon.com/about-aws/whats-new/2016/08/new-amazon-cloudwatch-metric-for-amazon-sqs-monitors-the-age-of-the-oldest-message/
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-available-cloudwatch-metrics.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/", "references": "" }, { "question": ": A company needs secure access to its Amazon RDS for MySQL database that is used by multiple applications. Each IAM user must use a short-lived authentication token to connect to the database.
Which of the following is the most suitable solution in this scenario?", "options": [ "A. Use AWS Secrets Manager to generate and store short-lived authentication tokens.", "B. Use an MFA token to access and connect to a database.", "C. Use IAM DB Authentication and create database accounts using the AWS-provided AWSAuthenticationPlugin plugin in MySQL.", "D. Use AWS SSO to access the RDS database." ], "correct": "C. Use IAM DB Authentication and create database accounts using the AWS-provided AWSAuthenticationPlugin plugin in MySQL.", "explanation": "Explanation
You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance.
An authentication token is a string of characters that you use instead of a password. After you generate an authentication token, it's valid for 15 minutes before it expires. If you try to connect using an expired token, the connection request is denied.
Since the scenario asks you to create a short-lived authentication token to access an Amazon RDS database, you can use IAM database authentication when connecting to a database instance. Authentication is handled by AWSAuthenticationPlugin--an AWS-provided plugin that works seamlessly with IAM to authenticate your IAM users.
IAM database authentication provides the following benefits:
- Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL).
- You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.
- For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.
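For completeness, this is roughly how an application would request one of these short-lived tokens with boto3. The endpoint, port, user name, and Region are placeholders; the returned token is then supplied as the password on an SSL-encrypted MySQL connection.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Generate a 15-minute authentication token instead of using a stored password.
token = rds.generate_db_auth_token(
    DBHostname="mydb.123456789012.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="app_user",
    Region="us-east-1",
)
print(token[:60], "...")
```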
Hence, the correct answer is the option that says: Use IAM DB Authentication and create database accounts using the AWS-provided AWSAuthenticationPlugin plugin in MySQL.
The option that says: Use AWS SSO to access the RDS database is incorrect because AWS SSO just enables you to centrally manage SSO access and user permissions for all of your AWS accounts managed through AWS Organizations.
The option that says: Use AWS Secrets Manager to generate and store short-lived authentication tokens is incorrect because AWS Secrets Manager is not a suitable service to create an authentication token to access an Amazon RDS database. It's primarily used to store passwords, secrets, and other sensitive credentials. It can't generate a short-lived token either. You have to use IAM DB Authentication instead.
The option that says: Use an MFA token to access and connect to a database is incorrect because you can't use an MFA token to connect to your database. You have to set up IAM DB Authentication instead.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.Connecting.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.DBAccounts.html
Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-management-iam/", "references": "" }, { "question": ": A company has a web application hosted on a fleet of EC2 instances located in two Availability Zones that are all placed behind an Application Load Balancer. As a Solutions Architect, you have to add a health check configuration to ensure your application is highly available.
Which health checks will you implement?", "options": [ "A. ICMP health check", "B. FTP health check", "C. HTTP or HTTPS health check", "D. TCP health check" ], "correct": "C. HTTP or HTTPS health check", "explanation": "Explanation
A load balancer takes requests from clients and distributes them across the EC2 instances that are registered with the load balancer. You can create a load balancer that listens on both the HTTP (80) and HTTPS (443) ports. If you specify that the HTTPS listener sends requests to the instances on port 80, the load balancer terminates the requests, and communication from the load balancer to the instances is not encrypted. If the HTTPS listener sends requests to the instances on port 443, communication from the load balancer to the instances is encrypted.
If your load balancer uses an encrypted connection to communicate with the instances, you can optionally enable authentication of the instances. This ensures that the load balancer communicates with an instance only if its public key matches the key that you specified to the load balancer for this purpose.
The type of ELB that is mentioned in this scenario is an Application Load Balancer. This is used if you want a flexible feature set for your web applications with HTTP and HTTPS traffic. However, it only supports two types of health checks: HTTP and HTTPS.
Hence, the correct answer is: HTTP or HTTPS health check.
ICMP health check and FTP health check are incorrect as these are not supported.
TCP health check is incorrect. A TCP health check is only offered in Network Load Balancers and Classic Load Balancers.
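As a hedged illustration, the HTTP health check is defined on the target group that the Application Load Balancer forwards to; the VPC ID, names, and the /health path below are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Target group with an HTTP health check for the web application fleet.
elbv2.create_target_group(
    Name="web-app-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=3,
)
```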
References:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
Check out this AWS Elastic Load Balancing (ELB) Cheat Sheet: https://tutorialsdojo.com/aws-elastic-load-balancing-elb/
EC2 Instance Health Check vs ELB Health Check vs Auto Scaling and Custom Health Check: https://tutorialsdojo.com/ec2-instance-health-check-vs-elb-health-check-vs-auto-scaling-and-custom-health-check/
Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/", "references": "" }, { "question": ": A startup needs to use a shared file system for its .NET web application running on an Amazon EC2 Windows instance. The file system must provide a high level of throughput and IOPS and must also integrate with Microsoft Active Directory.
Which is the MOST suitable service that you should use to achieve this requirement?", "options": [ "A. Amazon FSx for Windows File Server", "B. AWS Storage Gateway - File Gateway", "C. Amazon EBS Provisioned IOPS SSD volumes", "D. Amazon Elastic File System" ], "correct": "A. Amazon FSx for Windows File Server", "explanation": "Explanation
Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration.
Amazon FSx supports the use of Microsoft's Distributed File System (DFS) Namespaces to scale out performance across multiple file systems in the same namespace up to tens of Gbps and millions of IOPS.
The key phrases in this scenario are "file system" and "Active Directory integration." You need to implement a solution that will meet these requirements. Among the options given, the possible answers are FSx for Windows File Server and File Gateway. But you also need to consider that the question states that you need to provide a high level of throughput and IOPS. Amazon FSx for Windows File Server can scale out storage to hundreds of petabytes of data with tens of GB/s of throughput performance and millions of IOPS.
Hence, the correct answer is: Amazon FSx for Windows File Server.
Amazon EBS Provisioned IOPS SSD volumes is incorrect because this is just a block storage volume and not a full-fledged file system. Amazon EBS is primarily used as persistent block storage for EC2 instances.
Amazon Elastic File System is incorrect because it is stated in the scenario that the startup uses an Amazon EC2 Windows instance. Remember that Amazon EFS can only handle Linux workloads.
AWS Storage Gateway - File Gateway is incorrect. Although it can be used as a shared file system for Windows and can also be integrated with Microsoft Active Directory, Amazon FSx still has a higher level of throughput and IOPS compared with AWS Storage Gateway. Amazon FSx is capable of providing hundreds of thousands (or even millions) of IOPS.
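A minimal provisioning sketch with boto3 is shown below; the subnet, the AWS Managed Microsoft AD directory ID, and the sizing values are placeholders rather than recommendations.

```python
import boto3

fsx = boto3.client("fsx")

# FSx for Windows File Server joined to a hypothetical managed AD directory.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                 # GiB, placeholder sizing
    StorageType="SSD",
    SubnetIds=["subnet-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",
        "ThroughputCapacity": 32,        # MB/s, placeholder sizing
        "DeploymentType": "SINGLE_AZ_2",
    },
)
```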
References:
https://aws.amazon.com/fsx/windows/faqs/
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html
Check out this Amazon FSx Cheat Sheet: https://tutorialsdojo.com/amazon-fsx/", "references": "" }, { "question": ": A company plans to implement a hybrid architecture. They need to create a dedicated connection from their Amazon Virtual Private Cloud (VPC) to their on-premises network. The connection must provide high bandwidth throughput and a more consistent network experience than Internet-based solutions.
Which of the following can be used to create a private connection between the VPC and the company's on-premises network?", "options": [ "A. Transit VPC", "B. AWS Site-to-Site VPN", "C. AWS Direct Connect", "D. Transit Gateway with equal-cost multipath routing (ECMP)" ], "correct": "C. AWS Direct Connect", "explanation": "Explanation
AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Direct Connect router.
With this connection, you can create virtual interfaces directly to public AWS services (for example, to Amazon S3) or to Amazon VPC, bypassing internet service providers in your network path. An AWS Direct Connect location provides access to AWS in the region with which it is associated. You can use a single connection in a public Region or AWS GovCloud (US) to access public AWS services in all other public Regions.
Hence, the correct answer is: AWS Direct Connect.
The option that says: Transit VPC is incorrect because this in itself is not enough to integrate your on-premises network with your VPC. You have to use either a VPN or a Direct Connect connection. A transit VPC is primarily used to connect multiple VPCs and remote networks in order to create a global network transit center, not to establish a dedicated connection to your on-premises network.
The option that says: Transit Gateway with equal-cost multipath routing (ECMP) is incorrect because a transit gateway is commonly used to connect multiple VPCs and on-premises networks through a central hub. Just like a transit VPC, a transit gateway is not capable of establishing a direct and dedicated connection to your on-premises network.
The option that says: AWS Site-to-Site VPN is incorrect because this type of connection traverses the public Internet. Moreover, it doesn't provide the high bandwidth throughput and more consistent network experience that the scenario requires.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/connect-vpc/
https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html
Check out this AWS Direct Connect Cheat Sheet: https://tutorialsdojo.com/aws-direct-connect/
S3 Transfer Acceleration vs Direct Connect vs VPN vs Snowball vs Snowmobile: https://tutorialsdojo.com/s3-transfer-acceleration-vs-direct-connect-vs-vpn-vs-snowball-vs-snowmobile/
Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/", "references": "" }, { "question": ":
A startup launched a fleet of On-Demand EC2 instances to host a massively multiplayer online role-playing game (MMORPG). The EC2 instances are configured with Auto Scaling and AWS Systems Manager.
What can be used to configure the EC2 instances without having to establish an RDP or SSH connection to each instance?", "options": [ "A. EC2Config", "B. AWS Config", "C. Run Command", "D. AWS CodePipeline" ], "correct": "C. Run Command", "explanation": "Explanation
You can use Run Command from the console to configure instances without having to log in to each instance.
AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. A managed instance is any Amazon EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager. Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale. You can use Run Command from the AWS console, the AWS Command Line Interface, AWS Tools for Windows PowerShell, or the AWS SDKs. Run Command is offered at no additional cost.
Hence, the correct answer is: Run Command.", "references": "https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html
AWS Systems Manager Overview: https://www.youtube.com/watch?v=KVFKyMAHxqY
Check out this AWS Systems Manager Cheat Sheet: https://tutorialsdojo.com/aws-systems-manager/" }, { "question": ": A company has UAT and production EC2 instances running on AWS. They want to ensure that employees who are responsible for the UAT instances don't have access to work on the production instances, to minimize security risks.
Which of the following would be the best way to achieve this?", "options": [ "A. Define the tags on the UAT and production servers and add a condition to the IAM policy", "B. Launch the UAT and production instances in different Availability Zones and use Multi", "D. Provide permissions to the users via the AWS Resource Access Manager (RAM) service to" ], "correct": "A. Define the tags on the UAT and production servers and add a condition to the IAM policy", "explanation": "Explanation
For this scenario, the best way to achieve the required solution is to use a combination of Tags and IAM policies. You can define the tags on the UAT and production EC2 instances and add a condition to the IAM policy which allows access to specific tags.
Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type -- you can quickly identify a specific resource based on the tags you've assigned to it.
By default, IAM users don't have permission to create or modify Amazon EC2 resources, or perform tasks using the Amazon EC2 API. (This means that they also can't do so using the Amazon EC2 console or CLI.) To allow IAM users to create or modify resources and perform tasks, you must create IAM policies that grant IAM users permission to use the specific resources and API actions they'll need, and then attach those policies to the IAM users or groups that require those permissions.
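A minimal sketch of such a tag-scoped policy, attached to a hypothetical group of UAT developers with boto3, might look like the following; the group name, tag key, and tag value are illustrative.

```python
import json

import boto3

iam = boto3.client("iam")

# Allow instance actions only when the instance carries the Environment=UAT tag.
uat_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/Environment": "UAT"}},
        }
    ],
}

iam.put_group_policy(
    GroupName="uat-developers",
    PolicyName="uat-instances-only",
    PolicyDocument=json.dumps(uat_only_policy),
)
```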
Hence, the correct answer is: Define the tags on the UAT and production servers and add a condition to the IAM policy which allows access to specific tags.
The option that says: Launch the UAT and production EC2 instances in separate VPCs connected by VPC peering is incorrect because these are just network changes to your cloud architecture and don't have any effect on the security permissions of your users to access your EC2 instances.
The option that says: Provide permissions to the users via the AWS Resource Access Manager (RAM) service to only access EC2 instances that are used for production or development is incorrect because AWS Resource Access Manager (RAM) is primarily used to securely share your resources across AWS accounts or within your Organization, not within a single AWS account. You also have to set up a custom IAM Policy in order for this to work.
The option that says: Launch the UAT and production instances in different Availability Zones and use Multi Factor Authentication is incorrect because placing the EC2 instances in different AZs will only improve the availability of the systems but won't have any significance in terms of security. You have to set up an IAM Policy that allows access to EC2 instances based on their tags. In addition, Multi-Factor Authentication is not a suitable security feature to be implemented for this scenario.
References:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-policies-for-amazon-ec2.html
Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/", "references": "" }, { "question": ": An investment bank has a distributed batch processing application which is hosted in an Auto Scaling group of Spot EC2 instances with an SQS queue. You configured your components to use client-side buffering so that the calls made from the client will be buffered first and then sent as a batch request to SQS.
What is the period of time during which the SQS queue prevents other consuming components from receiving and processing a message?", "options": [ "A. Processing Timeout", "B. Receiving Timeout", "C. Component Timeout", "D. Visibility Timeout" ], "correct": "D. Visibility Timeout", "explanation": "Explanation
The visibility timeout is a period of time during which Amazon SQS prevents other consuming components from receiving and processing a message.
When a consumer receives and processes a message from a queue, the message remains in the queue. Amazon SQS doesn't automatically delete the message. Because Amazon SQS is a distributed system, there's no guarantee that the consumer actually receives the message (for example, due to a connectivity issue, or due to an issue in the consumer application). Thus, the consumer must delete the message from the queue after receiving and processing it. Immediately after the message is received, it remains in the queue.
To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The maximum is 12 hours.
References:
https://aws.amazon.com/sqs/faqs/
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
Check out this Amazon SQS Cheat Sheet: https://tutorialsdojo.com/amazon-sqs/", "references": "" }, { "question": ": An organization created a new CloudFormation template that creates 4 EC2 instances that are connected to one Elastic Load Balancer (ELB). Which section of the template should be configured to get the Domain Name System (DNS) hostname of the ELB upon the creation of the AWS stack?", "options": [ "A. Resources", "B. Parameters", "C. Mappings", "D. Outputs" ], "correct": "D. Outputs", "explanation": "Explanation
Outputs is an optional section of the CloudFormation template that describes the values that are returned whenever you view your stack's properties.", "references": "https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html
https://aws.amazon.com/cloudformation/
Check out this AWS CloudFormation Cheat Sheet: https://tutorialsdojo.com/aws-cloudformation/
AWS CloudFormation - Templates, Stacks, Change Sets: https://www.youtube.com/watch?v=9Xpuprxg7aY" }, { "question": ": A company is planning to launch a High Performance Computing (HPC) cluster in AWS that does Computational Fluid Dynamics (CFD) simulations. The solution should scale out their simulation jobs to experiment with more tunable parameters for faster and more accurate results. The cluster is composed of Windows servers hosted on t3a.medium EC2 instances. As the Solutions Architect, you should ensure that the architecture provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies.
Which is the MOST suitable and cost-effective solution that the Architect should implement to achieve the above requirements?", "options": [ "A. Use AWS ParallelCluster to deploy and manage the HPC cluster to provide higher", "B. Enable Enhanced Networking with Intel 82599 Virtual Function (VF) interface on the", "C. Enable Enhanced Networking with Elastic Fabric Adapter (EFA) on the Windows EC2", "D. Enable Enhanced Networking with Elastic Network Adapter (ENA) on the Windows EC2" ], "correct": "D. Enable Enhanced Networking with Elastic Network Adapter (ENA) on the Windows EC2", "explanation": "Explanation
Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. There is no additional charge for using enhanced networking.
Amazon EC2 provides enhanced networking capabilities through the Elastic Network Adapter (ENA). It supports network speeds of up to 100 Gbps for supported instance types. Elastic Network Adapters (ENAs) provide traditional IP networking features that are required to support VPC networking.
An Elastic Fabric Adapter (EFA) is simply an Elastic Network Adapter (ENA) with added capabilities. It provides all of the functionality of an ENA, with additional OS-bypass functionality. OS-bypass is an access model that allows HPC and machine learning applications to communicate directly with the network interface hardware to provide low-latency, reliable transport functionality.
The OS-bypass capabilities of EFAs are not supported on Windows instances. If you attach an EFA to a Windows instance, the instance functions as an Elastic Network Adapter, without the added EFA capabilities.
Hence, the correct answer is to enable Enhanced Networking with Elastic Network Adapter (ENA) on the Windows EC2 Instances.
Enabling Enhanced Networking with Elastic Fabric Adapter (EFA) on the Windows EC2 Instances is incorrect because the OS-bypass capabilities of the Elastic Fabric Adapter (EFA) are not supported on Windows instances. Although you can attach an EFA to your Windows instances, it will just act as a regular Elastic Network Adapter, without the added EFA capabilities. Moreover, it doesn't support the t3a.medium instance type that is being used in the HPC cluster.
Enabling Enhanced Networking with Intel 82599 Virtual Function (VF) interface on the Windows EC2 Instances is incorrect because although you can attach an Intel 82599 Virtual Function (VF) interface on your Windows EC2 instances to improve their networking capabilities, it doesn't support the t3a.medium instance type that is being used in the HPC cluster.
Using AWS ParallelCluster to deploy and manage the HPC cluster to provide higher bandwidth, higher packet per second (PPS) performance, and lower inter-instance latencies is incorrect because AWS ParallelCluster is just an AWS-supported open-source cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS. It does not itself provide higher bandwidth, higher packet per second (PPS) performance, and lower inter-instance latencies, unlike ENA or EFA.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html", "references": "" }, { "question": ": A Solutions Architect needs to ensure that all of the AWS resources in an Amazon VPC don't go beyond their respective service limits. The Architect should prepare a system that provides real-time guidance in provisioning resources that adheres to the AWS best practices.
Which of the following is the MOST appropriate service to use to satisfy this task?", "options": [ "A. Amazon Inspector", "B. AWS Trusted Advisor", "C. AWS Cost Explorer", "D. AWS Budgets" ], "correct": "B. AWS Trusted Advisor", "explanation": "Explanation
AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices.
It inspects your AWS environment and makes recommendations for saving money, improving system performance and reliability, or closing security gaps. Whether establishing new workflows, developing applications, or as part of ongoing improvement, take advantage of the recommendations provided by Trusted Advisor on a regular basis to help keep your solutions provisioned optimally.
Trusted Advisor includes an ever-expanding list of checks in the following five categories:
- Cost Optimization: recommendations that can potentially save you money by highlighting unused resources and opportunities to reduce your bill.
- Security: identification of security settings that could make your AWS solution less secure.
- Fault Tolerance: recommendations that help increase the resiliency of your AWS solution by highlighting redundancy shortfalls, current service limits, and over-utilized resources.
- Performance: recommendations that can help to improve the speed and responsiveness of your applications.
- Service Limits: recommendations that tell you when service usage is more than 80% of the service limit.
Hence, the correct answer in this scenario is AWS Trusted Advisor.
AWS Cost Explorer is incorrect because this is just a tool that enables you to view and analyze your costs and usage. You can explore your usage and costs using the main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports. It has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time.
AWS Budgets is incorrect because it simply gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define.
Amazon Inspector is incorrect because it is just an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.
References:
https://aws.amazon.com/premiumsupport/technology/trusted-advisor/
https://aws.amazon.com/premiumsupport/technology/trusted-advisor/faqs/
Check out this AWS Trusted Advisor Cheat Sheet: https://tutorialsdojo.com/aws-trusted-advisor/", "references": "" }, { "question": ": A local bank has an in-house application that handles sensitive financial data in a private subnet. After the data is processed by the EC2 worker instances, it will be delivered to S3 for ingestion by other services.
How should you design this solution so that the data does not pass through the public Internet?", "options": [ "A. Provision a NAT gateway in the private subnet with a corresponding route entry that directs the data to S3.", "B. Create an Internet gateway in the public subnet with a corresponding route entry that", "C. Configure a VPC Endpoint along with a corresponding route entry that directs the data to", "D. Configure a Transit gateway along with a corresponding route entry that directs the data to" ], "correct": "C.
Configure a VPC Endpoint along with a correspo nding route entry that directs the data to", "explanation": "Explanation The important concept that you have to understand i n this scenario is that your VPC and your S3 bucket are located within the larger AWS network. However, the traffic coming from your VPC to your S3 bucket is traversing the public Internet by default. To bette r protect your data in transit, you can set up a VP C endpoint so the incoming traffic from your VPC will not pass through the public Internet, but instead through the private AWS network. A VPC endpoint enables you to privately connect you r VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring a n Internet gateway, NAT device, VPN connection, or . 84 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam AWS Direct Connect connection. Instances in your VP C do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other services does not le ave the Amazon network. Endpoints are virtual devices. They are horizontall y scaled, redundant, and highly available VPC components that allow communication between instanc es in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic. Certkingdom Hence, the correct answer is: Configure a VPC Endpo int along with a corresponding route entry that directs the data to S3. The option that says: Create an Internet gateway in the public subnet with a corresponding route entry that directs the data to S3 is incorrect beca use the Internet gateway is used for instances in t he public subnet to have accessibility to the Internet . The option that says: Configure a Transit gateway a long with a corresponding route entry that directs the data to S3 is incorrect because the Transit Gat eway is used for interconnecting VPCs and on-premis es networks through a central hub. Since Amazon S3 is outside of VPC, you still won't be able to connect to it privately. The option that says: Provision a NAT gateway in th e private subnet with a corresponding route entry that directs the data to S3 is incorrect because NA T Gateway allows instances in the private subnet to gain access to the Internet, but not vice versa. . 85 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam References: https://docs.aws.amazon.com/vpc/latest/userguide/vp c-endpoints.html https://docs.aws.amazon.com/vpc/latest/userguide/vp ce-gateway.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": An online shopping platform is hosted on an Auto Sc aling group of On-Demand EC2 instances with a default Auto Scaling termination policy and no inst ance protection configured. The system is deployed across three Availability Zones in the US West regi on (us-west-1) with an Application Load Balancer in front to provide high availability and fault tolera nce for the shopping platform. The us-west-1a, us-w est-1b, and us-west-1c Availability Zones have 10, 8 and 7 running instances respectively. Due to the low numb er of incoming traffic, the scale-in operation has bee n triggered. Which of the following will the Auto Scaling group do to determine which instance to terminate first i n this scenario? (Select THREE.)", "options": [ "A. Select the instance that is farthest to the ne xt billing hour.", "B. 
Select the instance that is closest to the nex t billing hour.", "C. Select the instances with the most recent laun ch configuration.", "D. Choose the Availability Zone with the most num ber of instances, which is the us-west-1a" ], "correct": "", "explanation": "Explanation The default termination policy is designed to help ensure that your network architecture spans Availab ility Zones evenly. With the default termination policy, the behavior of the Auto Scaling group is as follow s: 1. If there are instances in multiple Availability Zones, choose the Availability Zone with the most instances and at least one instance that is not pro tected from scale in. If there is more than one Ava ilability Zone with this number of instances, choose the Avai lability Zone with the instances that use the oldes t launch configuration. . 86 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam 2. Determine which unprotected instances in the sel ected Availability Zone use the oldest launch configuration. If there is one such instance, termi nate it. 3. If there are multiple instances to terminate bas ed on the above criteria, determine which unprotect ed instances are closest to the next billing hour. (Th is helps you maximize the use of your EC2 instances and manage your Amazon EC2 usage costs.) If there is on e such instance, terminate it. 4. If there is more than one unprotected instance c losest to the next billing hour, choose one of thes e instances at random. The following flow diagram illustrates how the defa ult termination policy works: Certkingdom . 87 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Certkingdom . 88 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam", "references": "https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-instance-termination.html#default-termination - policy Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/" }, { "question": ":An application is hosted in an On-Demand EC2 instan ce and is using Amazon SDK to communicate to other AWS services such as S3, DynamoDB, and many o thers. As part of the upcoming IT audit, you need to ensure that all API calls to your AWS resources are logged and durably stored. Which is the most suitable service that you should use to meet this requirement?", "options": [ "A. Amazon API Gateway", "B. AWS CloudTrail", "C. Amazon CloudWatch", "D. AWS X-Ray" ], "correct": "B. AWS CloudTrail", "explanation": "Explanation AWS CloudTrail increases visibility into your user and resource activity by recording AWS Management Certkingdom Console actions and API calls. You can identify whi ch users and accounts called AWS, the source IP address from which the calls were made, and when th e calls occurred. Amazon CloudWatch is incorrect because this is prim arily used for systems monitoring based on the server metrics. It does not have the capability to track API calls to your AWS resources. AWS X-Ray is incorrect because this is usually used to debug and analyze your microservices applicatio ns with request tracing so you can find the root cause of issues and performance. Unlike CloudTrail, it d oes not record the API calls that were made to your AWS resources. Amazon API Gateway is incorrect because this is not used for logging each and every API call to your AWS resources. 
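As a quick, hedged illustration of the CloudTrail behavior described above, the boto3 sketch below queries the most recent management events recorded in the account's 90-day event history; the S3 event source used as a filter is only an example value, not something taken from the scenario.

# Illustrative sketch: querying recent API activity recorded by AWS CloudTrail.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "s3.amazonaws.com"}
    ],
    MaxResults=5,
)

# Print who called which API, and when.
for event in response["Events"]:
    print(event["EventTime"], event.get("Username", "n/a"), event["EventName"])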
It is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.", "references": "https://aws.amazon.com/cloudtrail/ Check out this AWS CloudTrail Cheat Sheet: https://tutorialsdojo.com/aws-cloudtrail/" }, { "question": ": A company has recently adopted a hybrid cloud architecture and is planning to migrate a database hosted on-premises to AWS. The database currently has over 50 TB of consumer data, handles highly transactional (OLTP) workloads, and is expected to grow. The Solutions Architect should ensure that the database is ACID-compliant and can handle complex queries of the application. Which type of database service should the Architect use?", "options": [ "A. Amazon RDS", "B. Amazon Redshift", "C. Amazon DynamoDB", "D. Amazon Aurora" ], "correct": "D. Amazon Aurora", "explanation": "Explanation Amazon Aurora is a fully managed relational database engine that is compatible with MySQL and PostgreSQL. MySQL and PostgreSQL combine the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases, and the code, tools, and applications you use today with your existing MySQL and PostgreSQL databases can be used with Aurora. With some workloads, Aurora can deliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. Aurora includes a high-performance storage subsystem. Its MySQL- and PostgreSQL-compatible database engines are customized to take advantage of that fast distributed storage. The underlying storage grows automatically as needed, up to 64 tebibytes (TiB). Aurora also automates and standardizes database clustering and replication, which are typically among the most challenging aspects of database configuration and administration. For comparison, Amazon RDS for MariaDB constrains a table to a maximum size of 64 TB when using InnoDB file-per-table tablespaces (the default for RDS MariaDB DB instances) and constrains the system tablespace to a maximum size of 16 TB. Hence, the correct answer is Amazon Aurora. Amazon Redshift is incorrect because it is primarily used for OLAP applications and not for OLTP. Moreover, it doesn't scale automatically to handle the exponential growth of the database. Amazon DynamoDB is incorrect. Although you can use it as an ACID-compliant database, it is not capable of handling complex queries and highly transactional (OLTP) relational workloads. Amazon RDS is incorrect. Although this service can host an ACID-compliant relational database that can handle complex queries and transactional (OLTP) workloads, it is still not as scalable as Aurora, whose underlying storage can grow automatically as needed. References:
91 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam https://aws.amazon.com/rds/aurora/ https://docs.aws.amazon.com/amazondynamodb/latest/d eveloperguide/SQLtoNoSQL.html https://aws.amazon.com/nosql/ Amazon Aurora Overview: https://youtu.be/iwS1h7rLNBQ Check out this Amazon Aurora Cheat Sheet: https://tutorialsdojo.com/amazon-aurora/", "references": "" }, { "question": ": A healthcare company stores sensitive patient healt h records in their on-premises storage systems. The se records must be kept indefinitely and protected fro m any type of modifications once they are stored. Compliance regulations mandate that the records mus t have granular access control and each data access must be audited at all levels. Currently, there are millions of obsolete records that are not accessed by their web application, and their on-premises storage is q uickly running out of space. The Solutions Architec t must design a solution to immediately move existing records to AWS and support the ever-growing number of new health records. Which of the following is the most suitable solutio n that the Solutions Architect should implement to meet the above requirements?", "options": [ "A. Set up AWS Storage Gateway to move the existin g health records from the on-premises", "B. Set up AWS DataSync to move the existing healt h records from the on-premises network to", "C. Set up AWS Storage Gateway to move the existin g health records from the on-premises", "D. Set up AWS DataSync to move the existing healt h records from the on-premises network to" ], "correct": "B. Set up AWS DataSync to move the existing healt h records from the on-premises network to", "explanation": "Explanation AWS Storage Gateway is a set of hybrid cloud servic es that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gate way to integrate AWS Cloud storage with existing on-site workloads so they can simplify storage mana gement and reduce costs for key hybrid cloud storag e use cases. These include moving backups to the clou d, using on-premises file shares backed by cloud storage, and providing low latency access to data i n AWS for on-premises applications. AWS DataSync is an online data transfer service tha t simplifies, automates, and accelerates moving dat a between on-premises storage systems and AWS Storage services, as well as between AWS Storage services. You can use DataSync to migrate active datasets to AWS, archive data to free up on-premises storage capacity, replicate data to AWS for business contin uity, or transfer data to the cloud for analysis an d processing. Both AWS Storage Gateway and AWS DataSync can send data from your on-premises data center to AWS and vice versa. However, AWS Storage Gateway is mor e suitable to be used in integrating your storage services by replicating your data while AWS DataSyn c is better for workloads that require you to move or migrate your data. Certkingdom . 93 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam You can also use a combination of DataSync and File Gateway to minimize your on-premises infrastructur e while seamlessly connecting on-premises application s to your cloud storage. AWS DataSync enables you to automate and accelerate online data transfers to AWS storage services. File Gateway is a fully mana ged solution that will automate and accelerate the repl ication of data between the on-premises storage sys tems and AWS storage services. 
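To make the DataSync part of this answer more concrete, here is a minimal, hedged boto3 sketch that creates a transfer task between two pre-existing DataSync locations and starts it; the location ARNs (an on-premises source and an S3 destination) and the task name are placeholders that would normally be created beforehand with calls such as create_location_nfs and create_location_s3.

# Illustrative sketch: moving data with AWS DataSync via boto3.
# The location ARNs below are placeholders; they are created first in practice.
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111111111111:location/loc-source",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111111111111:location/loc-dest",
    Name="migrate-health-records",
)

# Kick off a one-time execution of the migration task.
execution = datasync.start_task_execution(TaskArn=task["TaskArn"])
print("Started task execution:", execution["TaskExecutionArn"])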
AWS CloudTrail is an AWS service that helps you ena ble governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or an AWS service are recorded as ev ents in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs. There are two types of events that you configure yo ur CloudTrail for: - Management Events - Data Events Management Events provide visibility into managemen t operations that are performed on resources in your AWS account. These are also known as control p lane operations. Management events can also include non-API events that occur in your account. Data Events, on the other hand, provide visibility into the resource operations performed on or within a resource. These are also known as data plane operat ions. It allows granular control of data event logg ing with advanced event selectors. You can currently lo g data events on different resource types such as Amazon S3 object-level API activity (e.g. GetObject , DeleteObject, and PutObject API operations), AWS Lambda function execution activity (the Invoke API) , DynamoDB Item actions, and many more. Certkingdom . 94 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam With S3 Object Lock, you can store objects using a write-once-read-many (WORM) model. Object Lock can help prevent objects from being deleted or over written for a fixed amount of time or indefinitely. You can use Object Lock to help meet regulatory require ments that require WORM storage or to simply add another layer of protection against object changes and deletion. Certkingdom . 95 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam You can record the actions that are taken by users, roles, or AWS services on Amazon S3 resources and maintain log records for auditing and compliance pu rposes. To do this, you can use server access loggi ng, AWS CloudTrail logging, or a combination of both. A WS recommends that you use AWS CloudTrail for logging bucket and object-level actions for your Am azon S3 resources. Hence, the correct answer is: Set up AWS DataSync t o move the existing health records from the on- premises network to the AWS Cloud. Launch a new Ama zon S3 bucket to store existing and new records. Enable AWS CloudTrail with Data Events and Amazon S3 Object Lock in the bucket. The option that says: Set up AWS Storage Gateway to move the existing health records from the on- premises network to the AWS Cloud. Launch a new Ama zon S3 bucket to store existing and new records. Enable AWS CloudTrail with Management Even ts and Amazon S3 Object Lock in the bucket is incorrect. The requirement explicitly say s that the Solutions Architect must immediately mov e the existing records to AWS and not integrate or re plicate the data. Using AWS DataSync is a more suitable service to use here since the primary obje ctive is to migrate or move data. You also have to use Data Events here and not Management Events in Cloud Trail, to properly track all the data access and changes to your objects. The option that says: Set up AWS Storage Gateway to move the existing health records from the on- premises network to the AWS Cloud. Launch an Amazon EBS-backed EC2 instance to store both the existing and new records. Enable Amazon S3 server a ccess logging and S3 Object Lock in the bucket is incorrect. 
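Tying together the Object Lock and data-event pieces discussed above, the hedged sketch below creates a bucket with Object Lock enabled and registers an S3 data-event selector on an existing CloudTrail trail; the bucket name and trail name are placeholders, not values from the scenario.

# Illustrative sketch: WORM protection plus object-level auditing.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Object Lock can only be enabled at bucket creation time.
s3.create_bucket(
    Bucket="example-health-records-bucket",
    ObjectLockEnabledForBucket=True,
)

# Log object-level (data plane) S3 API activity for that bucket on an existing trail.
cloudtrail.put_event_selectors(
    TrailName="example-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    "Values": ["arn:aws:s3:::example-health-records-bucket/"],
                }
            ],
        }
    ],
)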
Just as mentioned in the previous opt ion, using AWS Storage Gateway is not a recommended service to use in this situation since the objectiv e is to move the obsolete data. Moreover, using Ama zon EBS to store health records is not a scalable solut ion compared with Amazon S3. Enabling server access logging can help audit the stored objects. However, it is better to CloudTrail as it provides more gra nular access control and tracking. Certkingdom The option that says: Set up AWS DataSync to move t he existing health records from the on-premises network to the AWS Cloud. Launch a new Amazon S3 bu cket to store existing and new records. Enable AWS CloudTrail with Management Events and Am azon S3 Object Lock in the bucket is incorrect. Although it is right to use AWS DataSync to move the health records, you still have to conf igure Data Events in AWS CloudTrail and not Management Ev ents. This type of event only provides visibility into management operations that are performed on re sources in your AWS account and not the data events that are happening in the individual objects in Ama zon S3. References: https://aws.amazon.com/datasync/faqs/ https://aws.amazon.com/about-aws/whats-new/2020/12/ aws-cloudtrail-provides-more-granular-control-of- data-event-logging/ . 96 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam https://docs.aws.amazon.com/AmazonS3/latest/usergui de/object-lock.html Check out this AWS DataSync Cheat Sheet: https://tutorialsdojo.com/aws-datasync/ AWS Storage Gateway vs DataSync: https://www.youtube.com/watch?v=tmfe1rO-AUs", "references": "" }, { "question": ": A top IT Consultancy has a VPC with two On-Demand E C2 instances with Elastic IP addresses. You were notified that the EC2 instances are currently under SSH brute force attacks over the Internet. The IT Security team has identified the IP addresses where these attacks originated. You have to immediately implement a temporary fix to stop these attacks whi le the team is setting up AWS WAF, GuardDuty, and AWS Shield Advanced to permanently fix the security vulnerability. Which of the following provides the quickest way to stop the attacks to the instances?", "options": [ "A. Remove the Internet Gateway from the VPC", "B. Assign a static Anycast IP address to each EC2 instance", "C. Place the EC2 instances into private subnets", "D. Block the IP addresses in the Network Access C ontrol List" ], "correct": "D. Block the IP addresses in the Network Access C ontrol List", "explanation": "Explanation Certkingdom A network access control list (ACL) is an optional layer of security for your VPC that acts as a firew all for controlling traffic in and out of one or more s ubnets. You might set up network ACLs with rules si milar to your security groups in order to add an addition al layer of security to your VPC. . 97 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Certkingdom The following are the basic things that you need to know about network ACLs: - Your VPC automatically comes with a modifiable de fault network ACL. By default, it allows all inboun d and outbound IPv4 traffic and, if applicable, IPv6 traffic. - You can create a custom network ACL and associate it with a subnet. By default, each custom network ACL denies all inbound and outbound traffic until y ou add rules. - Each subnet in your VPC must be associated with a network ACL. 
If you don't explicitly associate a subnet with a network ACL, the subnet is automatica lly associated with the default network ACL. - You can associate a network ACL with multiple sub nets; however, a subnet can be associated with only one network ACL at a time. When you associate a net work ACL with a subnet, the previous association is removed. . 98 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam - A network ACL contains a numbered list of rules t hat we evaluate in order, starting with the lowest numbered rule, to determine whether traffic is allo wed in or out of any subnet associated with the net work ACL. The highest number that you can use for a rule is 32766. We recommend that you start by creating rules in increments (for example, increments of 10 or 100) so that you can insert new rules where you need to later on. - A network ACL has separate inbound and outbound r ules, and each rule can either allow or deny traffi c. - Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbou nd traffic (and vice versa). The scenario clearly states that it requires the qu ickest way to fix the security vulnerability. In th is situation, you can manually block the offending IP addresses u sing Network ACLs since the IT Security team alread y identified the list of offending IP addresses. Alte rnatively, you can set up a bastion host, however, this option entails additional time to properly set up a s you have to configure the security configurations of your bastion host. Hence, blocking the IP addresses in the Network Acc ess Control List is the best answer since it can quickly resolve the issue by blocking the IP addres ses using Network ACL. Placing the EC2 instances into private subnets is i ncorrect because if you deploy the EC2 instance in the private subnet without public or EIP address, it wo uld not be accessible over the Internet, even to yo u. Removing the Internet Gateway from the VPC is incor rect because doing this will also make your EC2 instance inaccessible to you as it will cut down th e connection to the Internet. Assigning a static Anycast IP address to each EC2 i nstance is incorrect because a static Anycast IP address is primarily used by AWS Global Accelerator to enable organizations to seamlessly route traffi c to Certkingdom multiple regions and improve availability and perfo rmance for their end-users. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/VPC_ACLs.html https://docs.aws.amazon.com/vpc/latest/userguide/VP C_Security.html Security Group vs NACL: https://tutorialsdojo.com/security-group-vs-nacl/", "references": "" }, { "question": ": . 99 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam A web application hosted in an Auto Scaling group o f EC2 instances in AWS. The application receives a burst of traffic every morning, and a lot of users are complaining about request timeouts. The EC2 ins tance takes 1 minute to boot up before it can respond to user requests. The cloud architecture must be redes igned to better respond to the changing traffic of the ap plication. How should the Solutions Architect redesign the arc hitecture?", "options": [ "A. Create a new launch template and upgrade the s ize of the instance.", "B. Create a step scaling policy and configure an instance warm-up time condition.", "C. Create a CloudFront distribution and set the E C2 instance as the origin.", "D. Create a Network Load Balancer with slow-start mode." 
], "correct": "B. Create a step scaling policy and configure an instance warm-up time condition.", "explanation": "Explanation Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove EC2 instances according to conditions you define. You can use the fleet management features of EC2 Auto Scaling to maintain the health and availability of your fleet. You can also use the dynamic and predictive scaling features of EC2 Auto Scaling to add or remove EC2 instances. Dynamic scaling responds to changing demand, while predictive scaling automatically schedules the right number of EC2 instances based on predicted demand. Dynamic scaling and predictive scaling can be used together to scale faster. Step scaling applies \"step adjustments\", which means you can set multiple actions to vary the scaling depending on the size of the alarm breach. When you create a step scaling policy, you can also specify the number of seconds that it takes for a newly launched instance to warm up. Hence, the correct answer is: Create a step scaling policy and configure an instance warm-up time condition. The option that says: Create a Network Load Balancer with slow-start mode is incorrect because Network Load Balancer does not support slow-start mode. If you need slow-start mode, you should use an Application Load Balancer. The option that says: Create a new launch template and upgrade the size of the instance is incorrect because a larger instance does not always improve the boot time. Instead of upgrading the instance, you should create a step scaling policy and add a warm-up time. The option that says: Create a CloudFront distribution and set the EC2 instance as the origin is incorrect because this approach only addresses traffic latency. The requirement in the scenario is to resolve the timeout issue, not traffic latency. References: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html https://aws.amazon.com/ec2/autoscaling/faqs/ Check out these AWS Cheat Sheets: https://tutorialsdojo.com/aws-auto-scaling/ https://tutorialsdojo.com/step-scaling-vs-simple-scaling-policies-in-amazon-ec2/", "references": "" }, { "question": ": A Solutions Architect joined a large tech company with an existing Amazon VPC. When reviewing the Auto Scaling events, the Architect noticed that their web application is scaling up and down multiple times within the hour. What design change could the Architect make to optimize cost while preserving elasticity?", "options": [ "A. Change the cooldown period of the Auto Scaling group and set the CloudWatch metric to a higher threshold", "B. Add provisioned IOPS to the instances", "C. Increase the base number of Auto Scaling instances for the Auto Scaling group", "D. Increase the instance type in the launch configuration" ], "correct": "A. Change the cooldown period of the Auto Scaling group and set the CloudWatch metric to a higher threshold", "explanation": "Explanation Since the application is scaling up and down multiple times within the hour, the issue lies in the cooldown period of the Auto Scaling group.
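As a hedged illustration of that fix (the group name, alarm name, and numbers below are placeholders only), the default cooldown can be lengthened and the scale-out alarm threshold raised roughly like this:

# Illustrative sketch: lengthening the default cooldown and raising the CloudWatch
# alarm threshold so the group stops flapping. All names and numbers are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Give each scaling activity more time to take effect before the next one can start.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    DefaultCooldown=600,
)

# Raise the CPU threshold on the scale-out alarm so brief spikes no longer trigger it.
cloudwatch.put_metric_alarm(
    AlarmName="web-app-scale-out",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-app-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)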
The cooldown period is a configurable setting for y our Auto Scaling group that helps to ensure that it doesn't launch or terminate additional instances be fore the previous scaling activity takes effect. Af ter the Certkingdom Auto Scaling group dynamically scales using a simpl e scaling policy, it waits for the cooldown period to complete before resuming scaling activities. When you manually scale your Auto Scaling group, th e default is not to wait for the cooldown period, b ut you can override the default and honor the cooldown period. If an instance becomes unhealthy, the Auto Scaling group does not wait for the cooldown period to complete before replacing the unhealthy instanc e.", "references": "http://docs.aws.amazon.com/autoscaling/latest/userg uide/as-scale-based-on-demand.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/ . 103 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam" }, { "question": ": A data analytics startup is collecting clickstream data and stores them in an S3 bucket. You need to l aunch an AWS Lambda function to trigger the ETL jobs to r un as soon as new data becomes available in Amazon S3. Which of the following services can you use as an e xtract, transform, and load (ETL) service in this scenario?", "options": [ "A. S3 Select", "B. AWS Glue", "C. Redshift Spectrum D. AWS Step Functions" ], "correct": "B. AWS Glue", "explanation": "Explanation AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customer s to prepare and load their data for analytics. You c an create and run an ETL job with a few clicks in t he AWS Management Console. You simply point AWS Glue t o your data stored on AWS, and AWS Glue discovers your data and stores the associated metad ata (e.g. table definition and schema) in the AWS G lue Data Catalog. Once cataloged, your data is immediat ely searchable, queryable, and available for ETL. AWS Glue generates the code to execute your data tr ansformations and data loading processes. Certkingdom . 104 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam", "references": "https://aws.amazon.com/glue/ Check out this AWS Glue Cheat Sheet: https://tutorialsdojo.com/aws-glue/" }, { "question": ": A company is running a batch job on an EC2 instance inside a private subnet. The instance gathers inpu t data from an S3 bucket in the same region through a NAT Gateway. The company is looking for a solution that will reduce costs without imposing risks on re dundancy or availability. Which solution will accomplish this?", "options": [ "A. Deploy a Transit Gateway to peer connection be tween the instance and the S3 bucket.", "B. Re-assign the NAT Gateway to a lower EC2 insta nce type.", "C. Replace the NAT Gateway with a NAT instance ho sted on a burstable instance type.", "D. Remove the NAT Gateway and use a Gateway VPC e ndpoint to access the S3 bucket from" ], "correct": "D. Remove the NAT Gateway and use a Gateway VPC e ndpoint to access the S3 bucket from", "explanation": "Explanation A gateway endpoint is a gateway that you specify in your route table to access Amazon S3 from your VPC Certkingdom over the AWS network. Interface endpoints extend th e functionality of gateway endpoints by using priva te IP addresses to route requests to Amazon S3 from wi thin your VPC, on-premises, or from a different AWS Region. Interface endpoints are compatible with gat eway endpoints. 
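For reference, a gateway endpoint for S3 can be provisioned with a single boto3 call, as in the illustrative sketch below; the VPC ID, route table ID, and Region are placeholders.

# Illustrative sketch: creating a Gateway VPC endpoint for Amazon S3 so traffic to S3
# stays on the AWS network. VPC ID, route table ID, and Region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    # The endpoint adds a prefix-list route for S3 to these route tables, so instances
    # in the associated private subnets reach S3 without traversing the Internet.
    RouteTableIds=["rtb-0123456789abcdef0"],
)
print("Created endpoint:", response["VpcEndpoint"]["VpcEndpointId"])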
If you have an existing gateway endpoint in the VPC, you can use both types of endp oints in the same VPC. . 105 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam There is no additional charge for using gateway end points. However, standard charges for data transfer and resource usage still apply. Hence, the correct answer is: Remove the NAT Gatewa y and use a Gateway VPC endpoint to access the S3 bucket from the instance. The option that says: Replace the NAT Gateway with a NAT instance hosted on burstable instance type is incorrect. This solution may possibly reduc e costs, but the availability and redundancy will b e compromised. Certkingdom The option that says: Deploy a Transit Gateway to p eer connection between the instance and the S3 bucket is incorrect. Transit Gateway is a service t hat is specifically used for connecting multiple VP Cs through a central hub. The option that says: Re-assign the NAT Gateway to a lower EC2 instance type is incorrect. NAT Gateways are fully managed resources. You cannot ac cess nor modify the underlying instance that hosts it. References: https://docs.aws.amazon.com/AmazonS3/latest/usergui de/privatelink-interface-endpoints.html https://docs.aws.amazon.com/vpc/latest/privatelink/ vpce-gateway.html Amazon VPC Overview: . 106 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam https://youtu.be/oIDHKeNxvQQ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A top investment bank is in the process of building a new Forex trading platform. To ensure high availability and scalability, you designed the trad ing platform to use an Elastic Load Balancer in fro nt of an Auto Scaling group of On-Demand EC2 instances acros s multiple Availability Zones. For its database tie r, you chose to use a single Amazon Aurora instance to take advantage of its distributed, fault-tolerant, and self-healing storage system. In the event of system failure on the primary datab ase instance, what happens to Amazon Aurora during the failover?", "options": [ "A. Aurora will attempt to create a new DB Instanc e in the same Availability Zone as the", "B. Aurora will first attempt to create a new DB I nstance in a different Availability Zone of the", "C. Amazon Aurora flips the canonical name record (CNAME) for your DB Instance to point at", "D. Amazon Aurora flips the A record of your DB In stance to point at the healthy replica," ], "correct": "A. Aurora will attempt to create a new DB Instanc e in the same Availability Zone as the", "explanation": "Explanation Failover is automatically handled by Amazon Aurora so that your applications can resume database operations as quickly as possible without manual ad ministrative intervention. . 107 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam If you have an Amazon Aurora Replica in the same or a different Availability Zone, when failing over, Amazon Aurora flips the canonical name record (CNAM E) for your DB Instance to point at the healthy replica, which in turn is promoted to become the ne w primary. Start-to-finish, failover typically comp letes within 30 seconds. If you are running Aurora Serverless and the DB ins tance or AZ become unavailable, Aurora will automatically recreate the DB instance in a differe nt AZ. If you do not have an Amazon Aurora Replica (i.e. 
single instance) and are not running Aurora Serverless, Aurora will attempt to create a new DB Instance in the same Availability Zone as the original instance. This replacement of the original instance is done on a best-effort basis and may not succeed, for example, if there is an issue that is broadly affecting the Availability Zone. Hence, the correct answer is the option that says: Aurora will attempt to create a new DB Instance in the same Availability Zone as the original instance and is done on a best-effort basis. The options that say: Amazon Aurora flips the canonical name record (CNAME) for your DB Instance to point at the healthy replica, which in turn is promoted to become the new primary and Amazon Aurora flips the A record of your DB Instance to point at the healthy replica, which in turn is promoted to become the new primary are incorrect because this will only happen if you are using an Amazon Aurora Replica. In addition, Amazon Aurora flips the canonical name record (CNAME) and not the A record (IP address) of the instance. The option that says: Aurora will first attempt to create a new DB Instance in a different Availability Zone of the original instance. If unable to do so, Aurora will attempt to create a new DB Instance in the original Availability Zone in which the instance was first launched is incorrect because Aurora will first attempt to create a new DB Instance in the same Availability Zone as the original instance. If unable to do so, Aurora will attempt to create a new DB Instance in a different Availability Zone and not the other way around. References: https://aws.amazon.com/rds/aurora/faqs/ https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.AuroraHighAvailability.html Amazon Aurora Overview: https://youtu.be/iwS1h7rLNBQ Check out this Amazon Aurora Cheat Sheet: https://tutorialsdojo.com/amazon-aurora/", "references": "" }, { "question": ": The social media company that you are working for needs to capture the detailed information of all HTTP requests that went through their public-facing application load balancer every five minutes. They want to use this data for analyzing traffic patterns and for troubleshooting their web applications in AWS. Which of the following options meet the customer requirements?", "options": [ "A. Enable Amazon CloudWatch metrics on the application load balancer.", "B. Enable AWS CloudTrail for their application load balancer.", "C. Add an Amazon CloudWatch Logs agent on the application load balancer.", "D. Enable access logs on the application load balancer." ], "correct": "D. Enable access logs on the application load balancer.", "explanation": "Explanation Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files.
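A hedged sketch of turning this on through boto3 is shown below; the load balancer ARN, bucket name, and prefix are placeholders, and the target bucket must already have a policy that allows ELB log delivery.

# Illustrative sketch: enabling ALB access logs to an S3 bucket.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/web-alb/abc123",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "example-alb-access-logs"},
        {"Key": "access_logs.s3.prefix", "Value": "web-alb"},
    ],
)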
You can disable access logging at any time. Certkingdom Hence, the correct answer is: Enable access logs on the application load balancer. The option that says: Enable AWS CloudTrail for the ir application load balancer is incorrect because AWS CloudTrail is primarily used to monitor and rec ord the account activity across your AWS resources and not your web applications. You cannot use Cloud Trail to capture the detailed information of all HT TP requests that go through your public-facing Applica tion Load Balancer (ALB). CloudTrail can only track the resource changes made to your ALB, but not the actual IP traffic that goes through it. For this us e case, you have to enable the access logs feature instead. The option that says: Add an Amazon CloudWatch Logs agent on the application load balancer is incorrect because you cannot directly install a Clo udWatch Logs agent to an Application Load Balancer. This is commonly installed on an Amazon EC2 instanc e and not on a load balancer. . 110 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam The option that says: Enable Amazon CloudWatch metr ics on the application load balancer is incorrect because CloudWatch doesn't track the actu al traffic to your ALB. It only monitors the change s to your ALB itself and the actual IP traffic that it d istributes to the target groups. References: http://docs.aws.amazon.com/elasticloadbalancing/lat est/application/load-balancer-access-logs.html https://docs.aws.amazon.com/elasticloadbalancing/la test/application/load-balancer-monitoring.html AWS Elastic Load Balancing Overview: https://youtu.be/UBl5dw59DO8 Check out this AWS Elastic Load Balancing (ELB) Che at Sheet: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/ Application Load Balancer vs Network Load Balancer vs Classic Load Balancer vs Gateway Load Balancer: https://tutorialsdojo.com/application-load-balancer -vs-network-load-balancer-vs-classic-load-balancer/", "references": "" }, { "question": ": A company has an application hosted in an Auto Scal ing group of Amazon EC2 instances across multiple Availability Zones behind an Application Load Balan cer. There are several occasions where some instances are automatically terminated after failin g the HTTPS health checks in the ALB and then purge s Certkingdom all the ephemeral logs stored in the instance. A So lutions Architect must implement a solution that co llects all of the application and server logs effectively. She should be able to perform a root cause analysi s based on the logs, even if the Auto Scaling group immedia tely terminated the instance. What is the EASIEST way for the Architect to automa te the log collection from the Amazon EC2 instances ?", "options": [ "A. Add a lifecycle hook to your Auto Scaling grou p to move instances in the Terminating state", "B. Add a lifecycle hook to your Auto Scaling grou p to move instances in the Terminating state", "D. Add a lifecycle hook to your Auto Scaling group t o move instances in the Terminating state to the" ], "correct": "B. Add a lifecycle hook to your Auto Scaling grou p to move instances in the Terminating state", "explanation": "Explanation The EC2 instances in an Auto Scaling group have a p ath, or lifecycle, that differs from that of other EC2 instances. The lifecycle starts when the Auto Scali ng group launches an instance and puts it into serv ice. The lifecycle ends when you terminate the instance, or the Auto Scaling group takes the instance out o f service and terminates it. 
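The hedged sketch below registers such a terminating lifecycle hook with boto3 (the group name, hook name, and timeout are placeholders); it pauses instances in the Terminating:Wait state for up to the heartbeat timeout so logs can be collected before termination proceeds.

# Illustrative sketch: pausing instances in Terminating:Wait so logs can be pushed
# before termination completes. Group name, hook name, and timeout are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_lifecycle_hook(
    LifecycleHookName="push-logs-before-terminate",
    AutoScalingGroupName="web-app-asg",
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,          # seconds the instance stays in Terminating:Wait
    DefaultResult="CONTINUE",      # proceed with termination if nothing responds in time
)

# Once the log upload finishes (for example, triggered by a CloudWatch Events rule and
# a Lambda function), the wait state is released with complete_lifecycle_action:
# autoscaling.complete_lifecycle_action(
#     LifecycleHookName="push-logs-before-terminate",
#     AutoScalingGroupName="web-app-asg",
#     LifecycleActionResult="CONTINUE",
#     InstanceId="i-0123456789abcdef0",
# )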
You can add a lifecycle hook to your Auto Scaling g roup so that you can perform custom actions when instances launch or terminate. Certkingdom When Amazon EC2 Auto Scaling responds to a scale ou t event, it launches one or more instances. These instances start in the Pending state. If you added an autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook to your Auto Scaling group, the instances move from the Pending state to the Pending:Wait state. After you complete the lifecycle action, the instan ces enter the Pending:Proceed state. When the insta nces are fully configured, they are attached to the Auto Scaling group and they enter the InService state. When Amazon EC2 Auto Scaling responds to a scale in event, it terminates one or more instances. These instances are detached from the Auto Scaling group and enter the Terminating state. If you added an autoscaling:EC2_INSTANCE_TERMINATING lifecycle hook to your Auto Scaling group, the instances move from the Terminating state to the Terminating: Wait state. After you complete the lifecycle action , the instances enter the Terminating:Proceed state. When the instances are fully terminated, they enter the Terminated state. . 112 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam Using CloudWatch agent is the most suitable tool to use to collect the logs. The unified CloudWatch ag ent enables you to do the following: - Collect more system-level metrics from Amazon EC2 instances across operating systems. The metrics ca n include in-guest metrics, in addition to the metric s for EC2 instances. The additional metrics that ca n be collected are listed in Metrics Collected by the Cl oudWatch Agent . Certkingdom - Collect system-level metrics from on-premises ser vers. These can include servers in a hybrid environ ment as well as servers not managed by AWS. - Retrieve custom metrics from your applications or services using the StatsD and collectd protocols. StatsD is supported on both Linux servers and serve rs running Windows Server. collectd is supported on ly on Linux servers. - Collect logs from Amazon EC2 instances and on-pre mises servers, running either Linux or Windows Server. . 113 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam You can store and view the metrics that you collect with the CloudWatch agent in CloudWatch just as yo u Certkingdom can with any other CloudWatch metrics. The default namespace for metrics collected by the CloudWatch agent is CWAgent, although you can specify a differ ent namespace when you configure the agent. Hence, the correct answer is: Add a lifecycle hook to your Auto Scaling group to move instances in the Terminating state to the Terminating:Wait state to delay the termination of unhealthy Amazon EC2 instances. Configure a CloudWatch Events rule for the EC2 Inst ance-terminate Lifecycle Action Auto Scaling Event with an associated Lambda function. Trigger t he CloudWatch agent to push the application logs and then resume the instance termination once all t he logs are sent to CloudWatch Logs. The option that says: Add a lifecycle hook to your Auto Scaling group to move instances in the Terminating state to the Pending:Wait state to dela y the termination of the unhealthy Amazon EC2 instances. Configure a CloudWatch Events rule for t he EC2 Instance-terminate Lifecycle Action Auto Scaling Event with an associated Lambda function. S et up an AWS Systems Manager Automation script . 
that collects and uploads the application logs from the instance to a CloudWatch Logs group. Configure the solution to only resume the instance termination once all the logs were successfully sent is incorrect because the Pending:Wait state refers to the scale-out action in Amazon EC2 Auto Scaling, not to scale-in or instance termination. The option that says: Add a lifecycle hook to your Auto Scaling group to move instances in the Terminating state to the Terminating:Wait state to delay the termination of the unhealthy Amazon EC2 instances. Set up AWS Step Functions to collect the application logs and send them to a CloudWatch Logs group. Configure the solution to resume the instance termination as soon as all the logs were successfully sent to CloudWatch Logs is incorrect because AWS Step Functions is not an appropriate tool for collecting logs from your EC2 instances. You should use a CloudWatch agent instead. The option that says: Add a lifecycle hook to your Auto Scaling group to move instances in the Terminating state to the Terminating:Wait state to delay the termination of the unhealthy Amazon EC2 instances. Configure a CloudWatch Events rule for the EC2 Instance Terminate Successful Auto Scaling Event with an associated Lambda function. Set up the AWS Systems Manager Run Command service to run a script that collects and uploads the application logs from the instance to a CloudWatch Logs group. Resume the instance termination once all the logs are sent is incorrect because although this solution could work, it entails a lot of effort to write a custom script for AWS Systems Manager Run Command to execute. Remember that the scenario asks for a solution that you can implement with the least amount of effort. This solution can be simplified by automatically uploading the logs using a CloudWatch agent, and you have to use the EC2 Instance-terminate Lifecycle Action event instead. References: https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroupLifecycle.html https://docs.aws.amazon.com/autoscaling/ec2/userguide/cloud-watch-events.html#terminate-successful https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-delay-termination/ Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", "references": "" }, { "question": ": A company needs to set up a cost-effective architecture for a log processing application that has frequently accessed, throughput-intensive workloads with large, sequential I/O operations. The application should be hosted in an already existing On-Demand EC2 instance in the VPC. You have to attach a new EBS volume that will be used by the application. Which of the following is the most suitable EBS volume type that you should use in this scenario?", "options": [ "A. EBS Throughput Optimized HDD (st1)", "B. EBS General Purpose SSD (gp2)", "C. EBS Provisioned IOPS SSD (io1)", "D. EBS Cold HDD (sc1)" ], "correct": "A. EBS Throughput Optimized HDD (st1)", "explanation": "Explanation In the exam, always consider the difference between SSD and HDD volume types. This will allow you to easily eliminate specific EBS types in the options, depending on whether the question asks for a storage type that handles small, random I/O operations or large, sequential I/O operations. Since the scenario has workloads with large, sequential I/O operations, we can narrow down the options by selecting HDD volumes instead of SSD volumes, which are more suitable for small, random I/O operations. Throughput Optimized HDD (st1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. This volume type is a good fit for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing. Bootable st1 volumes are not supported. Throughput Optimized HDD (st1) volumes, though similar to Cold HDD (sc1) volumes, are designed to support frequently accessed data. EBS Provisioned IOPS SSD (io1) is incorrect because it is not the most cost-effective EBS type and is primarily used for critical business applications that require sustained IOPS performance. EBS General Purpose SSD (gp2) is incorrect. Although an Amazon EBS General Purpose SSD volume balances price and performance for a wide variety of workloads, it is not suitable for frequently accessed, throughput-intensive workloads. Throughput Optimized HDD is a more suitable option than General Purpose SSD. EBS Cold HDD (sc1) is incorrect. Although it provides a lower-cost HDD volume compared to General Purpose SSD, it is more suitable for less frequently accessed workloads.", "references": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#EBSVolumeTypes_st1 Amazon EBS Overview - SSD vs HDD: https://www.youtube.com/watch?v=LW7x8wyLFvw&t=8s Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/" }, { "question": ": A company plans to design a highly available architecture in AWS. They have two target groups with three EC2 instances each, which are added to an Application Load Balancer. In the security group of the EC2 instance, you have verified that port 80 for HTTP is allowed. However, the instances are still showing out of service from the load balancer. What could be the root cause of this issue?", "options": [ "A. The wrong subnet was used in your VPC", "B. The instances are using the wrong AMI.", "C. The health check configuration is not properly defined.", "D. The wrong instance type was used for the EC2 instance." ], "correct": "C. The health check configuration is not properly defined.", "explanation": "Explanation Since the security group is properly configured, the issue may be caused by a wrong health check configuration in the Target Group. Your Application Load Balancer periodically sends requests to its registered targets to test their status. These tests are called health checks. Each load balancer node routes requests only to the healthy targets in the enabled Availability Zones for the load balancer. Each load balancer node checks the health of each target, using the health check settings for the target group with which the target is registered. After your target is registered, it must pass one health check to be considered healthy. After each health check is completed, the load balancer node closes the connection that was established for the health check.", "references": "
118 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam http://docs.aws.amazon.com/elasticloadbalancing/lat est/classic/elb-healthchecks.html AWS Elastic Load Balancing Overview: https://www.youtube.com/watch?v=UBl5dw59DO8 Check out this AWS Elastic Load Balancing (ELB) Che at Sheet: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/ ELB Health Checks vs Route 53 Health Checks For Tar get Health Monitoring: https://tutorialsdojo.com/elb-health-checks-vs-rout e-53-health-checks-for-target-health-monitoring/" }, { "question": ": A company is using AWS IAM to manage access to AWS services. The Solutions Architect of the company created the following IAM policy for AWS La mbda: 1. 1. { 1. 2. \"Version\": \"2012-10-17\", 2. 3. \"Statement\": [ 3. 4. { 4. 5. \"Effect\": \"Allow\", 5. 6. \"Action\": [ 6. 7. \"lambda:CreateFunction\", 7. 8. \"lambda:DeleteFunction\" Certkingdom 8. 9. ], 9. 10. \"Resource\": \"*\" 10.11. }, 11.12. { 12.13. \"Effect\": \"Deny\", 13.14. \"Action\": [ 14.15. \"lambda:CreateFunction\", 15.16. \"lambda:DeleteFunction\", 16.17. \"lambda:InvokeFunction\", 17.18. \"lambda:TagResource\" 18.19. ], 19.20. \"Resource\": \"*\", 20.21. \"Condition\": { 21.22. \"IpAddress\": { 22.23. \"aws:SourceIp\": \"187.5.104.11/32\" . 119 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam 23.24. } 24.25. } 25.26. } 26.27. ] 27.28. }", "options": [ "A. Delete an AWS Lambda function from any network address.", "B. Create an AWS Lambda function using the 187.5. 104.11/32 address.", "C. Delete an AWS Lambda function using the 187.5. 104.11/32 address.", "D. Create an AWS Lambda function using the 100.22 0.0.11/32 address." ], "correct": "D. Create an AWS Lambda function using the 100.22 0.0.11/32 address.", "explanation": "Explanation You manage access in AWS by creating policies and a ttaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an o bject in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determ ine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. Certkingdom . 120 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam You can use AWS Identity and Access Management (IAM ) to manage access to the Lambda API and resources like functions and layers. Based on the g iven IAM policy, you can create and delete a Lambda function from any network address except for the IP address 187.5.104.11/32. Since the IP address, 100.220.0.11/32 is not denied in the policy, you ca n use this address to create a Lambda function. Hence, the correct answer is: Create an AWS Lambda function using the 100.220.0.11/32 address. The option that says: Delete an AWS Lambda function using the 187.5.104.11/32 address is incorrect because the source IP used in this option is denied by the IAM policy. The option that says: Delete an AWS Lambda function from any network address is incorrect. You can't delete a Lambda function from any network add ress because the address 187.5.104.11/32 is denied by the policy. The option that says: Create an AWS Lambda function using the 187.5.104.11/32 address is incorrect. Just like the option above, the IAM policy denied t he IP address 187.5.104.11/32. 
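One hedged way to sanity-check this kind of policy logic is the IAM policy simulator; the sketch below evaluates lambda:CreateFunction under the two source IPs from the question, with the policy document reproduced from the question above.

# Illustrative sketch: checking the allow/deny outcome of the policy above with the
# IAM policy simulator. Expected result: allowed for 100.220.0.11, explicit deny for
# 187.5.104.11, matching the explanation.
import json
import boto3

iam = boto3.client("iam")

policy_json = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["lambda:CreateFunction", "lambda:DeleteFunction"],
         "Resource": "*"},
        {"Effect": "Deny",
         "Action": ["lambda:CreateFunction", "lambda:DeleteFunction",
                    "lambda:InvokeFunction", "lambda:TagResource"],
         "Resource": "*",
         "Condition": {"IpAddress": {"aws:SourceIp": "187.5.104.11/32"}}},
    ],
})

for source_ip in ("100.220.0.11", "187.5.104.11"):
    result = iam.simulate_custom_policy(
        PolicyInputList=[policy_json],
        ActionNames=["lambda:CreateFunction"],
        ContextEntries=[{
            "ContextKeyName": "aws:SourceIp",
            "ContextKeyValues": [source_ip],
            "ContextKeyType": "ip",
        }],
    )
    print(source_ip, "->", result["EvaluationResults"][0]["EvalDecision"])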
References: https://docs.aws.amazon.com/IAM/latest/UserGuide/ac cess_policies.html https://docs.aws.amazon.com/lambda/latest/dg/lambda -permissions.html Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/", "references": "" }, { "question": ": Certkingdom A company has multiple AWS Site-to-Site VPN connect ions placed between their VPCs and their remote network. During peak hours, many employees are expe riencing slow connectivity issues, which limits the ir productivity. The company has asked a solutions arc hitect to scale the throughput of the VPN connectio ns. Which solution should the architect carry out?", "options": [ "A. Add more virtual private gateways to a VPC and enable Equal Cost Multipath Routing", "B. Associate the VPCs to an Equal Cost Multipath Routing (ECMR)-enabled transit gateway", "C. Re-route some of the VPN connections to a seco ndary customer gateway device on the", "D. Modify the VPN configuration by increasing the number of tunnels to scale the throughput." ], "correct": "B. Associate the VPCs to an Equal Cost Multipath Routing (ECMR)-enabled transit gateway", "explanation": "Explanation With AWS Transit Gateway, you can simplify the conn ectivity between multiple VPCs and also connect to any VPC attached to AWS Transit Gateway with a sing le VPN connection. AWS Transit Gateway also enables you to scale the I Psec VPN throughput with equal-cost multi-path (ECMP) routing support over multiple VPN tunnels. A single VPN tunnel still has a maximum throughput of 1.25 Gbps. If you establish multiple VPN tunnels to an ECMP-enabled transit gateway, it can scale beyond the default limit of 1.25 Gbps. Hence, the correct answer is: Associate the VPCs to an Equal Cost Multipath Routing (ECMR)- enabled transit gateway and attach additional VPN t unnels. The option that says: Add more virtual private gate ways to a VPC and enable Equal Cost Multipath Certkingdom Routing (ECMR) to get higher VPN bandwidth is incor rect because a VPC can only have a single virtual private gateway attached to it one at a tim e. Also, there is no option to enable ECMR in a vir tual private gateway. The option that says: Modify the VPN configuration by increasing the number of tunnels to scale the throughput is incorrect. The maximum tunnel for a V PN connection is two. You cannot increase this beyond its limit. The option that says: Re-route some of the VPN conn ections to a secondary customer gateway device on the remote network's end is incorrect. This woul d only increase connection redundancy and won't increase throughput. For example, connections can f ailover to the secondary customer gateway device in case the primary customer gateway device becomes un available. References: . 122 of 128 Exam AWS Certified Solutions Architect Associate ( SAA-C02 / SAA-C03 ) Exam https://aws.amazon.com/premiumsupport/knowledge-cen ter/transit-gateway-ecmp-multiple-tunnels/ https://aws.amazon.com/blogs/networking-and-content -delivery/scaling-vpn-throughput-using-aws-transit- gateway/ Check out this AWS Transit Gateway Cheat Sheet: https://tutorialsdojo.com/aws-transit-gateway/", "references": "" }, { "question": ": A company has a web application hosted in their on- premises infrastructure that they want to migrate t o AWS cloud. Your manager has instructed you to ensur e that there is no downtime while the migration process is on-going. 
References: https://aws.amazon.com/premiumsupport/knowledge-center/transit-gateway-ecmp-multiple-tunnels/ https://aws.amazon.com/blogs/networking-and-content-delivery/scaling-vpn-throughput-using-aws-transit-gateway/ Check out this AWS Transit Gateway Cheat Sheet: https://tutorialsdojo.com/aws-transit-gateway/", "references": "" }, { "question": ": A company has a web application hosted in their on-premises infrastructure that they want to migrate to the AWS cloud. Your manager has instructed you to ensure that there is no downtime while the migration process is ongoing. In order to achieve this, your team decided to divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure. Once the migration is over and the application works with no issues, a full diversion to AWS will be implemented. The company's VPC is connected to its on-premises network via an AWS Direct Connect connection. Which of the following are the possible solutions that you can implement to satisfy the above requirement? (Select TWO.) A. Use an Application Elastic Load Balancer with Weighted Target Groups to divert and proportion the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure.", "options": [ "B. Use Route 53 with Failover routing policy to divert and proportion the traffic between the", "C. Use AWS Global Accelerator to divert and proportion the HTTP and HTTPS traffic", "D. Use a Network Load Balancer with Weighted Target Groups to divert the traffic between" ], "correct": "", "explanation": "Explanation Application Load Balancers support Weighted Target Groups routing. With this feature, you will be able to do weighted routing of the traffic forwarded by a rule to multiple target groups. This enables various use cases like blue-green, canary, and hybrid deployments without the need for multiple load balancers. It even enables zero-downtime migration between on-premises and cloud or between different compute types like EC2 and Lambda. To divert 50% of the traffic to the new application in AWS and the other 50% to the on-premises application, you can also use Route 53 with a Weighted routing policy, which will split the traffic between the on-premises and AWS-hosted application accordingly. Weighted routing lets you associate multiple resources with a single domain name (tutorialsdojo.com) or subdomain name (portal.tutorialsdojo.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software. You set a specific percentage of traffic to be allocated to each resource by specifying weights. For example, if you want to send a tiny portion of your traffic to one resource and the rest to another, you might specify weights of 1 and 255. The resource with a weight of 1 gets 1/256th of the traffic (1/(1+255)), and the other resource gets 255/256ths (255/(1+255)). You can gradually change the balance by changing the weights. If you want to stop sending traffic to a resource, you can change the weight for that record to 0.
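A hedged sketch of that 50/50 weighted split follows; the hosted zone ID, record name, and endpoint values are placeholders for illustration only.

```python
# Hedged sketch: two weighted records for an even traffic split.
import boto3

route53 = boto3.client("route53")

changes = []
for set_id, weight, value in [
    ("aws-hosted", 50, "alb-example-1234567890.us-east-1.elb.amazonaws.com"),
    ("on-premises", 50, "onprem.example.com"),
]:
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "CNAME",
            "SetIdentifier": set_id,
            "Weight": weight,   # traffic share = weight / sum of all weights
            "TTL": 60,
            "ResourceRecords": [{"Value": value}],
        },
    })

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Comment": "Gradual migration to AWS", "Changes": changes},
)
```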
When you create a target group in your Application Load Balancer, you specify its target type. This determines the type of target you specify when registering with this target group. You can select the following target types: 1. instance - the targets are specified by instance ID; 2. ip - the targets are IP addresses; 3. Lambda - the target is a Lambda function. When the target type is ip, you can specify IP addresses from one of the following CIDR blocks: 10.0.0.0/8 (RFC 1918), 100.64.0.0/10 (RFC 6598), 172.16.0.0/12 (RFC 1918), 192.168.0.0/16 (RFC 1918), or the subnets of the VPC for the target group. These supported CIDR blocks enable you to register the following with a target group: ClassicLink instances, instances in a VPC that is peered to the load balancer VPC, AWS resources that are addressable by IP address and port (for example, databases), and on-premises resources linked to AWS through AWS Direct Connect or a VPN connection. Take note that you cannot specify publicly routable IP addresses. If you specify targets using an instance ID, traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance. If you specify targets using IP addresses, you can route traffic to an instance using any private IP address from one or more network interfaces. This enables multiple applications on an instance to use the same port, and each network interface can have its own security group. Hence, the correct answers are the following options: - Use an Application Elastic Load Balancer with Weighted Target Groups to divert and proportion the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure. - Use Route 53 with Weighted routing policy to divert the traffic between the on-premises and AWS-hosted application. Divert 50% of the traffic to the new application in AWS and the other 50% to the application hosted in their on-premises infrastructure. The option that says: Use a Network Load Balancer with Weighted Target Groups to divert the traffic between the on-premises and AWS-hosted application is incorrect because a Network Load Balancer doesn't have Weighted Target Groups to divert the traffic between the on-premises and AWS-hosted application. The option that says: Use Route 53 with Failover routing policy to divert and proportion the traffic between the on-premises and AWS-hosted application is incorrect because you cannot divert and proportion the traffic between the on-premises and AWS-hosted application using a Failover routing policy; this policy is primarily used to configure active-passive failover for your application architecture. The option that says: Use AWS Global Accelerator to divert and proportion the HTTP and HTTPS traffic between the on-premises and AWS-hosted application. Ensure that the on-premises network has an AnyCast static IP address and is connected to your VPC via a Direct Connect Gateway is incorrect because although you can control the proportion of traffic directed to each endpoint in AWS Global Accelerator by assigning weights across the endpoints, it is still wrong to use a Direct Connect Gateway and an AnyCast IP address since these are not required at all.
You can only associate static IP addresses provided by AWS Global Accelerator to regional AWS resources or endpoints, such as Network Load Balancers, Application Load Balancers, EC2 Instances, and Elastic IP addresses. Take note that a Direct Connect Gateway, per se, doesn't establish a connection from your on-premises network to your Amazon VPCs; it simply enables you to use your AWS Direct Connect connection to connect to two or more VPCs that are located in different AWS Regions. References: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html https://aws.amazon.com/blogs/aws/new-application-load-balancer-simplifies-deployment-with-weighted-target-groups/ https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/", "references": "" }, { "question": ": An operations team has an application running on EC2 instances inside two custom VPCs. The VPCs are located in the Ohio and N. Virginia Regions respectively. The team wants to transfer data between the instances without traversing the public internet. Which combination of steps will achieve this? (Select TWO.)", "options": [ "A. Re-configure the route table's target and destination of the instances' subnet.", "B. Deploy a VPC endpoint on each region to enable a private connection.", "C. Create an Egress-only Internet Gateway.", "D. Set up a VPC peering connection between the VPCs." ], "correct": "", "explanation": "Explanation A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different Regions (also known as an inter-Region VPC peering connection). Inter-Region VPC Peering provides a simple and cost-effective way to share resources between Regions or replicate data for geographic redundancy. Built on the same horizontally scaled, redundant, and highly available technology that powers VPC today, Inter-Region VPC Peering encrypts inter-Region traffic with no single point of failure or bandwidth bottleneck. Traffic using Inter-Region VPC Peering always stays on the global AWS backbone and never traverses the public internet, thereby reducing threat vectors, such as common exploits and DDoS attacks. Hence, the correct answers are: - Set up a VPC peering connection between the VPCs. - Re-configure the route table's target and destination of the instances' subnet.
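A minimal, hedged sketch of those two steps follows; all IDs and CIDR blocks are placeholders, and in practice the accepter side may briefly need to wait for the peering request to reach the pending-acceptance state.

```python
# Hedged sketch: inter-Region VPC peering plus route table updates.
import boto3

use1 = boto3.client("ec2", region_name="us-east-1")   # requester (N. Virginia)
use2 = boto3.client("ec2", region_name="us-east-2")   # accepter (Ohio)

peering = use1.create_vpc_peering_connection(
    VpcId="vpc-0virginia000000000",
    PeerVpcId="vpc-0ohio0000000000000",
    PeerRegion="us-east-2",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept in the peer Region, then point each route table at the other VPC's CIDR.
use2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

use1.create_route(RouteTableId="rtb-0virginia000000000",
                  DestinationCidrBlock="10.20.0.0/16",
                  VpcPeeringConnectionId=pcx_id)
use2.create_route(RouteTableId="rtb-0ohio0000000000000",
                  DestinationCidrBlock="10.10.0.0/16",
                  VpcPeeringConnectionId=pcx_id)
```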
The option that says: Create an Egress-only Internet Gateway is incorrect because this will just enable outbound IPv6 communication from instances in a VPC to the internet. Take note that the scenario requires private communication to be enabled between VPCs in two different Regions. The option that says: Launch a NAT Gateway in the public subnet of each VPC is incorrect because NAT Gateways are used to allow instances in private subnets to access the public internet, and the requirement is to make sure that communication between instances will not traverse the internet. The option that says: Deploy a VPC endpoint on each region to enable a private connection is incorrect because VPC endpoints are region-specific and do not support inter-region communication. References: https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html https://aws.amazon.com/about-aws/whats-new/2017/11/announcing-support-for-inter-region-vpc-peering/ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A company plans to design an application that can handle batch processing of large amounts of financial data. The Solutions Architect is tasked to create two Amazon S3 buckets to store the input and output data. The application will transfer the data between multiple EC2 instances over the network to complete the data processing. Which of the following options would reduce the data transfer costs?", "options": [ "A. Deploy the Amazon EC2 instances in private subnets in different Availability Zones.", "B. Deploy the Amazon EC2 instances in the same Availability Zone.", "C. Deploy the Amazon EC2 instances in the same AWS Region.", "D. Deploy the Amazon EC2 instances behind an Application Load Balancer." ], "correct": "B. Deploy the Amazon EC2 instances in the same Availability Zone.", "explanation": "Explanation Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. In this scenario, you should deploy all the EC2 instances in the same Availability Zone. If you recall, data transferred between Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache instances, and Elastic Network Interfaces in the same Availability Zone is free. Instead of using the public network to transfer the data, you can use the private network to reduce the overall data transfer costs. Hence, the correct answer is: Deploy the Amazon EC2 instances in the same Availability Zone. The option that says: Deploy the Amazon EC2 instances in the same AWS Region is incorrect because even if the instances are deployed in the same Region, they could still be charged for inter-Availability Zone data transfers if the instances are distributed across different Availability Zones. You must deploy the instances in the same Availability Zone to avoid the data transfer costs. The option that says: Deploy the Amazon EC2 instances behind an Application Load Balancer is incorrect because this approach won't reduce the overall data transfer costs; an Application Load Balancer is primarily used to distribute the incoming traffic to underlying EC2 instances. The option that says: Deploy the Amazon EC2 instances in private subnets in different Availability Zones is incorrect. Although the data transfer between instances in private subnets is free, there will be an issue with retrieving the data in Amazon S3. Remember that you won't be able to connect to your Amazon S3 bucket if you are using a private subnet unless you have a VPC Endpoint.
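A hedged sketch of keeping the processing fleet inside one Availability Zone follows; the AMI, subnet, instance type, and count are placeholders, and the explicit AZ must match the subnet's own AZ.

```python
# Hedged sketch: launch every processing node into the same subnet (one AZ).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",                # placeholder AMI
    InstanceType="c5.2xlarge",
    MinCount=4,
    MaxCount=4,
    SubnetId="subnet-0abc1234567890def",            # a single subnet maps to a single AZ
    Placement={"AvailabilityZone": "us-east-1a"},   # must match the subnet's AZ
)
# Node-to-node traffic over private IPs then stays inside us-east-1a,
# avoiding inter-AZ data transfer charges for the processing exchange.
```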
References: https://aws.amazon.com/ec2/pricing/on-demand/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html https://aws.amazon.com/blogs/mt/using-aws-cost-explorer-to-analyze-data-transfer-costs/ Amazon EC2 Overview: https://www.youtube.com/watch?v=7VsGIHT_jQE Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/", "references": "" }, { "question": ": An intelligence agency is currently hosting a learning and training portal in AWS. Your manager instructed you to launch a large EC2 instance with an attached EBS Volume and enable Enhanced Networking. What are the valid case scenarios in using Enhanced Networking? (Select TWO.)", "options": [ "A. When you need a low packet-per-second performance", "B. When you need consistently lower inter-instance latencies", "C. When you need a dedicated connection to your on-premises data center", "D. When you need a higher packet per second (PPS) performance" ], "correct": "", "explanation": "Explanation Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. There is no additional charge for using enhanced networking. The option that says: When you need a low packet-per-second performance is incorrect because you want to increase packet-per-second performance, not lower it, when you enable enhanced networking. The option that says: When you need high latency networking is incorrect because higher latencies mean a slower network, which is the opposite of what you want when you enable enhanced networking. The option that says: When you need a dedicated connection to your on-premises data center is incorrect because enabling enhanced networking does not provide a dedicated connection to your on-premises data center. Use AWS Direct Connect or enable VPN tunneling instead for this purpose. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/", "references": "" }, { "question": ": A company is using Amazon S3 to store frequently accessed data. The S3 bucket is shared with external users that will upload files regularly. A Solutions Architect needs to implement a solution that will grant the bucket owner full access to all uploaded objects in the S3 bucket. What action should be done to achieve this task?", "options": [ "A. Enable the Requester Pays feature in the Amazon S3 bucket.", "B. Create a bucket policy that will require the users to set the object's ACL to bucket-owner-", "C. Create a CORS configuration in the S3 bucket.", "D. Enable server access logging and set up an IAM policy that will require the users to set the" ], "correct": "B. Create a bucket policy that will require the users to set the object's ACL to bucket-owner-", "explanation": "Explanation Amazon S3 stores data as objects within buckets.
An object is a file and any optional metadata that describes the file. To store a file in Amazon S3, you upload it to a bucket. When you upload a file as an object, you can set permissions on the object and any metadata. Buckets are containers for objects. You can have one or more buckets, and you can control access for each bucket, deciding who can create, delete, and list objects in it. You can also choose the geographical Region where Amazon S3 will store the bucket and its contents, and view access logs for the bucket and its objects. By default, an S3 object is owned by the AWS account that uploaded it even though the bucket is owned by another account. To get full access to the object, the object owner must explicitly grant the bucket owner access. You can create a bucket policy to require external users to grant bucket-owner-full-control when uploading objects so the bucket owner can have full access to the objects. Hence, the correct answer is: Create a bucket policy that will require the users to set the object's ACL to bucket-owner-full-control. The option that says: Enable the Requester Pays feature in the Amazon S3 bucket is incorrect because this option won't grant the bucket owner full access to the uploaded objects in the S3 bucket. With Requester Pays buckets, the requester, instead of the bucket owner, pays the cost of the request and the data download from the bucket. The option that says: Create a CORS configuration in the S3 bucket is incorrect because this option only allows cross-origin access to your Amazon S3 resources. If you need to grant the bucket owner full control of the uploaded objects, you must create a bucket policy and require external users to grant bucket-owner-full-control when uploading objects. The option that says: Enable server access logging and set up an IAM policy that will require the users to set the bucket's ACL to bucket-owner-full-control is incorrect because this option only provides detailed records of the requests that are made to a bucket. In addition, the bucket-owner-full-control canned ACL must be enforced through the bucket policy and not an IAM policy, and it must require the users to set the object's ACL (not the bucket's) to bucket-owner-full-control.
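A hedged sketch of such a bucket policy, applied with boto3 to a placeholder bucket name, is shown below; the Sid and bucket name are purely illustrative.

```python
# Hedged sketch: deny PutObject unless the uploader grants the bucket owner
# full control via the matching canned ACL.
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireBucketOwnerFullControl",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-shared-uploads/*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
        },
    }],
}

s3.put_bucket_policy(Bucket="example-shared-uploads", Policy=json.dumps(policy))

# External uploaders must then include the matching canned ACL, e.g.:
# s3.put_object(Bucket="example-shared-uploads", Key="report.csv",
#               Body=b"...", ACL="bucket-owner-full-control")
```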
References: https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-access/ https://aws.amazon.com/premiumsupport/knowledge-center/s3-require-object-ownership/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A Solutions Architect designed a real-time data analytics system based on Kinesis Data Stream and Lambda. A week after the system was deployed, the users noticed that it performed slowly as the data rate increased. The Architect identified that the performance of the Kinesis Data Streams is causing this problem. Which of the following should the Architect do to improve performance?", "options": [ "A. Replace the data stream with Amazon Kinesis Data Firehose instead.", "B. Implement Step Scaling to the Kinesis Data Stream.", "C. Increase the number of shards of the Kinesis stream by using the UpdateShardCount", "D. Improve the performance of the stream by decreasing the number of its shards using the" ], "correct": "C. Increase the number of shards of the Kinesis stream by using the UpdateShardCount", "explanation": "Explanation Amazon Kinesis Data Streams supports resharding, which lets you adjust the number of shards in your stream to adapt to changes in the rate of data flow through the stream. Resharding is considered an advanced operation. There are two types of resharding operations: shard split and shard merge. In a shard split, you divide a single shard into two shards; in a shard merge, you combine two shards into a single shard. Resharding is always pairwise in the sense that you cannot split into more than two shards in a single operation, and you cannot merge more than two shards in a single operation. The shard or pair of shards that the resharding operation acts on are referred to as parent shards, and the shard or pair of shards that result from the resharding operation are referred to as child shards. Splitting increases the number of shards in your stream and therefore increases the data capacity of the stream. Because you are charged on a per-shard basis, splitting increases the cost of your stream. Similarly, merging reduces the number of shards in your stream and therefore decreases the data capacity, and cost, of the stream. If your data rate increases, you can increase the number of shards allocated to your stream to maintain application performance. You can reshard your stream using the UpdateShardCount API. The throughput of an Amazon Kinesis data stream is designed to scale without limits via increasing the number of shards within a data stream. Hence, the correct answer is to increase the number of shards of the Kinesis stream by using the UpdateShardCount command. Replacing the data stream with Amazon Kinesis Data Firehose instead is incorrect because the throughput of Kinesis Firehose is not exceptionally higher than Kinesis Data Streams. In fact, the throughput of an Amazon Kinesis data stream is designed to scale without limits via increasing the number of shards within a data stream. Improving the performance of the stream by decreasing the number of its shards using the MergeShard command is incorrect because merging the shards will effectively decrease the performance of the stream rather than improve it. Implementing Step Scaling to the Kinesis Data Stream is incorrect because there is no Step Scaling feature for Kinesis Data Streams; this is only applicable to EC2.
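A hedged sketch of resharding with UpdateShardCount follows; the stream name is a placeholder, and the doubling factor is just an example.

```python
# Hedged sketch: double a stream's open shard count with UpdateShardCount.
import boto3

kinesis = boto3.client("kinesis")

summary = kinesis.describe_stream_summary(StreamName="financial-analytics")
open_shards = summary["StreamDescriptionSummary"]["OpenShardCount"]

kinesis.update_shard_count(
    StreamName="financial-analytics",
    TargetShardCount=open_shards * 2,   # e.g. scale from 4 to 8 shards
    ScalingType="UNIFORM_SCALING",
)
```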
References: https://aws.amazon.com/blogs/big-data/scale-your-amazon-kinesis-stream-capacity-with-updateshardcount/ https://aws.amazon.com/kinesis/data-streams/faqs/ https://docs.aws.amazon.com/streams/latest/dev/kinesis-using-sdk-java-resharding.html Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/", "references": "" }, { "question": ": A fast food company is using AWS to host their online ordering system which uses an Auto Scaling group of EC2 instances deployed across multiple Availability Zones with an Application Load Balancer in front. To better handle the incoming traffic from various digital devices, you are planning to implement a new routing system where requests which have a URL of <server>/api/android are forwarded to one specific target group named \"Android-Target-Group\". Conversely, requests which have a URL of <server>/api/ios are forwarded to another separate target group named \"iOS-Target-Group\". How can you implement this change in AWS?", "options": [ "A. Use path conditions to define rules that forward requests to different target groups based on", "B. Replace your ALB with a Gateway Load Balancer then use path conditions to define rules", "C. Use host conditions to define rules that forward requests to different target groups based on", "D. Replace your ALB with a Network Load Balancer then use host conditions to define rules" ], "correct": "A. Use path conditions to define rules that forward requests to different target groups based on", "explanation": "Explanation If your application is composed of several individual services, an Application Load Balancer can route a request to a service based on the content of the request, such as the Host field, Path URL, HTTP header, HTTP method, Query string, or Source IP address. Path-based routing allows you to route a client request based on the URL path of the HTTP header. Each path condition has one path pattern; if the URL in a request matches the path pattern in a listener rule exactly, the request is routed using that rule. A path pattern is case-sensitive, can be up to 128 characters in length, and can contain any of the following characters: A-Z, a-z, 0-9, _ - . $ / ~ \" ' @ : + and & (using &amp;). You can also include up to three wildcard characters: * (matches 0 or more characters) and ? (matches exactly 1 character). Example path patterns: /img/* and /js/*. You can use path conditions to define rules that forward requests to different target groups based on the URL in the request (also known as path-based routing). This type of routing is the most appropriate solution for this scenario. Hence, the correct answer is: Use path conditions to define rules that forward requests to different target groups based on the URL in the request. The option that says: Use host conditions to define rules that forward requests to different target groups based on the hostname in the host header. This enables you to support multiple domains using a single load balancer is incorrect because host-based routing forwards requests based on the hostname in the host header instead of the URL path, which is what is needed in this scenario. The option that says: Replace your ALB with a Gateway Load Balancer then use path conditions to define rules that forward requests to different target groups based on the URL in the request is incorrect because a Gateway Load Balancer does not support path-based routing; you must use an Application Load Balancer. The option that says: Replace your ALB with a Network Load Balancer then use host conditions to define rules that forward requests to different target groups based on the URL in the request is incorrect because a Network Load Balancer is used for applications that need extreme network performance and static IP addresses, and it does not support path-based routing. Furthermore, the statement mentions host-based routing even though the scenario is about path-based routing.
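A hedged sketch of the two path-based rules for this scenario is shown below; the listener and target group ARNs are placeholders.

```python
# Hedged sketch: two path-pattern rules on an existing ALB listener.
import boto3

elbv2 = boto3.client("elbv2")

listener_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/game-alb/abc/def"
rules = [
    (10, "/api/android*", "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/Android-Target-Group/aaa"),
    (20, "/api/ios*",     "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/iOS-Target-Group/bbb"),
]

for priority, pattern, target_group_arn in rules:
    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=priority,
        Conditions=[{"Field": "path-pattern", "Values": [pattern]}],
        Actions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
    )
```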
References: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html#application-load-balancer-benefits https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#path-conditions Check out this AWS Elastic Load Balancing (ELB) Cheat Sheet: https://tutorialsdojo.com/aws-elastic-load-balancing-elb/ Application Load Balancer vs Network Load Balancer vs Classic Load Balancer: https://tutorialsdojo.com/application-load-balancer-vs-network-load-balancer-vs-classic-load-balancer/", "references": "" }, { "question": ": A website hosted on Amazon ECS container instances loads slowly during peak traffic, affecting its availability. Currently, the container instances are run behind an Application Load Balancer, and CloudWatch alarms are configured to send notifications to the operations team if there is a problem in availability so they can scale out if needed. A solutions architect needs to create an automatic scaling solution when such problems occur. Which solution could satisfy the requirement? (Select TWO.)", "options": [ "A. Create an AWS Auto Scaling policy that scales out the ECS cluster when the cluster's CPU", "B. Create an AWS Auto Scaling policy that scales out the ECS service when the ALB hits a", "C. Create an AWS Auto Scaling policy that scales out an ECS service when the ALB endpoint", "D. Create an AWS Auto Scaling policy that scales out the ECS service when the service's" ], "correct": "", "explanation": "Explanation AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it's easy to set up application scaling for multiple resources across multiple services in minutes. The service provides a simple, powerful user interface that lets you build scaling plans for resources including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas. In this scenario, you can set up a scaling policy that triggers a scale-out activity on an ECS service or ECS container instance based on the metric that you prefer. The following metrics are available for instances: CPU Utilization, Disk Reads, Disk Read Operations, Disk Writes, Disk Write Operations, Network In, Network Out, Status Check Failed (Any), Status Check Failed (Instance), and Status Check Failed (System). The following metrics are available for an ECS service: ECSServiceAverageCPUUtilization (average CPU utilization of the service), ECSServiceAverageMemoryUtilization (average memory utilization of the service), and ALBRequestCountPerTarget (number of requests completed per target in an Application Load Balancer target group).
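A hedged sketch of a target-tracking policy built on one of these ECS service metrics follows; the cluster and service names, capacity limits, and target value are placeholders.

```python
# Hedged sketch: target-tracking scale-out for an ECS service on memory utilization.
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "service/web-cluster/web-service"   # placeholder cluster/service

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

autoscaling.put_scaling_policy(
    PolicyName="memory-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,   # keep average memory utilization around 70%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageMemoryUtilization"
        },
    },
)
```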
Hence, the correct answers are: - Create an AWS Auto Scaling policy that scales out the ECS service when the service's memory utilization is too high. - Create an AWS Auto Scaling policy that scales out the ECS cluster when the cluster's CPU utilization is too high. The option that says: Create an AWS Auto Scaling policy that scales out an ECS service when the ALB endpoint becomes unreachable is incorrect because this would be a different problem that needs to be addressed differently. An unreachable ALB endpoint could mean other things, such as a misconfigured security group or network access control list. The option that says: Create an AWS Auto Scaling policy that scales out the ECS service when the ALB hits a high CPU utilization is incorrect because the ALB is a managed resource; you cannot track or view its resource utilization. The option that says: Create an AWS Auto Scaling policy that scales out the ECS cluster when the ALB target group's CPU utilization is too high is incorrect because AWS Auto Scaling does not support this metric for the ALB. References: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-configure-auto-scaling.html https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-monitoring.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", "references": "" }, { "question": ": A disaster recovery team is planning to back up on-premises records to a local file server share through the SMB protocol. To meet the company's business continuity plan, the team must ensure that a copy of data from 48 hours ago is available for immediate access. Accessing older records with delay is tolerable. Which should the DR team implement to meet the objective with the LEAST amount of configuration effort?", "options": [ "A. Use an AWS Storage File gateway with enough storage to keep data from the last 48 hours.", "B. Create an AWS Backup plan to copy data backups to a local SMB share every 48 hours.", "C. Mount an Amazon EFS file system on the on-premises client and copy all backups to an", "D. Create an SMB file share in Amazon FSx for Windows File Server that has enough storage" ], "correct": "", "explanation": "Explanation Amazon S3 File Gateway presents a file interface that enables you to store files as objects in Amazon S3 using the industry-standard NFS and SMB file protocols, and access those files via NFS and SMB from your data center or Amazon EC2, or access those files as objects directly in Amazon S3. When you deploy File Gateway, you specify how much disk space you want to allocate for the local cache. This local cache acts as a buffer for writes and provides low-latency access to data that was recently written to or read from Amazon S3. When a client writes data to a file via File Gateway, that data is first written to the local cache disk on the gateway itself. Only once the data has been safely persisted to the local cache does the File Gateway acknowledge the write back to the client. From there, File Gateway transfers the data to the S3 bucket asynchronously in the background, optimizing data transfer using multipart parallel uploads and encrypting data in transit using HTTPS. In this scenario, you can deploy an AWS Storage File Gateway to the on-premises client. After activating the File Gateway, create an SMB share and mount it as a local disk at the on-premises end, then copy the backups to the SMB share. You must ensure that you size the File Gateway's local cache appropriately for the backup data that needs immediate access. After the backup is done, you will still be able to access the older data, but with a small delay, since data that is not in the cache needs to be retrieved from Amazon S3. Hence, the correct answer is: Use an AWS Storage File gateway with enough storage to keep data from the last 48 hours.
Send the backups to an SMB share mounted as a local disk. The option that says: Create an SMB file share in Amazon FSx for Windows File Server that has enough storage to store all backups. Access the file share from on-premises is incorrect because this requires additional setup; you would need a Direct Connect or VPN connection from on-premises to AWS first in order for this to work. The option that says: Mount an Amazon EFS file system on the on-premises client and copy all backups to an NFS share is incorrect because the file share required in the scenario needs to use the SMB protocol. The option that says: Create an AWS Backup plan to copy data backups to a local SMB share every 48 hours is incorrect because AWS Backup only works on AWS resources. References: https://aws.amazon.com/blogs/storage/easily-store-your-sql-server-backups-in-amazon-s3-using-file-gateway/ https://aws.amazon.com/storagegateway/faqs/ AWS Storage Gateway Overview: https://www.youtube.com/watch?v=pNb7xOBJjHE Check out this AWS Storage Gateway Cheat Sheet: https://tutorialsdojo.com/aws-storage-gateway/", "references": "" }, { "question": ": An application is using a Lambda function to process complex financial data that runs for 15 minutes on average. Most invocations were successfully processed. However, you noticed that there are a few terminated invocations throughout the day, which caused data discrepancies in the application. Which of the following is the most likely cause of this issue?", "options": [ "A. The failed Lambda functions have been running for over 15 minutes and reached the", "B. The Lambda function contains a recursive code and has been running for over 15 minutes.", "C. The concurrent execution limit has been reached.", "D. The failed Lambda Invocations contain a ServiceException error which means that the" ], "correct": "A. The failed Lambda functions have been running for over 15 minutes and reached the", "explanation": "Explanation A Lambda function consists of code and any associated dependencies. In addition, a Lambda function also has configuration information associated with it. Initially, you specify the configuration information when you create a Lambda function, and Lambda provides an API for you to update some of that configuration data. You pay for the AWS resources that are used to run your Lambda function. To prevent your Lambda function from running indefinitely, you specify a timeout. When the specified timeout is reached, AWS Lambda terminates the execution of your Lambda function, so it is recommended that you set this value based on your expected execution time. The default timeout is 3 seconds, and the maximum execution duration per request in AWS Lambda is 900 seconds, which is equivalent to 15 minutes. Hence, the correct answer is the option that says: The failed Lambda functions have been running for over 15 minutes and reached the maximum execution time. Take note that you can invoke a Lambda function synchronously either by calling the Invoke operation or by using an AWS SDK in your preferred runtime. If you anticipate a long-running Lambda function, your client may time out before function execution completes. To avoid this, update the client timeout or your SDK configuration.
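A hedged sketch of both adjustments, using a placeholder function name, is shown below: the function's own timeout is raised to the 900-second maximum, and the SDK's read timeout is widened so a synchronous Invoke can wait that long.

```python
# Hedged sketch: raise the function timeout and the client-side read timeout.
import boto3
from botocore.config import Config

lambda_client = boto3.client(
    "lambda",
    config=Config(read_timeout=910, retries={"max_attempts": 0}),
)

lambda_client.update_function_configuration(
    FunctionName="financial-batch-processor",   # placeholder
    Timeout=900,       # hard upper limit; longer workloads need a different service
    MemorySize=1024,
)
```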
The option that says: The concurrent execution limit has been reached is incorrect because, by default, AWS Lambda limits the total concurrent executions across all functions within a given region to 1000. By setting a concurrency limit on a function, Lambda guarantees that the allocation will be applied specifically to that function, regardless of the amount of traffic processing the remaining functions. If that limit is exceeded, the function will be throttled but not terminated, which is in contrast with what is happening in the scenario. The option that says: The Lambda function contains a recursive code and has been running for over 15 minutes is incorrect because having recursive code in your Lambda function does not directly result in an abrupt termination of the function execution. This is a scenario wherein the function automatically calls itself until some arbitrary criteria are met, which could lead to an unintended volume of function invocations and escalated costs, but not to an abrupt termination, because Lambda will throttle all invocations to the function. The option that says: The failed Lambda Invocations contain a ServiceException error which means that the AWS Lambda service encountered an internal error is incorrect because although this is a valid root cause, it is unlikely to have several ServiceException errors throughout the day unless there is an outage or disruption in AWS. Since the scenario says that the Lambda function runs for about 10 to 15 minutes, the maximum execution duration is the most likely cause of the issue and not the AWS Lambda service encountering an internal error. References: https://docs.aws.amazon.com/lambda/latest/dg/limits.html https://docs.aws.amazon.com/lambda/latest/dg/resource-model.html AWS Lambda Overview - Serverless Computing in AWS: https://www.youtube.com/watch?v=bPVX1zHwAnY Check out this AWS Lambda Cheat Sheet: https://tutorialsdojo.com/aws-lambda/", "references": "" }, { "question": ": A company launched a cryptocurrency mining server on a Reserved EC2 instance in the us-east-1 region's private subnet that uses IPv6. Due to the financial data that the server contains, the system should be secured to prevent any unauthorized access and to meet the regulatory compliance requirements. In this scenario, which VPC feature allows the EC2 instance to communicate with the Internet but prevents inbound traffic?", "options": [ "A. Egress-only Internet gateway", "B. NAT Gateway", "C. NAT instances", "D. Internet Gateway" ], "correct": "A. Egress-only Internet gateway", "explanation": "Explanation An egress-only Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in your VPC to the Internet and prevents the Internet from initiating an IPv6 connection with your instances. Take note that an egress-only Internet gateway is for use with IPv6 traffic only. To enable outbound-only Internet communication over IPv4, use a NAT gateway instead. Hence, the correct answer is: Egress-only Internet gateway. NAT Gateway and NAT instances are incorrect because these are only applicable for IPv4 and not IPv6. Even though these two components can enable the EC2 instance in a private subnet to communicate with the Internet and prevent inbound traffic, they are limited to instances using IPv4 addresses and not IPv6.
The most suitable VPC component to use is the egress-only Internet gateway. Internet Gateway is incorrect because this is primarily used to provide Internet access to the instances in the public subnet of your VPC, not in private subnets. Moreover, with an Internet gateway, traffic originating from the public Internet will also be able to reach your instances. The scenario asks you to prevent inbound access, so this is not the correct answer.", "references": "https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html Amazon VPC Overview: https://www.youtube.com/watch?v=oIDHKeNxvQQ Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/" }, { "question": ": A multinational corporate and investment bank is regularly processing steady workloads of accruals, loan interests, and other critical financial calculations every night from 10 PM to 3 AM in their on-premises data center for their corporate clients. Once the process is done, the results are then uploaded to the Oracle General Ledger, which means that the processing should not be delayed or interrupted. The CTO has decided to move the IT infrastructure to AWS to save costs. The company needs to reserve compute capacity in a specific Availability Zone to properly run their workloads. As the Senior Solutions Architect, how can you implement a cost-effective architecture in AWS for their financial system?", "options": [ "A. Use Dedicated Hosts which provide a physical host that is fully dedicated to running your", "B. Use On-Demand EC2 instances which allows you to pay for the instances that you launch", "C. Use Regional Reserved Instances to reserve capacity on a specific Availability Zone and", "D. Use On-Demand Capacity Reservations, which provide compute capacity that is always" ], "correct": "D. Use On-Demand Capacity Reservations, which provide compute capacity that is always", "explanation": "Explanation On-Demand Capacity Reservations enable you to reserve compute capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. This gives you the ability to create and manage Capacity Reservations independently from the billing discounts offered by Savings Plans or Regional Reserved Instances. By creating Capacity Reservations, you ensure that you always have access to EC2 capacity when you need it, for as long as you need it. You can create Capacity Reservations at any time, without entering into a one-year or three-year term commitment, and the capacity is available immediately. Billing starts as soon as the capacity is provisioned and the Capacity Reservation enters the active state. When you no longer need it, cancel the Capacity Reservation to stop incurring charges. When you create a Capacity Reservation, you specify: - The Availability Zone in which to reserve the capacity - The number of instances for which to reserve capacity - The instance attributes, including the instance type, tenancy, and platform/OS Capacity Reservations can only be used by instances that match their attributes. By default, they are automatically used by running instances that match the attributes. If you don't have any running instances that match the attributes of the Capacity Reservation, it remains unused until you launch an instance with matching attributes.
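A hedged sketch of creating such a reservation for the nightly batch window follows; the instance type, platform, count, and Availability Zone are placeholders chosen purely for illustration.

```python
# Hedged sketch: reserve nightly batch capacity in a single Availability Zone.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_capacity_reservation(
    InstanceType="r5.4xlarge",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",
    InstanceCount=10,
    Tenancy="default",
    InstanceMatchCriteria="open",   # matching instances use the reservation automatically
    EndDateType="unlimited",        # cancel it manually when it is no longer needed
)
```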
In addition, you can use Savings Plans and Regional Reserved Instances with your Capacity Reservations to benefit from billing discounts. AWS automatically applies your discount when the attributes of a Capacity Reservation match the attributes of a Savings Plan or Regional Reserved Instance. Hence, the correct answer is to use On-Demand Capacity Reservations, which provide compute capacity that is always available on the specified recurring schedule. Using On-Demand EC2 instances which allows you to pay for the instances that you launch and use by the second. Reserve compute capacity in a specific Availability Zone to avoid any interruption is incorrect because although an On-Demand instance is stable and suitable for processing critical data, it costs more than any other option. Moreover, the critical financial calculations are only done every night from 10 PM to 3 AM and not 24/7, which means your compute capacity would go unused for 19 hours every single day, and On-Demand instances cannot reserve compute capacity at all. Using Regional Reserved Instances to reserve capacity on a specific Availability Zone and lower the operating cost through its billing discounts is incorrect because this capacity-reservation feature is available in Zonal Reserved Instances only and not in Regional Reserved Instances. Using Dedicated Hosts which provide a physical host that is fully dedicated to running your instances, and bringing your existing per-socket, per-core, or per-VM software licenses to reduce costs is incorrect because a fully dedicated physical host is not warranted in this scenario. Moreover, it would be underutilized since the process only runs for 5 hours (from 10 PM to 3 AM), wasting 19 hours of compute capacity every single day. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/", "references": "" }, { "question": ": A Solutions Architect needs to set up the required compute resources for an application with workloads that require high, sequential read and write access to very large data sets on local storage. Which of the following instance types is the most suitable one to use in this scenario?", "options": [ "A. Compute Optimized Instances", "B. Memory Optimized Instances", "C. General Purpose Instances", "D. Storage Optimized Instances" ], "correct": "D. Storage Optimized Instances", "explanation": "Explanation Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications. Hence, the correct answer is: Storage Optimized Instances. Memory Optimized Instances is incorrect because these are designed to deliver fast performance for workloads that process large data sets in memory, which is quite different from handling high read and write capacity on local storage.
Compute Optimized Instances is incorrect because these are ideal for compute-bound applications that benefit from high-performance processors, such as batch processing workloads and media transcoding. General Purpose Instances is incorrect because these are the most basic type of instances. They provide a balance of compute, memory, and networking resources and can be used for a variety of workloads, but since you require higher read and write capacity, storage optimized instances should be selected instead. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage-optimized-instances.html Amazon EC2 Overview: https://www.youtube.com/watch?v=7VsGIHT_jQE Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cloud-amazon-ec2/", "references": "" }, { "question": ": A Solutions Architect is developing a three-tier cryptocurrency web application for a FinTech startup. The Architect has been instructed to restrict access to the database tier to only accept traffic from the application tier and deny traffic from other sources. The application tier is composed of application servers hosted in an Auto Scaling group of EC2 instances. Which of the following options is the MOST suitable solution to implement in this scenario?", "options": [ "A. Set up the Network ACL of the database subnet to deny all inbound non-database traffic", "B. Set up the security group of the database tier to allow database traffic from a specified list", "C. Set up the security group of the database tier to allow database traffic from the security", "D. Set up the Network ACL of the database subnet to allow inbound database traffic from the" ], "correct": "C. Set up the security group of the database tier to allow database traffic from the security", "explanation": "Explanation A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level; therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don't specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC. For each security group, you add rules that control the inbound traffic to instances and a separate set of rules that control the outbound traffic. You can add or remove rules for a security group, which is also referred to as authorizing or revoking inbound or outbound access. A rule applies either to inbound traffic (ingress) or outbound traffic (egress). You can grant access to a specific CIDR range, or to another security group in your VPC or in a peer VPC (requires a VPC peering connection). In the scenario, the servers of the application tier are in an Auto Scaling group, which means that the number of EC2 instances could grow or shrink over time. An Auto Scaling group could also cover one or more Availability Zones (AZs), each with its own subnets. Hence, the most suitable solution would be to set up the security group of the database tier to allow database traffic from the security group of the application servers, since you can use the security group of the application-tier Auto Scaling group as the source for the security group rule in your database tier.
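A hedged sketch of that rule follows; the security group IDs are placeholders, and the MySQL port is chosen purely for illustration.

```python
# Hedged sketch: allow database traffic into the database tier only from the
# application tier's security group (not from IP addresses).
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0db0000000000000",              # database-tier security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [
            {"GroupId": "sg-0app000000000000"}  # application-tier (Auto Scaling group) SG
        ],
    }],
)
```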
Setting up the security group of the database tier to allow database traffic from a specified list of application server IP addresses is incorrect because the list of application server IP addresses will change over time, since an Auto Scaling group can add or remove EC2 instances based on the configured scaling policy. This will create inconsistencies in your application because newly launched instances, which are not included in the initial list of IP addresses, will not be able to access the database. Setting up the Network ACL of the database subnet to deny all inbound non-database traffic from the subnet of the application tier is incorrect because doing this could affect the other EC2 instances of other applications that are also hosted in the same subnet as the application tier. For example, a large subnet with a /16 CIDR block could be shared by several applications, and denying all inbound non-database traffic from the entire subnet would impact the other applications that use it. Setting up the Network ACL of the database subnet to allow inbound database traffic from the subnet of the application tier is incorrect because although this solution can work, the subnet of the application tier could be shared by another tier or another set of EC2 instances, which means you would inadvertently be granting database access to unauthorized servers hosted in the same subnet. References: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Security.html#VPC_Security_Comparison http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A Solutions Architect needs to launch a web application that will be served globally using Amazon CloudFront. The application is hosted in an Amazon EC2 instance which will be configured as the origin server to process and serve dynamic content to its customers. Which of the following options provides high availability for the application?", "options": [ "A. Launch an Auto Scaling group of EC2 instances and configure it to be part of an origin", "B. Use Lambda@Edge to improve the performance of your web application and ensure high", "C. Use Amazon S3 to serve the dynamic content of your web application and configure the S3", "D. Provision two EC2 instances deployed in different Availability Zones and configure them to" ], "correct": "D. Provision two EC2 instances deployed in different Availability Zones and configure them to", "explanation": "Explanation An origin is a location where content is stored, and from which CloudFront gets content to serve to viewers. Amazon CloudFront is a service that speeds up the distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations.
When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. You can also set up CloudFront with origin failover for scenarios that require high availability. An origin group may contain two origins: a primary and a secondary. If the primary origin is unavailable or returns specific HTTP response status codes that indicate a failure, CloudFront automatically switches to the secondary origin. To set up origin failover, you must have a distribution with at least two origins. The scenario uses an EC2 instance as an origin. Take note that we can also use an EC2 instance or a custom origin in configuring CloudFront. To achieve high availability with an EC2 instance, we need to deploy the instances in two or more Availability Zones. You also need to configure the instances to be part of the origin group to ensure that the application is highly available. Hence, the correct answer is: Provision two EC2 instances deployed in different Availability Zones and configure them to be part of an origin group. The option that says: Use Amazon S3 to serve the dynamic content of your web application and configure the S3 bucket to be part of an origin group is incorrect because Amazon S3 can only serve static content. If you need to host dynamic content, you have to use an Amazon EC2 instance instead. The option that says: Launch an Auto Scaling group of EC2 instances and configure it to be part of an origin group is incorrect because you must have at least two origins to set up an origin failover in CloudFront. In addition, you can't directly use a single Auto Scaling group as an origin. The option that says: Use Lambda@Edge to improve the performance of your web application and ensure high availability. Set the Lambda@Edge functions to be part of an origin group is incorrect because Lambda@Edge is primarily used for serverless edge computing; you can't set Lambda@Edge functions as part of your origin group in CloudFront. References: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html https://aws.amazon.com/cloudfront/faqs/ Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/", "references": "" }, { "question": ": A multinational company has been building its new data analytics platform with high-performance computing (HPC) workloads, which require a scalable, POSIX-compliant storage service. The data need to be stored redundantly across multiple AZs and allow concurrent connections from thousands of EC2 instances hosted on multiple Availability Zones. Which of the following AWS storage services is the most suitable one to use in this scenario?", "options": [ "A. Amazon S3", "B. Amazon EBS Volumes", "C. Amazon Elastic File System", "D. Amazon ElastiCache" ], "correct": "C. Amazon Elastic File System", "explanation": "Explanation In this question, you should take note of this phrase: \"allows concurrent connections from multiple EC2 instances\".
There are various AWS storage options t hat you can choose but whenever these criteria show up, always consider using EFS instead of using EBS Volumes which is mainly used as a \"block\" storage and can only have one connection to one EC2 instanc e at a time. Amazon EFS is a fully-managed service that makes it easy to set up and scale file storage in the Amazo n Cloud. With a few clicks in the AWS Management Cons ole, you can create file systems that are accessibl e to Amazon EC2 instances via a file system interface (using standard operating system file I/O APIs) an d supports full file system access semantics (such as strong consistency and file locking). Amazon EFS file systems can automatically scale fro m gigabytes to petabytes of data without needing to provision storage. Tens, hundreds, or even thousand s of Amazon EC2 instances can access an Amazon EFS file system at the same time, and Amazon EFS provid es consistent performance to each Amazon EC2 instance. Amazon EFS is designed to be highly durab le and highly available. References: https://docs.aws.amazon.com/efs/latest/ug/performan ce.html https://aws.amazon.com/efs/faq/ Check out this Amazon EFS Cheat Sheet: https://tutorialsdojo.com/amazon-efs/ Here's a short video tutorial on Amazon EFS: https://youtu.be/AvgAozsfCrY", "references": "" }, { "question": ": 29 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam A company requires corporate IT governance and cost oversight of all of its AWS resources across its divisions around the world. Their corporate divisio ns want to maintain administrative control of the d iscrete AWS resources they consume and ensure that those re sources are separate from other divisions. Which of the following options will support the aut onomy of each corporate division while enabling the corporate IT to maintain governance and cost oversi ght? (Select TWO.)", "options": [ "A. Use AWS Trusted Advisor and AWS Resource Group s Tag Editor", "B. Create separate VPCs for each division within the corporate IT AWS account. Launch an", "C. Use AWS Consolidated Billing by creating AWS O rganizations to link the divisions' accounts to a parent corporate account.", "D. Create separate Availability Zones for each di vision within the corporate IT AWS account." ], "correct": "", "explanation": "Explanation You can use an IAM role to delegate access to resou rces that are in different AWS accounts that you ow n. You share resources in one account with users in a different account. By setting up cross-account acce ss in this way, you don't need to create individual IAM u sers in each account. In addition, users don't have to sign out of one account and sign into another in or der to access resources that are in different AWS accounts. 30 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam You can use the consolidated billing feature in AWS Organizations to consolidate payment for multiple AWS accounts or multiple AISPL accounts. With conso lidated billing, you can see a combined view of AWS charges incurred by all of your accounts. You c an also get a cost report for each member account t hat is associated with your master account. Consolidate d billing is offered at no additional charge. AWS a nd AISPL accounts can't be consolidated together. The combined use of IAM and Consolidated Billing wi ll support the autonomy of each corporate division while enabling corporate IT to maintain governance and cost oversight. 
Hence, the correct choices are: - Enable IAM cross-account access for all corporate IT administrators in each child account - Use AWS Consolidated Billing by creating AWS Orga nizations to link the divisions' accounts to a parent corporate account Using AWS Trusted Advisor and AWS Resource Groups T ag Editor is incorrect. Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS be st practices. It only provides you alerts on areas whe re you do not adhere to best practices and tells yo u how to improve them. It does not assist in maintaining governance over your AWS accounts. Additionally, th e AWS Resource Groups Tag Editor simply allows you to add, edit, and delete tags to multiple AWS resources at once for easier identification and mon itoring. 31 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Creating separate VPCs for each division within the corporate IT AWS account. Launch an AWS Transit Gateway with equal-cost multipath routing ( ECMP) and VPN tunnels for intra-VPC communication is incorrect because creating separat e VPCs would not separate the divisions from each other since they will still be operating under the same account and therefore contribute to the same b illing each month. AWS Transit Gateway connects VPCs and o n-premises networks through a central hub and acts as a cloud router where each new connection is only made once. For this particular scenario, it i s suitable to use AWS Organizations instead of settin g up an AWS Transit Gateway since the objective is for maintaining administrative control of the AWS resou rces and not for network connectivity. Creating separate Availability Zones for each divis ion within the corporate IT AWS account. Improve communication between the two AZs using the AWS Global Accelerator is incorrect because you do not need to create Availability Zones. They are already provided for you by AWS right from the start, and not all services support multiple AZ dep loyments. In addition, having separate Availability Zones in your VPC does not meet the requirement of suppor ting the autonomy of each corporate division. The AWS Global Accelerator is a service that uses the A WS global network to optimize the network path from your users to your applications and not between you r Availability Zones. References: http://docs.aws.amazon.com/awsaccountbilling/latest /aboutv2/consolidated-billing.html https://docs.aws.amazon.com/IAM/latest/UserGuide/tu torial_cross-account-with-roles.html Check out this AWS Billing and Cost Management Chea t Sheet: https://tutorialsdojo.com/aws-billing-and-cost-mana gement/", "references": "" }, { "question": ": A game company has a requirement of load balancing the incoming TCP traffic at the transport level (Layer 4) to their containerized gaming servers hos ted in AWS Fargate. To maintain performance, it sho uld handle millions of requests per second sent by game rs around the globe while maintaining ultra-low latencies. Which of the following must be implemented in the c urrent architecture to satisfy the new requirement?", "options": [ "A. Launch a new microservice in AWS Fargate that acts as a load balancer since using an ALB", "B. Create a new record in Amazon Route 53 with We ighted Routing policy to load balance the", "C. Launch a new Application Load Balancer.", "D. Launch a new Network Load Balancer." ], "correct": "D. 
Launch a new Network Load Balancer.", "explanation": "Explanation Elastic Load Balancing automatically distributes in coming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying loa d of your application traffic in a single Availabilit y Zone or across multiple Availability Zones. Elast ic Load Balancing offers three types of load balancers that all feature the high availability, automatic scali ng, and robust security necessary to make your applications fault-tolerant. They are: Application Load Balance r, Network Load Balancer, and Classic Load Balancer Network Load Balancer is best suited for load balan cing of TCP traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) an d is capable of handling millions of requests per second while maintaining ultra-low latencies. Netwo rk Load Balancer is also optimized to handle sudden and volatile traffic patterns. Hence, the correct answer is to launch a new Networ k Load Balancer. The option that says: Launch a new Application Load Balancer is incorrect because it cannot handle TCP or Layer 4 connections, only Layer 7 (HTTP and HTTP S). The option that says: Create a new record in Amazon Route 53 with Weighted Routing policy to load balance the incoming traffic is incorrect because a lthough Route 53 can act as a load balancer by assi gning each record a relative weight that corresponds to h ow much traffic you want to send to each resource, it is still not capable of handling millions of requests per second while maintaining ultra-low latencies. Y ou have to use a Network Load Balancer instead. 33 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam The option that says: Launch a new microservice in AWS Fargate that acts as a load balancer since using an ALB or NLB with Fargate is not possible is incorrect because you can place an ALB and NLB in front of your AWS Fargate cluster. References: https://aws.amazon.com/elasticloadbalancing/feature s/#compare https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/load-balancer-types.html https://aws.amazon.com/getting-started/projects/bui ld-modern-app-fargate-lambda-dynamodb- python/module-two/ Check out this AWS Elastic Load Balancing (ELB) Che at Sheet: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/", "references": "" }, { "question": ": A tech company is running two production web server s hosted on Reserved EC2 instances with EBS- backed root volumes. These instances have a consist ent CPU load of 90%. Traffic is being distributed t o these instances by an Elastic Load Balancer. In add ition, they also have Multi-AZ RDS MySQL databases for their production, test, and development environ ments. What recommendation would you make to reduce cost i n this AWS environment without affecting availability and performance of mission-critical sy stems? Choose the best answer.", "options": [ "A. Consider using On-demand instances instead of Reserved EC2 instances", "B. Consider using Spot instances instead of reser ved EC2 instances", "C. Consider not using a Multi-AZ RDS deployment f or the development and test database", "D. Consider removing the Elastic Load Balancer" ], "correct": "C. 
Consider not using a Multi-AZ RDS deployment f or the development and test database", "explanation": "Explanation One thing that you should notice here is that the c ompany is using Multi-AZ databases in all of their environments, including their development and test environment. This is costly and unnecessary as thes e two environments are not critical. It is better to use Multi-AZ for production environments to reduce costs, which is why the option that says: Consider not usi ng a Multi-AZ RDS deployment for the development and test database is the correct answer . 34 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam The option that says: Consider using On-demand inst ances instead of Reserved EC2 instances is incorrect because selecting Reserved instances is c heaper than On-demand instances for long term usage due to the discounts offered when purchasing reserv ed instances. The option that says: Consider using Spot instances instead of reserved EC2 instances is incorrect because the web servers are running in a production environment. Never use Spot instances for producti on level web servers unless you are sure that they are not that critical in your system. This is because your spot instances can be terminated once the maximum price goes over the maximum amount that you specified. The option that says: Consider removing the Elastic Load Balancer is incorrect because the Elastic Loa d Balancer is crucial in maintaining the elasticity a nd reliability of your system. References: https://aws.amazon.com/rds/details/multi-az/ https://aws.amazon.com/pricing/cost-optimization/ Amazon RDS Overview: https://www.youtube.com/watch?v=aZmpLl8K1UU Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/", "references": "" }, { "question": ": A Solutions Architect is managing a three-tier web application that processes credit card payments and online transactions. Static web pages are used on t he front-end tier while the application tier contai ns a single Amazon EC2 instance that handles long-runnin g processes. The data is stored in a MySQL database . The Solutions Architect is instructed to decouple t he tiers to create a highly available application. Which of the following options can satisfy the give n requirement?", "options": [ "A. Move all the static assets and web pages to Am azon S3. Re-host the application to Amazon", "B. Move all the static assets, web pages, and the backend application to a larger instance. Use", "C. Move all the static assets to Amazon S3. Set c oncurrency limit in AWS Lambda to move the", "D. Move all the static assets and web pages to Am azon CloudFront. Use Auto Scaling in" ], "correct": "A. Move all the static assets and web pages to Am azon S3. Re-host the application to Amazon", "explanation": "Explanation Amazon Elastic Container Service (ECS) is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Amazon ECS makes it easy to u se containers as a building block for your applications by eliminating the need for you to ins tall, operate, and scale your own cluster managemen t infrastructure. Amazon ECS lets you schedule long-r unning applications, services, and batch processes using Docker containers. Amazon ECS maintains appli cation availability and allows you to scale your containers up or down to meet your application's ca pacity requirements. 
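As a concrete illustration of the container scaling described above, the following is a minimal boto3 sketch of enabling Service Auto Scaling on an ECS service; the cluster and service names, capacity limits, and CPU target are hypothetical values chosen only for the example:

import boto3

autoscaling = boto3.client('application-autoscaling')

# Register the ECS service's DesiredCount as a scalable target
# (cluster and service names are hypothetical).
autoscaling.register_scalable_target(
    ServiceNamespace='ecs',
    ResourceId='service/web-cluster/photo-web-service',
    ScalableDimension='ecs:service:DesiredCount',
    MinCapacity=2,
    MaxCapacity=10,
)

# Track average CPU around 75% and let ECS add or remove tasks as load changes.
autoscaling.put_scaling_policy(
    PolicyName='cpu-target-tracking',
    ServiceNamespace='ecs',
    ResourceId='service/web-cluster/photo-web-service',
    ScalableDimension='ecs:service:DesiredCount',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 75.0,
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'ECSServiceAverageCPUUtilization'
        },
    },
)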
The requirement in the scenario is to decouple the services to achieve a highly available architecture . To accomplish this requirement, you must move the exis ting set up to each AWS services. For static assets , you should use Amazon S3. You can use Amazon ECS fo r your web application and then migrate the database to Amazon RDS with Multi-AZ deployment. Decoupling you r app with application integration services allows them to remain interoperable, but if one ser vice has a failure or spike in workload, it won't a ffect the rest of them. Hence, the correct answer is: Move all the static a ssets and web pages to Amazon S3. Re-host the application to Amazon Elastic Container Service (Am azon ECS) containers and enable Service Auto Scaling. Migrate the database to Amazon RDS with Mu lti-AZ deployments configuration. 36 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam The option that says: Move all the static assets to Amazon S3. Set concurrency limit in AWS Lambda to move the application to a serverless architectur e. Migrate the database to Amazon DynamoDB is incorrect because Lambda functions can't process lo ng-running processes. Take note that a Lambda function has a maximum processing time of 15 minute s. The option that says: Move all the static assets, w eb pages, and the backend application to a larger instance. Use Auto Scaling in Amazon EC2 instance. Migrate the database to Amazon Aurora is incorrect because static assets are more suitable a nd cost-effective to be stored in S3 instead of sto ring them in an EC2 instance. The option that says: Move all the static assets an d web pages to Amazon CloudFront. Use Auto Scaling in Amazon EC2 instance. Migrate the databas e to Amazon RDS with Multi-AZ deployments configuration is incorrect because you can't store data in Amazon CloudFront. Technically, you only st ore cache data in CloudFront, but you can't host applic ations or web pages using this service. You have to use Amazon S3 to host the static web pages and use Clou dFront as the CDN. References: https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/service-auto-scaling.html https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/Concepts.MultiAZ.html Check out this Amazon ECS Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-container- service-amazon-ecs/", "references": "" }, { "question": ": A company plans to use a cloud storage service to t emporarily store its log files. The number of files to be stored is still unknown, but it only needs to be ke pt for 12 hours. Which of the following is the most cost-effective s torage class to use in this scenario? A. Amazon S3 Standard-IA", "options": [ "B. Amazon S3 One Zone-IA", "C. Amazon S3 Standard", "D. Amazon S3 Glacier Deep Archive" ], "correct": "C. Amazon S3 Standard", "explanation": "Explanation 37 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Amazon Simple Storage Service (Amazon S3) is an obj ect storage service that offers industry-leading scalability, data availability, security, and perfo rmance. Amazon S3 also offers a range of storage cl asses for the objects that you store. You choose a class depending on your use case scenario and performance access requirements. All of these storage classes o ffer high durability. The scenario requires you to select a cost-effectiv e service that does not have a minimum storage duration since the data will only last for 12 hours . 
Among the options given, only Amazon S3 Standard has the feature of no minimum storage duration. It is also the most cost-effective storage service because you will only be charged for the last 12 hours, unlike in other storage classes where you will still be charged based on their respective minimum storage durations (e.g. 30 days, 90 days, 180 days). S3 Intelligent-Tiering also has no minimum storage duration and is designed for data with changing or unknown access patterns. S3 Standard-IA is designed for long-lived but infrequently accessed data that is retained for months or years. Data that is deleted from S3 Standard-IA within 30 days will still be charged for a full 30 days. S3 Glacier Deep Archive is designed for long-lived but rarely accessed data that is retained for 7-10 years or more. Objects that are archived to S3 Glacier Deep Archive have a minimum of 180 days of storage, and objects deleted before 180 days incur a pro-rated charge equal to the storage charge for the remaining days. Hence, the correct answer is: Amazon S3 Standard. Amazon S3 Standard-IA is incorrect because this storage class has a minimum storage duration of at least 30 days. Remember that the scenario requires the data to be kept for 12 hours only. Amazon S3 One Zone-IA is incorrect. Just like S3 Standard-IA, this storage class has a minimum storage duration of at least 30 days. Amazon S3 Glacier Deep Archive is incorrect. Although it is the most cost-effective storage class among all the options, it has a minimum storage duration of at least 180 days, which makes it suitable only for backup and data archival. If you store your data in Glacier Deep Archive for only 12 hours, you will still be charged for the full 180 days. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html https://aws.amazon.com/s3/storage-classes/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/ S3 Standard vs S3 Standard-IA vs S3 One Zone-IA Cheat Sheet: https://tutorialsdojo.com/s3-standard-vs-s3-standard-ia-vs-s3-one-zone-ia/", "references": "" }, { "question": ": A company created a VPC with a single subnet then launched an On-Demand EC2 instance in that subnet. You have attached an Internet gateway (IGW) to the VPC and verified that the EC2 instance has a public IP. The main route table of the VPC is as shown below: However, the instance still cannot be reached from the Internet when you tried to connect to it from your computer. Which of the following should be made to the route table to fix this issue?", "options": [ "A. Modify the above route table: 10.0.0.0/27 -> Your Internet Gateway", "B. Add the following entry to the route table: 10.0.0.0/27 -> Your Internet Gateway", "C. Add this new entry to the route table: 0.0.0.0/27 -> Your Internet Gateway", "D. Add this new entry to the route table: 0.0.0.0/0 -> Your Internet Gateway" ], "correct": "D. Add this new entry to the route table: 0.0.0.0/0 -> Your Internet Gateway", "explanation": "Explanation Apparently, the route table does not have an entry for the Internet Gateway. This is why you cannot connect to the EC2 instance. To fix this, you have to add a route with a destination of 0.0.0.0/0 for IPv4 traffic or ::/0 for IPv6 traffic, and then a target of the Internet gateway ID (igw-xxxxxxxx).
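For reference, the missing route can be added with a single EC2 API call; this minimal boto3 sketch assumes placeholder route table and Internet gateway IDs:

import boto3

ec2 = boto3.client('ec2')

# IDs below are placeholders for the VPC's main route table and the
# Internet gateway that is already attached to the VPC.
ec2.create_route(
    RouteTableId='rtb-0123456789abcdef0',
    DestinationCidrBlock='0.0.0.0/0',
    GatewayId='igw-0123456789abcdef0',
)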
This should be the correct route table configuration after adding the new entry. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A large Philippine-based Business Process Outsourcing company is building a two-tier web application in their VPC to serve dynamic transaction-based content. The data tier is leveraging an Online Transactional Processing (OLTP) database, but for the web tier, they are still deciding what service they will use. What AWS services should you leverage to build an elastic and scalable web tier?", "options": [ "A. Amazon RDS with Multi-AZ and Auto Scaling", "B. Elastic Load Balancing, Amazon EC2, and Auto Scaling", "C. Elastic Load Balancing, Amazon RDS with Multi-AZ, and Amazon S3", "D. Amazon EC2, Amazon DynamoDB, and Amazon S3" ], "correct": "B. Elastic Load Balancing, Amazon EC2, and Auto Scaling", "explanation": "Explanation Amazon RDS is a suitable database service for online transaction processing (OLTP) applications. However, the question asks for a list of AWS services for the web tier and not the database tier. Also, when it comes to services providing scalability and elasticity for your web tier, you should always consider using Auto Scaling and Elastic Load Balancing. To build an elastic and highly available web tier, you can use Amazon EC2, Auto Scaling, and Elastic Load Balancing. You can deploy your web servers on a fleet of EC2 instances in an Auto Scaling group, which will automatically monitor your applications and adjust capacity to maintain steady, predictable performance at the lowest possible cost. Load balancing is an effective way to increase the availability of a system. Instances that fail can be replaced seamlessly behind the load balancer while other instances continue to operate. Elastic Load Balancing can be used to balance traffic across instances in multiple Availability Zones of a region. The rest of the options are all incorrect since they don't mention all of the required services for building a highly available and scalable web tier, such as EC2, Auto Scaling, and Elastic Load Balancing. Although Amazon RDS with Multi-AZ and DynamoDB are highly scalable databases, the scenario is focused on building the web tier and not the database tier. Hence, the correct answer is Elastic Load Balancing, Amazon EC2, and Auto Scaling. The option that says: Elastic Load Balancing, Amazon RDS with Multi-AZ, and Amazon S3 is incorrect because you can't host your web tier using Amazon S3 since the application serves dynamic transaction-based content. Amazon S3 is only suitable if you plan to have a static website. The option that says: Amazon RDS with Multi-AZ and Auto Scaling is incorrect because the focus of the question is building a scalable web tier. You need a service, like EC2, on which you can run your web tier. The option that says: Amazon EC2, Amazon DynamoDB, and Amazon S3 is incorrect because you need Auto Scaling and ELB in order to scale the web tier.
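To make the recommended web tier concrete, here is a minimal boto3 sketch of an Auto Scaling group that spans two Availability Zones and is registered with an Application Load Balancer target group; the launch template, subnets, and target group ARN are hypothetical:

import boto3

autoscaling = boto3.client('autoscaling')

# Launch template, subnets, and target group ARN are placeholders.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='web-tier-asg',
    LaunchTemplate={'LaunchTemplateName': 'web-tier-template', 'Version': '$Latest'},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier='subnet-aaaa1111,subnet-bbbb2222',  # two Availability Zones
    TargetGroupARNs=[
        'arn:aws:elasticloadbalancing:ap-southeast-1:111122223333:targetgroup/web-tg/abc123'
    ],
    HealthCheckType='ELB',
    HealthCheckGracePeriod=120,
)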
References: https://media.amazonwebservices.com/AWS_Building_Fa ult_Tolerant_Applications.pdf https://d1.awsstatic.com/whitepapers/aws-building-f ault-tolerant-applications.pdf https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ec2-increase-availability.html", "references": "" }, { "question": ": 42 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam A game development company operates several virtual reality (VR) and augmented reality (AR) games which use various RESTful web APIs hosted on their on-premises data center. Due to the unprecedented growth of their company, they decided to migrate th eir system to AWS Cloud to scale out their resource s as well to minimize costs. Which of the following should you recommend as the most cost-effective and scalable solution to meet t he above requirement?", "options": [ "A. Use AWS Lambda and Amazon API Gateway.", "B. Set up a micro-service architecture with ECS, ECR, and Fargate.", "C. Use a Spot Fleet of Amazon EC2 instances, each with an Elastic Fabric Adapter (EFA) for more consistent latency and higher network throughp ut. Set up an Application Load Balancer", "D. Host the APIs in a static S3 web hosting bucke t behind a CloudFront web distribution." ], "correct": "A. Use AWS Lambda and Amazon API Gateway.", "explanation": "Explanation With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the duration, the time it takes for your code to execute. Lambda counts a request each time it starts executi ng in response to an event notification or invoke c all, including test invokes from the console. You are ch arged for the total number of requests across all y our functions. Duration is calculated from the time your code begi ns executing until it returns or otherwise terminat es, rounded up to the nearest 1ms. The price depends on the amount of memory you allocate to your function . The Lambda free tier includes 1M free requests per month and over 400,000 GB-seconds of compute time per month. 43 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam The best possible answer here is to use a combinati on of AWS Lambda and Amazon API Gateway because this solution is both scalable and cost-effective. You will only be charged when you use your Lambda function, unlike having an EC2 instance that always runs even though you don't use it. Hence, the correct answer is: Use AWS Lambda and Am azon API Gateway. Setting up a micro-service architecture with ECS, E CR, and Fargate is incorrect because ECS is mainly used to host Docker applications, and in add ition, using ECS, ECR, and Fargate alone is not scalable and not recommended for this type of scena rio. Hosting the APIs in a static S3 web hosting bucket behind a CloudFront web distribution is not a suitable option as there is no compute capability f or S3 and you can only use it as a static website. Although this solution is scalable since uses Cloud Front, the use of S3 to host the web APIs or the dy namic website is still incorrect. The option that says: Use a Spot Fleet of Amazon EC 2 instances, each with an Elastic Fabric Adapter (EFA) for more consistent latency and higher networ k throughput. Set up an Application Load Balancer to distribute traffic to the instances is incorrect. EC2 alone, without Auto Scaling, is not scalable. Even though you use Spot EC2 instance, it is still more expensive compared to Lambda because you will be charged only when your function is bein g used. 
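For context, an API served this way is simply a Lambda function whose handler returns a response in the shape that API Gateway's Lambda proxy integration expects; a minimal sketch is shown below, with a purely illustrative payload:

import json

def lambda_handler(event, context):
    # API Gateway (Lambda proxy integration) passes the HTTP request in
    # 'event' and expects a response object with this structure.
    payload = {'message': 'hello from the VR/AR API', 'path': event.get('path')}
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps(payload),
    }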
An Elastic Fabric Adapter (EFA) is simply a network device that you can attach to your Amazon E C2 instance that enables you to achieve the application performance of an on-premises HPC clust er, with scalability, flexibility, and elasticity p rovided 44 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam by the AWS Cloud. Although EFA is scalable, the Spo t Fleet configuration of this option doesn't have A uto Scaling involved. References: https://docs.aws.amazon.com/apigateway/latest/devel operguide/getting-started-with-lambda- integration.html https://aws.amazon.com/lambda/pricing/ Check out this AWS Lambda Cheat Sheet: https://tutorialsdojo.com/aws-lambda/ EC2 Container Service (ECS) vs Lambda: https://tutorialsdojo.com/ec2-container-service-ecs -vs-lambda/", "references": "" }, { "question": ": A computer animation film studio has a web applicat ion running on an Amazon EC2 instance. It uploads 5 GB video objects to an Amazon S3 bucket. Video uplo ads are taking longer than expected, which impacts the performance of your application. Which method will help improve the performance of t he application?", "options": [ "A. Leverage on Amazon CloudFront and use HTTP POS T method to reduce latency.", "B. Use Amazon S3 Multipart Upload API.", "C. Enable Enhanced Networking with the Elastic Ne twork Adapter (ENA) on your EC2", "D. Use Amazon Elastic Block Store Provisioned IOP S and an Amazon EBS-optimized instance." ], "correct": "B. Use Amazon S3 Multipart Upload API.", "explanation": "Explanation The main issue is the slow upload time of the video objects to Amazon S3. To address this issue, you c an use Multipart upload in S3 to improve the throughpu t. It allows you to upload parts of your object in parallel thus, decreasing the time it takes to uplo ad big objects. Each part is a contiguous portion o f the object's data. 45 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam You can upload these object parts independently and in any order. If transmission of any part fails, y ou can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazo n S3 assembles these parts and creates the object. In ge neral, when your object size reaches 100 MB, you sh ould consider using multipart uploads instead of uploadi ng the object in a single operation. Using multipart upload provides the following advan tages: Improved throughput - You can upload parts in paral lel to improve throughput. Quick recovery from any network issues - Smaller pa rt size minimizes the impact of restarting a failed upload due to a network error. Pause and resume object uploads - You can upload ob ject parts over time. Once you initiate a multipart upload, there is no expiry; you must explicitly com plete or abort the multipart upload. Begin an upload before you know the final object si ze - You can upload an object as you are creating i t. Enabling Enhanced Networking with the Elastic Netwo rk Adapter (ENA) on your EC2 Instances is incorrect. Even though this will improve network pe rformance, the issue will still persist since the p roblem lies in the upload time of the object to Amazon S3. You should use the Multipart upload feature instea d. Leveraging on Amazon CloudFront and using HTTP POST method to reduce latency is incorrect because CloudFront is a CDN service and is not used to expedite the upload process of objects to Amazo n 46 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam S3. 
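Going back to the recommended approach, the boto3 transfer manager performs multipart uploads automatically once an object crosses a configurable size threshold and uploads the parts in parallel; in this minimal sketch the file name, bucket, and part sizes are hypothetical:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client('s3')

# Parts of at most 64 MB are uploaded in parallel once the object is
# larger than the 100 MB threshold; names are placeholders.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=8,
)
s3.upload_file('render_final.mp4', 'studio-video-bucket', 'uploads/render_final.mp4', Config=config)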
Amazon CloudFront is a fast content delivery ne twork (CDN) service that securely delivers data, videos, applications, and APIs to customers globall y with low latency, high transfer speeds, all withi n a developer-friendly environment. Using Amazon Elastic Block Store Provisioned IOPS a nd an Amazon EBS-optimized instance is incorrect. Although the use of Amazon Elastic Block Store Provisioned IOPS will speed up the I/O performance of the EC2 instance, the root cause is still not resolved sinc e the primary problem here is the slow video upload to Amazon S3. There is no network contention in the EC 2 instance. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/upl oadobjusingmpu.html http://docs.aws.amazon.com/AmazonS3/latest/dev/qfac ts.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": A company deployed a web application to an EC2 inst ance that adds a variety of photo effects to a pict ure uploaded by the users. The application will put the generated photos to an S3 bucket by sending PUT requests to the S3 API. What is the best option for this scenario consideri ng that you need to have API credentials to be able to send a request to the S3 API?", "options": [ "A. Encrypt the API credentials and store in any d irectory of the EC2 instance.", "B. Store the API credentials in the root web appl ication directory of the EC2 instance.", "C. Store your API credentials in Amazon Glacier.", "D. Create a role in IAM. Afterwards, assign this role to a new EC2 instance." ], "correct": "D. Create a role in IAM. Afterwards, assign this role to a new EC2 instance.", "explanation": "Explanation The best option is to create a role in IAM. Afterwa rd, assign this role to a new EC2 instance. Applica tions must sign their API requests with AWS credentials. Therefore, if you are an application developer, you need a strategy for managing credentials for your a pplications that run on EC2 instances. 47 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam You can securely distribute your AWS credentials to the instances, enabling the applications on those instances to use your credentials to sign requests while protecting your credentials from other users. However, it's challenging to securely distribute cr edentials to each instance, especially those that A WS creates on your behalf such as Spot Instances or instances in Auto Scaling groups. You must also be able to update the creden tials on each instance when you rotate your AWS credentia ls. In this scenario, you have to use IAM roles so that your applications can securely make API requests f rom your instances without requiring you to manage the security credentials that the applications use. Ins tead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles. Hence, the correct answer is: Create a role in IAM. Afterwards, assign this role to a new EC2 instance . The option that says: Encrypt the API credentials a nd storing in any directory of the EC2 instance and Store the API credentials in the root web applicati on directory of the EC2 instance are incorrect. Though you can store and use the API credentials in the EC2 instance, it will be difficult to manage j ust as mentioned above. You have to use IAM Roles. 
The option that says: Store your API credentials in Amazon S3 Glacier is incorrect as Amazon S3 Glacier is used for data archives and not for manag ing API credentials.", "references": "http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ iam-roles-for-amazon-ec2.html 48 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/" }, { "question": ": A company has an application that uses multiple EC2 instances located in various AWS regions such as U S East (Ohio), US West (N. California), and EU (Irela nd). The manager instructed the Solutions Architect to set up a latency-based routing to route incoming tr affic for www.tutorialsdojo.com to all the EC2 inst ances across all AWS regions. Which of the following options can satisfy the give n requirement?", "options": [ "A. Use a Network Load Balancer to distribute the load to the multiple EC2 instances across all", "B. Use AWS DataSync to distribute the load to the multiple EC2 instances across all AWS", "D. Use Route 53 to distribute the load to the mul tiple EC2 instances across all AWS Regions." ], "correct": "D. Use Route 53 to distribute the load to the mul tiple EC2 instances across all AWS Regions.", "explanation": "Explanation If your application is hosted in multiple AWS Regio ns, you can improve performance for your users by serving their requests from the AWS Region that pro vides the lowest latency. You can create latency records for your resources i n multiple AWS Regions by using latency-based routi ng. In the event that Route 53 receives a DNS query for your domain or subdomain such as tutorialsdojo.com or portal.tutorialsdojo.com, it determines which AW S Regions you've created latency records for, determines which region gives the user the lowest l atency and then selects a latency record for that r egion. Route 53 responds with the value from the selected record which can be the IP address for a web server or the CNAME of your elastic load balancer. Hence, using Route 53 to distribute the load to the multiple EC2 instances across all AWS Regions is the correct answer. 49 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Using a Network Load Balancer to distribute the loa d to the multiple EC2 instances across all AWS Regions and using an Application Load Balancer to d istribute the load to the multiple EC2 instances across all AWS Regions are both incorrect because l oad balancers distribute traffic only within their respective regions and not to other AWS regions by default. Although Network Load Balancers support connections from clients to IP-based targets in pee red VPCs across different AWS Regions, the scenario didn't mention that the VPCs are peered with each o ther. It is best to use Route 53 instead to balance the incoming load to two or more AWS regions more effec tively. Using AWS DataSync to distribute the load to the mu ltiple EC2 instances across all AWS Regions is incorrect because the AWS DataSync service simply p rovides a fast way to move large amounts of data online between on-premises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS). 
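For illustration, a latency record is just a normal record set that carries a Region and a SetIdentifier; a minimal boto3 sketch for one of the three Regions is shown below, with the hosted zone ID and IP address as placeholders, and the same call would be repeated for us-west-1 and eu-west-1:

import boto3

route53 = boto3.client('route53')

# One latency record per Region; IDs and the IP address are placeholders.
route53.change_resource_record_sets(
    HostedZoneId='Z0123456789EXAMPLE',
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'www.tutorialsdojo.com',
                'Type': 'A',
                'SetIdentifier': 'ohio',
                'Region': 'us-east-2',
                'TTL': 60,
                'ResourceRecords': [{'Value': '203.0.113.10'}],
            },
        }]
    },
)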
References: https://docs.aws.amazon.com/Route53/latest/Develope rGuide/routing-policy.html#routing-policy-latency https://docs.aws.amazon.com/Route53/latest/Develope rGuide/TutorialAddingLBRRegion.html Check out this Amazon Route 53 Cheat Sheet: https://tutorialsdojo.com/amazon-route-53/", "references": "" }, { "question": ": A commercial bank has designed its next-generation online banking platform to use a distributed system architecture. As their Software Architect, you have to ensure that their architecture is highly scalab le, yet still cost-effective. Which of the following will p rovide the most suitable solution for this scenario ? 50 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam", "options": [ "A. Launch an Auto-Scaling group of EC2 instances to host your application services and an", "B. Launch multiple On-Demand EC2 instances to hos t your application services and an SQS", "C. Launch multiple EC2 instances behind an Applic ation Load Balancer to host your", "D. Launch multiple EC2 instances behind an Applic ation Load Balancer to host your" ], "correct": "A. Launch an Auto-Scaling group of EC2 instances to host your application services and an", "explanation": "Explanation There are three main parts in a distributed messagi ng system: the components of your distributed syste m which can be hosted on EC2 instance; your queue (di stributed on Amazon SQS servers); and the messages in the queue. To improve the scalability of your distributed syst em, you can add Auto Scaling group to your EC2 instances. References: https://docs.aws.amazon.com/autoscaling/ec2/usergui de/as-using-sqs-queue.html https://docs.aws.amazon.com/AWSSimpleQueueService/l atest/SQSDeveloperGuide/sqs-basic- architecture.html Check out this AWS Auto Scaling Cheat Sheet: https://tutorialsdojo.com/aws-auto-scaling/", "references": "" }, { "question": ": A company plans to host a movie streaming app in AW S. The chief information officer (CIO) wants to ensure that the application is highly available and scalable. The application is deployed to an Auto S caling group of EC2 instances on multiple AZs. A load bala ncer must be configured to distribute incoming requests evenly to all EC2 instances across multipl e Availability Zones. 51 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Which of the following features should the Solution s Architect use to satisfy these criteria?", "options": [ "A. AWS Direct Connect SiteLink", "B. Cross-zone load balancing", "C. Amazon VPC IP Address Manager (IPAM)", "D. Path-based Routing" ], "correct": "B. Cross-zone load balancing", "explanation": "Explanation The nodes for your load balancer distribute request s from clients to registered targets. When cross-zo ne load balancing is enabled, each load balancer node distributes traffic across the registered targets i n all enabled Availability Zones. When cross-zone load ba lancing is disabled, each load balancer node distributes traffic only across the registered targ ets in its Availability Zone. The following diagrams demonstrate the effect of cr oss-zone load balancing. There are two enabled Availability Zones, with two targets in Availabilit y Zone A and eight targets in Availability Zone B. Clients send requests, and Amazon Route 53 responds to each request with the IP address of one of the load balancer nodes. This distributes traffic such that each load balancer node receives 50% of the traffic from the clients. 
Each load balancer node distributes it s share of the traffic across the registered target s in its scope. If cross-zone load balancing is enabled, each of th e 10 targets receives 10% of the traffic. This is b ecause each load balancer node can route 50% of the client traffic to all 10 targets. 52 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam If cross-zone load balancing is disabled: Each of the two targets in Availability Zone A rece ives 25% of the traffic. Each of the eight targets in Availability Zone B re ceives 6.25% of the traffic. This is because each load balancer node can route 5 0% of the client traffic only to targets in its Ava ilability Zone. 53 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam With Application Load Balancers, cross-zone load ba lancing is always enabled. With Network Load Balancers and Gateway Load Balanc ers, cross-zone load balancing is disabled by default. After you create the load balancer, you ca n enable or disable cross-zone load balancing at an y time. When you create a Classic Load Balancer, the defaul t for cross-zone load balancing depends on how you create the load balancer. With the API or CLI, cros s-zone load balancing is disabled by default. With the AWS Management Console, the option to enable cross- zone load balancing is selected by default. After you create a Classic Load Balancer, you can enable or disable cross-zone load balancing at any time Hence, the right answer is to enable cross-zone loa d balancing. Amazon VPC IP Address Manager (IPAM) is incorrect b ecause this is merely a feature in Amazon VPC that provides network administrators with an automa ted IP management workflow. It does not enable your load balancers to distribute incoming requests even ly to all EC2 instances across multiple Availabilit y Zones. Path-based Routing is incorrect because this featur e is based on the paths that are in the URL of the request. It automatically routes traffic to a parti cular target group based on the request URL. This f eature will not set each of the load balancer nodes to dis tribute traffic across the registered targets in al l enabled Availability Zones. 54 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam AWS Direct Connect SiteLink is incorrect because th is is a feature of AWS Direct Connect connection and not of Amazon Elastic Load Balancing. The AWS D irect Connect SiteLink feature simply lets you create connections between your on-premises network s through the AWS global network backbone. References: https://docs.aws.amazon.com/elasticloadbalancing/la test/userguide/how-elastic-load-balancing-works.htm l https://aws.amazon.com/elasticloadbalancing/feature s https://aws.amazon.com/blogs/aws/network-address-ma nagement-and-auditing-at-scale-with-amazon-vpc- ip-address-manager/ AWS Elastic Load Balancing Overview: https://youtu.be/UBl5dw59DO 8 Check out this AWS Elastic Load Balancing (ELB) Che at Sheet: https://tutorialsdojo.com/aws-elastic-load-balancin g-elb/", "references": "" }, { "question": ": A software development company needs to connect its on-premises infrastructure to the AWS cloud. Which of the following AWS services can you use to accomplish this? (Select TWO.)", "options": [ "A. NAT Gateway", "B. VPC Peering", "C. IPsec VPN connection", "D. 
AWS Direct Connect" ], "correct": "", "explanation": "Explanation You can connect your VPC to remote networks by usin g a VPN connection which can be IPsec VPN connection, AWS VPN CloudHub, or a third party soft ware VPN appliance. A VPC VPN Connection utilizes IPSec to establish encrypted network conne ctivity between your intranet and Amazon VPC over t he Internet. 55 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam AWS Direct Connect is a network service that provid es an alternative to using the Internet to connect customer's on-premises sites to AWS. AWS Direct Con nect does not involve the Internet; instead, it use s dedicated, private network connections between your intranet and Amazon VPC. Hence, IPsec VPN connection and AWS Direct Connect are the correct answers. Amazon Connect is incorrect because this is not a V PN connectivity option. It is actually a self-servi ce, cloud-based contact center service in AWS that make s it easy for any business to deliver better custom er service at a lower cost. Amazon Connect is based on the same contact center technology used by Amazon customer service associates around the world to pow er millions of customer conversations. VPC Peering is incorrect because this is a networki ng connection between two VPCs only, which enables you to route traffic between them privately. This c an't be used to connect your on-premises network to your VPC. NAT Gateway is incorrect because you only use a net work address translation (NAT) gateway to enable instances in a private subnet to connect to the Int ernet or other AWS services, but prevent the Intern et from initiating a connection with those instances. This is not used to connect to your on-premises network. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGu ide/vpn-connections.html 56 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam https://aws.amazon.com/directconnect/faqs Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/", "references": "" }, { "question": ": A web application is hosted on a fleet of EC2 insta nces inside an Auto Scaling Group with a couple of Lambda functions for ad hoc processing. Whenever yo u release updates to your application every week, there are inconsistencies where some resources are not updated properly. You need a way to group the resources together and deploy the new version of yo ur code consistently among the groups with minimal downtime. Which among these options should you do to satisfy the given requirement with the least effort?", "options": [ "A. Use CodeCommit to publish your code quickly in a private repository and push them to", "B. Use deployment groups in CodeDeploy to automat e code deployments in a consistent", "C. Create CloudFormation templates that have the latest configurations and code in them.", "D. Create OpsWorks recipes that will automaticall y launch resources containing the latest" ], "correct": "B. Use deployment groups in CodeDeploy to automat e code deployments in a consistent", "explanation": "Explanation CodeDeploy is a deployment service that automates a pplication deployments to Amazon EC2 instances, on-premises instances, or serverless Lambda functio ns. It allows you to rapidly release new features, update Lambda function versions, avoid downtime during app lication deployment, and handle the complexity of updating your applications, without many of the ris ks associated with error-prone manual deployments. 
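As a rough illustration of how deployment groups keep releases consistent, the following boto3 sketch creates a deployment group over the fleet's Auto Scaling group and then pushes a single revision to the whole group; the application, role, bucket, and revision names are hypothetical:

import boto3

codedeploy = boto3.client('codedeploy')

# Application, group, ASG, and role names are placeholders.
codedeploy.create_deployment_group(
    applicationName='web-app',
    deploymentGroupName='production-fleet',
    serviceRoleArn='arn:aws:iam::111122223333:role/CodeDeployServiceRole',
    autoScalingGroups=['web-app-asg'],
    deploymentConfigName='CodeDeployDefault.OneAtATime',
)

# Each weekly release then targets the same group, so every instance is
# updated consistently from one revision stored in S3.
codedeploy.create_deployment(
    applicationName='web-app',
    deploymentGroupName='production-fleet',
    revision={
        'revisionType': 'S3',
        's3Location': {'bucket': 'release-artifacts', 'key': 'web-app-v42.zip', 'bundleType': 'zip'},
    },
)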
57 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Creating CloudFormation templates that have the lat est configurations and code in them is incorrect since this is used for provisioning and managing st acks of AWS resources based on templates you create to model your infrastructure architecture. CloudFormat ion is recommended if you want a tool for granular control over the provisioning and management of you r own infrastructure. Using CodeCommit to publish your code quickly in a private repository and pushing them to your resources for fast updates is incorrect as you main ly use CodeCommit for managing a source-control service that hosts private Git repositories. You ca n store anything from code to binaries and work seamlessly with your existing Git- based tools. CodeCommit integrates with CodePipelin e and CodeDeploy to streamline your development and release process. You could also use OpsWorks to deploy your code, ho wever, creating OpsWorks recipes that will automatically launch resources containing the lates t version of the code is still incorrect because yo u don't need to launch new resources containing your new code when you can just update the ones that are already running. References: 58 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam https://docs.aws.amazon.com/codedeploy/latest/userg uide/deployment-groups.html https://docs.aws.amazon.com/codedeploy/latest/userg uide/welcome.html Check out this AWS CodeDeploy Cheat Sheet: https://tutorialsdojo.com/aws-codedeploy/ AWS CodeDeploy - Primary Components: https://www.youtube.com/watch?v=ClWBJT6k20Q Elastic Beanstalk vs CloudFormation vs OpsWorks vs CodeDeploy: https://tutorialsdojo.com/elastic-beanstalk-vs-clou dformation-vs-opsworks-vs-codedeploy/", "references": "" }, { "question": ": A global medical research company has a molecular i maging system that provides each client with frequently updated images of what is happening insi de the human body at the molecular and cellular lev els. The system is hosted in AWS and the images are host ed in an S3 bucket behind a CloudFront web distribution. When a fresh batch of images is uploa ded to S3, it is required to keep the previous ones in order to prevent them from being overwritten. Which of the following is the most suitable solutio n to solve this issue?", "options": [ "A. Use versioned objects", "B. Invalidate the files in your CloudFront web di stribution", "C. Add Cache-Control no-cache, no-store, or priva te directives in the S3 bucket", "D. Add a separate cache behavior path for the con tent and configure a custom object caching" ], "correct": "A. Use versioned objects", "explanation": "Explanation To control the versions of files that are served fr om your distribution, you can either invalidate fil es or give them versioned file names. If you want to update yo ur files frequently, AWS recommends that you primarily use file versioning for the following rea sons: 59 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam - Versioning enables you to control which file a re quest returns even when the user has a version cach ed either locally or behind a corporate caching proxy. If you invalidate the file, the user might continu e to see the old version until it expires from those caches. - CloudFront access logs include the names of your files, so versioning makes it easier to analyze the results of file changes. 
- Versioning provides a way to serve different vers ions of files to different users. - Versioning simplifies rolling forward and back be tween file revisions. - Versioning is less expensive. You still have to p ay for CloudFront to transfer new versions of your files to edge locations, but you don't have to pay for inval idating files. Invalidating the files in your CloudFront web distr ibution is incorrect because even though using invalidation will solve this issue, this solution i s more expensive as compared to using versioned obj ects. Adding a separate cache behavior path for the conte nt and configuring a custom object caching with a Minimum TTL of 0 is incorrect because this alone is not enough to solve the problem. A cache behavio r is primarily used to configure a variety of CloudFr ont functionality for a given URL path pattern for files on your website. Although this solution may work, i t is still better to use versioned objects where yo u can control which image will be returned by the system even when the user has another version cached eithe r locally or behind a corporate caching proxy. Adding Cache-Control no-cache, no-store, or private directives in the S3 bucket is incorrect because although it is right to configure your origin to ad d the Cache-Control or Expires header field, you sh ould do this to your objects and not on the entire S3 bu cket. References: https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/UpdatingExistingObjects.html https://aws.amazon.com/premiumsupport/knowledge-cen ter/prevent-cloudfront-from-caching-files/ https://docs.aws.amazon.com/AmazonCloudFront/latest /DeveloperGuide/Invalidation.html#PayingForInva lidation Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/", "references": "" }, { "question": ": 60 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam A company is using an Amazon RDS for MySQL 5.6 with Multi-AZ deployment enabled and several web servers across two AWS Regions. The database is cur rently experiencing highly dynamic reads due to the growth of the company's website. The Solutions Arch itect tried to test the read performance from the secondary AWS Region and noticed a notable slowdown on the SQL queries. Which of the following options would provide a read replication latency of less than 1 second?", "options": [ "A. Use Amazon ElastiCache to improve database per formance.", "B. Migrate the existing database to Amazon Aurora and create a cross-region read replica.", "C. Create an Amazon RDS for MySQL read replica in the secondary AWS Region.", "D. Upgrade the MySQL database engine." ], "correct": "B. Migrate the existing database to Amazon Aurora and create a cross-region read replica.", "explanation": "Explanation Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of tradit ional enterprise databases with the simplicity and cost- effectiveness of open source databases. Amazon Auro ra is up to five times faster than standard MySQL databases and three times faster than standard Post greSQL databases. It provides the security, availability, and reliabi lity of commercial databases at 1/10th the cost. Am azon Aurora is fully managed by Amazon RDS, which automa tes time-consuming administration tasks like hardware provisioning, database setup, patching, an d backups. 
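Since the correct answer involves creating a cross-region read replica of the migrated Aurora cluster, one way to provision it is sketched below with boto3, run against the secondary Region; all identifiers and the source cluster ARN are hypothetical, and encrypted clusters or Aurora Global Database setups require additional parameters not shown here:

import boto3

# Run in the secondary Region; identifiers and the source ARN are placeholders.
rds = boto3.client('rds', region_name='eu-west-1')

rds.create_db_cluster(
    DBClusterIdentifier='webapp-aurora-replica',
    Engine='aurora-mysql',
    ReplicationSourceIdentifier='arn:aws:rds:us-east-2:111122223333:cluster:webapp-aurora',
)

# A cluster has no compute capacity until an instance is added to it.
rds.create_db_instance(
    DBInstanceIdentifier='webapp-aurora-replica-1',
    DBClusterIdentifier='webapp-aurora-replica',
    Engine='aurora-mysql',
    DBInstanceClass='db.r5.large',
)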
Based on the given scenario, there is a significant slowdown after testing the read performance from t he secondary AWS Region. Since the existing setup is a n Amazon RDS for MySQL, you should migrate the database to Amazon Aurora and create a cross-region read replica. 61 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam The read replication latency of less than 1 second is only possible if you would use Amazon Aurora replicas. Aurora replicas are independent endpoints in an Aurora DB cluster, best used for scaling rea d operations and increasing availability. You can cre ate up to 15 replicas within an AWS Region. Hence, the correct answer is: Migrate the existing database to Amazon Aurora and create a cross- region read replica. The option that says: Upgrade the MySQL database en gine is incorrect because upgrading the database engine wouldn't improve the read replication latenc y to milliseconds. To achieve the read replication latency of less than 1-second requirement, you need to use Amazon Aurora replicas. The option that says: Use Amazon ElastiCache to imp rove database performance is incorrect. Amazon ElastiCache won't be able to improve the database p erformance because it is experiencing highly dynami c reads. This option would be helpful if the database frequently receives the same queries. The option that says: Create an Amazon RDS for MySQ L read replica in the secondary AWS Region is incorrect because MySQL replicas won't provide y ou a read replication latency of less than 1 second . RDS Read Replicas can only provide asynchronous rep lication in seconds and not in milliseconds. You have to use Amazon Aurora replicas in this scenario . References: https://aws.amazon.com/rds/aurora/faqs/ 62 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam https://docs.aws.amazon.com/AmazonRDS/latest/Aurora UserGuide/AuroraMySQL.Replication.CrossRegi on.html Amazon Aurora Overview: https://youtu.be/iwS1h7rLNBQ Check out this Amazon Aurora Cheat Sheet: https://tutorialsdojo.com/amazon-aurora/", "references": "" }, { "question": ":A construction company has an online system that tr acks all of the status and progress of their projec ts. The system is hosted in AWS and there is a requirement to monitor the read and write IOPs metrics for thei r MySQL RDS instance and send real-time alerts to the ir DevOps team. Which of the following services in AWS can you use to meet the requirements? (Select TWO.)", "options": [ "A. Amazon Simple Queue Service", "B. CloudWatch", "C. Route 53", "D. SWF" ], "correct": "", "explanation": "Explanation In this scenario, you can use CloudWatch to monitor your AWS resources and SNS to provide notification . Hence, the correct answers are CloudWatch and Amazo n Simple Notification Service. Amazon Simple Notification Service (SNS) is a flexi ble, fully managed pub/sub messaging and mobile notifications service for coordinating the delivery of messages to subscribing endpoints and clients. Amazon CloudWatch is a monitoring service for AWS c loud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and t rack metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. SWF is incorrect because this is mainly used for ma naging workflows and not for monitoring and notifications. 
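To make the monitoring-plus-alerting pairing concrete, the sketch below creates a CloudWatch alarm on the ReadIOPS metric of the RDS instance and points it at an SNS topic; an equivalent alarm would be created for WriteIOPS, and the DB identifier, threshold, and topic ARN are hypothetical:

import boto3

cloudwatch = boto3.client('cloudwatch')

# The DB identifier and SNS topic ARN are placeholders; subscribers of the
# topic (e.g., the DevOps team's e-mail) receive the alert when it fires.
cloudwatch.put_metric_alarm(
    AlarmName='rds-read-iops-high',
    Namespace='AWS/RDS',
    MetricName='ReadIOPS',
    Dimensions=[{'Name': 'DBInstanceIdentifier', 'Value': 'projects-mysql-db'}],
    Statistic='Average',
    Period=60,
    EvaluationPeriods=3,
    Threshold=1000.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:111122223333:devops-alerts'],
)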
63 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Amazon Simple Queue Service is incorrect because th is is a messaging queue service and not suitable fo r this kind of scenario. Route 53 is incorrect because this is primarily use d for routing and domain name registration and management. References: http://docs.aws.amazon.com/AmazonCloudWatch/latest/ monitoring/CW_Support_For_AWS.html https://aws.amazon.com/sns/ Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/", "references": "" }, { "question": ": A company has several EC2 Reserved Instances in the ir account that need to be decommissioned and shut down since they are no longer used by the developme nt team. However, the data is still required by the audit team for compliance purposes. Which of the following steps can be taken in this s cenario? (Select TWO.)", "options": [ "A. Stop all the running EC2 instances.", "B. Convert the EC2 instance to On-Demand instance s", "C. Take snapshots of the EBS volumes and terminat e the EC2 instances.", "D. You can opt to sell these EC2 instances on the AWS Reserved Instance Marketplace" ], "correct": "", "explanation": "Explanation Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-s cale cloud computing easier for developers. Amazon EC2's simple web service interface allows you to ob tain and configure capacity with minimal friction. It provides you with complete control of your computin g resources and lets you run on Amazon's proven computing environment. The first requirement as per the scenario is to dec ommission and shut down several EC2 Reserved Instances. However, it is also mentioned that the a udit team still requires the data for compliance pu rposes. 64 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam To fulfill the given requirements, you can first cr eate a snapshot of the instance to save its data an d then sell the instance to the Reserved Instance Marketplace. The Reserved Instance Marketplace is a platform tha t supports the sale of third-party and AWS customer s' unused Standard Reserved Instances, which vary in t erms of length and pricing options. For example, yo u may want to sell Reserved Instances after moving in stances to a new AWS region, changing to a new instance type, ending projects before the term expi ration, when your business needs change, or if you have unneeded capacity. Hence, the correct answers are: - You can opt to sell these EC2 instances on the AW S Reserved Instance Marketplace. - Take snapshots of the EBS volumes and terminate t he EC2 instances. The option that says: Convert the EC2 instance to O n-Demand instances is incorrect because it's stated in the scenario that the development team no longer needs several EC2 Reserved Instances. By convertin g it to On-Demand instances, the company will still h ave instances running in their infrastructure and t his will result in additional costs. The option that says: Convert the EC2 instances to Spot instances with a persistent Spot request type is incorrect because the requirement in the scenari o is to terminate or shut down several EC2 Reserved Instances. Converting the existing instances to Spo t instances will not satisfy the given requirement. The option that says: Stop all the running EC2 inst ances is incorrect because doing so will still incu r storage cost. 
Take note that the requirement in the scenario is to decommission and shut down several EC2 Reserved Instances. Therefore, this approach won't fulfill the given requirement. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ri-market-general.html https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /ebs-creating-snapshot.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/ AWS Container Services Overview: https://www.youtube.com/watch?v=5QBgDX7O7pw", "references": "" }, { "question": ": 65 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam A top university has recently launched its online l earning portal where the students can take e-learni ng courses from the comforts of their homes. The porta l is on a large On-Demand EC2 instance with a singl e Amazon Aurora database. How can you improve the availability of your Aurora database to prevent any unnecessary downtime of th e online portal?", "options": [ "A. Use an Asynchronous Key Prefetch in Amazon Aur ora to improve the performance of", "B. Enable Hash Joins to improve the database quer y performance.", "C. Deploy Aurora to two Auto-Scaling groups of EC 2 instances across two Availability Zones", "D. Create Amazon Aurora Replicas." ], "correct": "D. Create Amazon Aurora Replicas.", "explanation": "Explanation Amazon Aurora MySQL and Amazon Aurora PostgreSQL su pport Amazon Aurora Replicas, which share the same underlying volume as the primary ins tance. Updates made by the primary are visible to a ll Amazon Aurora Replicas. With Amazon Aurora MySQL, y ou can also create MySQL Read Replicas based on MySQL's binlog-based replication engine. In MySQ L Read Replicas, data from your primary instance is replayed on your replica as transactions. For most use cases, including read scaling and high availabi lity, it is recommended using Amazon Aurora Replicas. 66 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Read Replicas are primarily used for improving the read performance of the application. The most suita ble solution in this scenario is to use Multi-AZ deploy ments instead but since this option is not availabl e, you can still set up Read Replicas which you can promot e as your primary stand-alone DB cluster in the eve nt of an outage. Hence, the correct answer here is to create Amazon Aurora Replicas. Deploying Aurora to two Auto-Scaling groups of EC2 instances across two Availability Zones with an elastic load balancer which handles load balanci ng is incorrect because Aurora is a managed database engine for RDS and not deployed on typical EC2 instances that you manually provision. Enabling Hash Joins to improve the database query p erformance is incorrect because Hash Joins are mainly used if you need to join a large amount of d ata by using an equijoin and not for improving availability. Using an Asynchronous Key Prefetch in Amazon Aurora to improve the performance of queries that join tables across indexes is incorrect because the Asynchronous Key Prefetch is mainly used to improv e the performance of queries that join tables across indexes. 
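As an aside on the correct option, adding an Aurora Replica to an existing cluster is a single API call. A minimal boto3 sketch, where the cluster and instance identifiers and the instance class are placeholders:

    import boto3

    rds = boto3.client("rds")

    # Add a reader (Aurora Replica) to an existing Aurora MySQL cluster; up to 15 replicas per Region.
    rds.create_db_instance(
        DBInstanceIdentifier="learning-portal-reader-1",  # hypothetical identifier
        DBClusterIdentifier="learning-portal-cluster",    # assumed existing cluster
        Engine="aurora-mysql",
        DBInstanceClass="db.r5.large",
    )

If the primary instance fails, one of these replicas can be promoted, which is what makes them useful for availability and not just for read scaling.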
References: https://docs.aws.amazon.com/AmazonRDS/latest/UserGu ide/AuroraMySQL.BestPractices.html https://aws.amazon.com/rds/aurora/faqs/ 67 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Amazon Aurora Overview: https://youtu.be/iwS1h7rLNBQ Check out this Amazon Aurora Cheat Sheet: https://tutorialsdojo.com/amazon-aurora/", "references": "" }, { "question": ": A global news network created a CloudFront distribu tion for their web application. However, you notice d that the application's origin server is being hit f or each request instead of the AWS Edge locations, which serve the cached objects. The issue occurs even for the commonly requested objects. What could be a possible cause of this issue?", "options": [ "A. The file sizes of the cached objects are too l arge for CloudFront to handle.", "B. An object is only cached by Cloudfront once a successful request has been made hence, the", "C. There are two primary origins configured in yo ur Amazon CloudFront Origin Group.", "D. The Cache-Control max-age directive is set to zero." ], "correct": "D. The Cache-Control max-age directive is set to zero.", "explanation": "Explanation You can control how long your objects stay in a Clo udFront cache before CloudFront forwards another request to your origin. Reducing the duration allow s you to serve dynamic content. Increasing the dura tion means your users get better performance because you r objects are more likely to be served directly fro m the edge cache. A longer duration also reduces the load on your origin. Typically, CloudFront serves an object from an edge location until the cache duration that you specifi ed passes -- that is, until the object expires. After it expires, the next time the edge location gets a user request for the object, CloudFront forwards the request to the origin server to verify that the cache contains the latest version of the object. 68 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam The Cache-Control and Expires headers control how l ong objects stay in the cache. The Cache-Control max-age directive lets you specify how long (in sec onds) you want an object to remain in the cache bef ore CloudFront gets the object again from the origin server. The m inimum expiration time CloudFront supports is 0 seconds for web distributions and 3600 seconds for RTMP distributions. In this scenario, the main culprit is that the Cach e-Control max-age directive is set to a low value, which is why the request is always directed to your origin s erver. Hence, the correct answer is: The Cache-Control max -age directive is set to zero. The option that says: An object is only cached by C loudFront once a successful request has been made hence, the objects were not requested before, which is why the request is still directed to the origin server is incorrect because the issue also occurs e ven for the commonly requested objects. This means that these objects were successfully requested before bu t due to a zero Cache-Control max-age directive val ue, it causes this issue in CloudFront. 69 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam The option that says: The file sizes of the cached objects are too large for CloudFront to handle is incorrect because this is not related to the issue in caching. The option that says: There are two primary origins configured in your Amazon CloudFront Origin Group is incorrect because you cannot set two origi ns in CloudFront in the first place. 
An origin grou p includes two origins which are the primary origin a nd the second origin that will be used for the actu al failover. It also includes the failover criteria th at you need to specify. In this scenario, the issue is more on the cache hit ratio and not about origin failovers.", "references": "http://docs.aws.amazon.com/AmazonCloudFront/latest/ DeveloperGuide/Expiration.html Check out this Amazon CloudFront Cheat Sheet: https://tutorialsdojo.com/amazon-cloudfront/" }, { "question": ": A startup is building IoT devices and monitoring ap plications. They are using IoT sensors to monitor t he traffic in real-time by using an Amazon Kinesis Str eam that is configured with default settings. It th en sends the data to an Amazon S3 bucket every 3 days. When you checked the data in S3 on the 3rd day, on ly the data for the last day is present and no data is present from 2 days ago. Which of the following is the MOST likely cause of this issue?", "options": [ "A. Someone has manually deleted the record in Ama zon S3.", "B. Amazon S3 bucket has encountered a data loss.", "C. The access of the Kinesis stream to the S3 buc ket is insufficient.", "D. By default, data records in Kinesis are only a ccessible for 24 hours from the time they are" ], "correct": "D. By default, data records in Kinesis are only a ccessible for 24 hours from the time they are", "explanation": "Explanation By default, records of a stream in Amazon Kinesis a re accessible for up to 24 hours from the time they are added to the stream. You can raise this limit to up to 7 days by enabling extended data retention. 70 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Hence, the correct answer is: By default, data reco rds in Kinesis are only accessible for 24 hours fro m the time they are added to a stream. The option that says: Amazon S3 bucket has encounte red a data loss is incorrect because Amazon S3 rarely experiences data loss. Amazon has an SLA for S3 that it commits to its customers. Amazon S3 Standard, S3 StandardIA, S3 One Zone-IA, and S3 Gla cier are all designed to provide 99.999999999% durability of objects over a given year. This durab ility level corresponds to an average annual expect ed loss of 0.000000001% of objects. Hence, Amazon S3 bucket data loss is highly unlikely. The option that says: Someone has manually deleted the record in Amazon S3 is incorrect because if someone has deleted the data, this should have been visible in CloudTrail. Also, deleting that much da ta manually shouldn't have occurred in the first place if you have put in the appropriate security measur es. The option that says: The access of the Kinesis str eam to the S3 bucket is insufficient is incorrect because having insufficient access is highly unlike ly since you are able to access the bucket and view the contents of the previous day's data collected by Ki nesis.", "references": "https://aws.amazon.com/kinesis/data-streams/faqs/ https://docs.aws.amazon.com/AmazonS3/latest/dev/Dat aDurability.html Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/ 71 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam" }, { "question": ": A Solutions Architect is implementing a new High-Pe rformance Computing (HPC) system in AWS that involves orchestrating several Amazon Elastic Conta iner Service (Amazon ECS) tasks with an EC2 launch type that is part of an Amazon ECS cluster. 
The sys tem will be frequently accessed by users around the globe and it is expected that there would be hundre ds of ECS tasks running most of the time. The Archi tect must ensure that its storage system is optimized fo r high-frequency read and write operations. The out put data of each ECS task is around 10 MB but the obsol ete data will eventually be archived and deleted so the total storage size won't exceed 10 TB. Which of the following is the MOST suitable solutio n that the Architect should recommend?", "options": [ "A. Launch an Amazon Elastic File System (Amazon E FS) file system with Bursting Throughput mode and set the performance mode to Gen eral Purpose. Configure the EFS file", "B. Launch an Amazon Elastic File System (Amazon E FS) with Provisioned Throughput mode", "C. Launch an Amazon DynamoDB table with Amazon Dy namoDB Accelerator (DAX) and", "D. Set up an SMB file share by creating an Amazon FSx File Gateway in Storage Gateway. Set" ], "correct": "B. Launch an Amazon Elastic File System (Amazon E FS) with Provisioned Throughput mode", "explanation": "Explanation Amazon Elastic File System (Amazon EFS) provides si mple, scalable file storage for use with your Amazon ECS tasks. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files. Your applications can hav e the storage they need when they need it. You can use Amazon EFS file systems with Amazon ECS to access file system data across your fleet of Amazon ECS tasks. That way, your tasks have access to the same persistent storage, no matter the infrastructure or container instance on which they land. When you reference your Amazon EFS file syste m and container mount point in your Amazon ECS task d efinition, Amazon ECS takes care of mounting the file system in your container. 72 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam To support a wide variety of cloud storage workload s, Amazon EFS offers two performance modes: - General Purpose mode - Max I/O mode. You choose a file system's performance mode when yo u create it, and it cannot be changed. The two performance modes have no additional costs, so your Amazon EFS file system is billed and metered the same, regardless of your performance mode. There are two throughput modes to choose from for y our file system: - Bursting Throughput 73 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam - Provisioned Throughput With Bursting Throughput mode, a file system's thro ughput scales as the amount of data stored in the E FS Standard or One Zone storage class grows. File-base d workloads are typically spiky, driving high level s of throughput for short periods of time, and low level s of throughput the rest of the time. To accommodat e this, Amazon EFS is designed to burst to high throu ghput levels for periods of time. Provisioned Throughput mode is available for applic ations with high throughput to storage (MiB/s per T iB) ratios, or with requirements greater than those all owed by the Bursting Throughput mode. For example, say you're using Amazon EFS for development tools, web serving, or content management applications where the amount of data in your file system is low relat ive to throughput demands. Your file system can now get the high levels of throughput your applications req uire without having to pad your file system. 
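A minimal boto3 sketch of the kind of file system described in the correct option follows; the creation token, throughput figure, and tag are illustrative assumptions rather than values from the scenario.

    import boto3

    efs = boto3.client("efs")

    # Max I/O performance mode plus Provisioned Throughput for a busy, shared workload.
    fs = efs.create_file_system(
        CreationToken="hpc-ecs-shared-storage",  # idempotency token (placeholder)
        PerformanceMode="maxIO",
        ThroughputMode="provisioned",
        ProvisionedThroughputInMibps=512,        # sized for the expected aggregate load (assumption)
        Tags=[{"Key": "Name", "Value": "ecs-task-output"}],
    )
    print(fs["FileSystemId"])
    # Mount targets still need to be created in each subnet before ECS tasks can mount the file system.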
In the scenario, the file system will be frequently accessed by users around the globe so it is expect ed that there would be hundreds of ECS tasks running most o f the time. The Architect must ensure that its stor age system is optimized for high-frequency read and wri te operations. Hence, the correct answer is: Launch an Amazon Elas tic File System (Amazon EFS) with Provisioned Throughput mode and set the performance mode to Max I/O. Configure the EFS file system as the container mount point in the ECS task definition of the Amazon ECS cluster. The option that says: Set up an SMB file share by c reating an Amazon FSx File Gateway in Storage Gateway. Set the file share as the container mount point in the ECS task definition of the Amazon ECS cluster is incorrect. Although you can use an A mazon FSx for Windows File Server in this situation , it is not appropriate to use this since the applica tion is not connected to an on-premises data center . Take note that the AWS Storage Gateway service is primar ily used to integrate your existing on-premises sto rage to AWS. The option that says: Launch an Amazon Elastic File System (Amazon EFS) file system with Bursting Throughput mode and set the performance mode to Gen eral Purpose. Configure the EFS file system as the container mount point in the ECS task defini tion of the Amazon ECS cluster is incorrect because using Bursting Throughput mode won't be abl e to sustain the constant demand of the global application. Remember that the application will be frequently accessed by users around the world and t here are hundreds of ECS tasks running most of the time. The option that says: Launch an Amazon DynamoDB tab le with Amazon DynamoDB Accelerator (DAX) and DynamoDB Streams enabled. Configure the t able to be accessible by all Amazon ECS cluster instances. Set the DynamoDB table as the co ntainer mount point in the ECS task definition of the Amazon ECS cluster is incorrect because you can not directly set a DynamoDB table as a container mount point. In the first place, DynamoDB is a data base and not a file system which means that it can' t be \"mounted\" to a server. 74 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam References: https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/tutorial-efs-volumes.html https://docs.aws.amazon.com/efs/latest/ug/performan ce.html https://docs.aws.amazon.com/AmazonECS/latest/develo perguide/tutorial-wfsx-volumes.html Check out this Amazon EFS Cheat Sheet: https://tutorialsdojo.com/amazon-efs/", "references": "" }, { "question": ": A company has a fleet of running Spot EC2 instances behind an Application Load Balancer. The incoming traffic comes from various users across multiple AW S regions and you would like to have the user's ses sion shared among the fleet of instances. You are requir ed to set up a distributed session management layer that will provide a scalable and shared data storage for the user sessions. Which of the following would be the best choice to meet the requirement while still providing sub- millisecond latency for the users?", "options": [ "A. Multi-AZ RDS", "B. Elastic ache in-memory caching", "C. Multi-master DynamoDB", "D. ELB sticky sessions" ], "correct": "B. Elastic ache in-memory caching", "explanation": "Explanation For sub-millisecond latency caching, ElastiCache is the best choice. 
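To sketch what that shared session layer can look like in application code before going further, here is a small illustrative example using the redis-py client against an ElastiCache for Redis endpoint; the endpoint hostname, key prefix, and TTL are assumptions rather than values from the scenario.

    import json
    import redis  # redis-py client

    # ElastiCache for Redis primary endpoint (placeholder hostname).
    r = redis.Redis(host="sessions.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)

    def save_session(session_id, data, ttl_seconds=3600):
        # Any instance in the fleet can read this key afterwards, so no sticky sessions are needed.
        r.setex("session:" + session_id, ttl_seconds, json.dumps(data))

    def load_session(session_id):
        raw = r.get("session:" + session_id)
        return json.loads(raw) if raw else None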
In order to address scalability a nd to provide a shared data storage for sessions that can be accessed from any individual web server, you ca n abstract the HTTP sessions from the web servers the mselves. A common solution for this is to leverage an In-Memory Key/Value store such as Redis and Memcach ed. 75 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam ELB sticky sessions is incorrect because the scenar io does not require you to route a user to the part icular web server that is managing that individual user's session. Since the session state is shared among th e instances, the use of the ELB sticky sessions featu re is not recommended in this scenario. Multi-master DynamoDB and Multi-AZ RDS are incorrec t. Although you can use DynamoDB and RDS for storing session state, these two are not the be st choices in terms of cost-effectiveness and perfo rmance when compared to ElastiCache. There is a significan t difference in terms of latency if you used Dynamo DB and RDS when you store the session data. References : https://aws.amazon.com/caching/session-management/ https://d0.awsstatic.com/whitepapers/performance-at -scale-with-amazon-elasticache.pdf Check out this Amazon Elasticache Cheat Sheet: https://tutorialsdojo.com/amazon-elasticache/ Redis (cluster mode enabled vs disabled) vs Memcach ed: https://tutorialsdojo.com/redis-cluster-mode-enable d-vs-disabled-vs-memcached/", "references": "" }, { "question": ": 76 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam A company is planning to deploy a High Performance Computing (HPC) cluster in its VPC that requires a scalable, high-performance file system. The storage service must be optimized for efficient workload processing, and the data must be accessible via a f ast and scalable file system interface. It should a lso work natively with Amazon S3 that enables you to easily process your S3 data with a high-performance POSIX interface. Which of the following is the MOST suitable service that you should use for this scenario?", "options": [ "A. Amazon Elastic File System (EFS) B. Amazon Elastic Block Storage (EBS)", "C. Amazon FSx for Lustre", "D. Amazon FSx for Windows File Server" ], "correct": "C. Amazon FSx for Lustre", "explanation": "Explanation Amazon FSx for Lustre provides a high-performance f ile system optimized for fast processing of workloads such as machine learning, high performanc e computing (HPC), video processing, financial modeling, and electronic design automation (EDA). T hese workloads commonly require data to be presented via a fast and scalable file system inter face, and typically have data sets stored on long-t erm data stores like Amazon S3. Operating high-performance file systems typically r equire specialized expertise and administrative overhead, requiring you to provision storage server s and tune complex performance parameters. With Amazon FSx, you can launch and run a file system th at provides sub-millisecond access to your data and allows you to read and write data at speeds of up t o hundreds of gigabytes per second of throughput an d millions of IOPS. Amazon FSx for Lustre works natively with Amazon S3 , making it easy for you to process cloud data sets with high-performance file systems. When linked to an S3 bucket, an FSx for Lustre file system transparently presents S3 objects as files and allo ws you to write results back to S3. 
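A rough boto3 sketch of creating an S3-linked FSx for Lustre file system is shown below; the subnet ID, storage capacity, and bucket paths are placeholders, and the available Lustre options depend on the deployment type chosen.

    import boto3

    fsx = boto3.client("fsx")

    fs = fsx.create_file_system(
        FileSystemType="LUSTRE",
        StorageCapacity=1200,                    # GiB (placeholder size)
        SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet
        LustreConfiguration={
            "ImportPath": "s3://hpc-input-data",          # S3 objects show up as files
            "ExportPath": "s3://hpc-input-data/results",  # processed results can be written back to S3
        },
    )
    print(fs["FileSystem"]["FileSystemId"])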
You can also us e FSx for Lustre as a standalone high-performance file sy stem to burst your workloads from on-premises to th e cloud. By copying on-premises data to an FSx for Lu stre file system, you can make that data available for fast processing by compute instances running on AWS . With Amazon FSx, you pay for only the resources you use. There are no minimum commitments, upfront hardware or software costs, or additional fees. 77 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam For Windows-based applications, Amazon FSx provides fully managed Windows file servers with features and performance optimized for \"lift-and-shift\" busi ness-critical application workloads including home directories (user shares), media workflows, and ERP applications. It is accessible from Windows and Li nux instances via the SMB protocol. If you have Linux-b ased applications, Amazon EFS is a cloud-native ful ly managed file system that provides simple, scalable, elastic file storage accessible from Linux instanc es via the NFS protocol. For compute-intensive and fast processing workloads , like high-performance computing (HPC), machine learning, EDA, and media processing, Amazon FSx for Lustre, provides a file system that's optimized fo r performance, with input and output stored on Amazon S3. Hence, the correct answer is: Amazon FSx for Lustre . Amazon Elastic File System (EFS) is incorrect becau se although the EFS service can be used for HPC applications, it doesn't natively work with Amazon S3. It doesn't have the capability to easily proces s your S3 data with a high-performance POSIX interface, un like Amazon FSx for Lustre. Amazon FSx for Windows File Server is incorrect bec ause although this service is a type of Amazon FSx,it does not work natively with Amazon S3. This serv ice is a fully managed native Microsoft Windows fil e system that is primarily used for your Windows-base d applications that require shared file storage to AWS. Amazon Elastic Block Storage (EBS) is incorrect bec ause this service is not a scalable, high-performan ce file system. References: https://aws.amazon.com/fsx/lustre/ https://aws.amazon.com/getting-started/use-cases/hp c/3/ Check out this Amazon FSx Cheat Sheet: 78 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam https://tutorialsdojo.com/amazon-fsx/", "references": "" }, { "question": ": A Solutions Architect created a brand new IAM User with a default setting using AWS CLI. This is intended to be used to send API requests to Amazon S3, DynamoDB, Lambda, and other AWS resources of the company's cloud infrastructure. Which of the following must be done to allow the us er to make API calls to the AWS resources?", "options": [ "A. Do nothing as the IAM User is already capable of sending API calls to your AWS resources.", "B. Enable Multi-Factor Authentication for the use r.", "C. Create a set of Access Keys for the user and a ttach the necessary permissions.", "D. Assign an IAM Policy to the user to allow it t o send API calls." ], "correct": "C. Create a set of Access Keys for the user and a ttach the necessary permissions.", "explanation": "Explanation You can choose the credentials that are right for y our IAM user. When you use the AWS Management Console to create a user, you must choose to at lea st include a console password or access keys. By de fault, a brand new IAM user created using the AWS CLI or A WS API has no credentials of any kind. 
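A minimal boto3 sketch of issuing those programmatic credentials and attaching permissions follows; the user name and managed policy ARN are placeholders only.

    import boto3

    iam = boto3.client("iam")

    # Grant only the permissions the user actually needs (placeholder managed policy).
    iam.attach_user_policy(
        UserName="api-integration-user",
        PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess",
    )

    # Create the access key pair used to sign AWS CLI/SDK requests.
    key = iam.create_access_key(UserName="api-integration-user")
    print(key["AccessKey"]["AccessKeyId"])
    # The SecretAccessKey is returned only once at creation time, so store it securely.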
You must create the type of credentials for an IAM user base d on the needs of your user. Access keys are long-term credentials for an IAM us er or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI o r AWS API (directly or using the AWS SDK). Users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or dire ct HTTP calls using the APIs for individual AWS services. To fill this need, you can create, modify, view, or rotate access keys (access key IDs and secret acce ss keys) for IAM users. When you create an access key, IAM r eturns the access key ID and secret access key. You should save these in a secure location and give the m to the user. 79 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam The option that says: Do nothing as the IAM User is already capable of sending API calls to your AWS resources is incorrect because by default, a brand new IAM user created using the AWS CLI or AWS API has no credentials of any kind. Take note that in t he scenario, you created the new IAM user using the AWS CLI and not via the AWS Management Console, whe re you must choose to at least include a console password or access keys when creating a new IAM use r. Enabling Multi-Factor Authentication for the user i s incorrect because this will still not provide the required Access Keys needed to send API calls to yo ur AWS resources. You have to grant the IAM user with Access Keys to meet the requirement. Assigning an IAM Policy to the user to allow it to send API calls is incorrect because adding a new IA M policy to the new user will not grant the needed Ac cess Keys needed to make API calls to the AWS resources. References: https://docs.aws.amazon.com/IAM/latest/UserGuide/id _credentials_access-keys.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id _users.html#id_users_creds Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-m anagement-iam/", "references": "" }, { "question": ": A company plans to implement a network monitoring s ystem in AWS. The Solutions Architect launched an EC2 instance to host the monitoring system and used CloudWatch to monitor, store, and access the log f iles of the instance. 80 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Which of the following provides an automated way to send log data to CloudWatch Logs from the Amazon EC2 instance?", "options": [ "A. AWS Transfer for SFTP", "B. CloudTrail with log file validation", "C. CloudWatch Logs agent", "D. CloudTrail Processing Library" ], "correct": "C. CloudWatch Logs agent", "explanation": "Explanation CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service. You can then easily view them, search the m for specific error codes or patterns, filter them based on specific fields, or archive them securely for f uture analysis. CloudWatch Logs enables you to see all of your logs, regardless of their source, as a single and consistent flow of events ordered by time, and you can query them and sort them based on other dimensions, group them by specific fields, create c ustom computations with a powerful query language, and visualize log data in dashboards. 
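As a small illustration of that search capability once log data is flowing in, the boto3 call below pulls recent events matching a filter pattern; the log group name and pattern are placeholders.

    import time
    import boto3

    logs = boto3.client("logs")

    # Fetch the last hour of events containing "ERROR" from the monitoring system's log group.
    resp = logs.filter_log_events(
        logGroupName="/monitoring-system/app",       # hypothetical log group
        filterPattern="ERROR",
        startTime=int((time.time() - 3600) * 1000),  # CloudWatch Logs timestamps are epoch milliseconds
    )
    for event in resp["events"]:
        print(event["timestamp"], event["message"])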
The CloudWatch Logs agent is comprised of the follo wing components: 81 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam - A plug-in to the AWS CLI that pushes log data to CloudWatch Logs. - A script (daemon) that initiates the process to p ush data to CloudWatch Logs. - A cron job that ensures that the daemon is always running. CloudWatch Logs agent provides an automated way to send log data to CloudWatch Logs from Amazon EC2 instances hence, CloudWatch Logs agent is the c orrect answer. CloudTrail with log file validation is incorrect as this is mainly used for tracking the API calls of your AWS resources and not for sending EC2 logs to Cloud Watch. AWS Transfer for SFTP is incorrect as this is only a fully managed SFTP service for Amazon S3 used for tracking the traffic coming into the VPC and not fo r EC2 instance monitoring. This service enables you to easily move your file transfer workloads that use the Secure Sh ell File Transfer Protocol (SFTP) to AWS without needing to modify your applications or manage any S FTP servers. This can't be used to send log data fr om your EC2 instance to Amazon CloudWatch. CloudTrail Processing Library is incorrect because this is just a Java library that provides an easy w ay to process AWS CloudTrail logs. It cannot send your lo g data to CloudWatch Logs. References: https://docs.aws.amazon.com/AmazonCloudWatch/latest /logs/WhatIsCloudWatchLogs.html https://docs.aws.amazon.com/AmazonCloudWatch/latest /logs/AgentReference.html Check out this Amazon CloudWatch Cheat Sheet: https://tutorialsdojo.com/amazon-cloudwatch/", "references": "" }, { "question": ": A cryptocurrency company wants to go global with it s international money transfer app. Your project is to make sure that the database of the app is highly av ailable in multiple regions. What are the benefits of adding Multi-AZ deployment s in Amazon RDS? (Select TWO.)", "options": [ "A. Provides SQL optimization.", "B. Increased database availability in the case of system upgrades like OS patching or DB", "C. Provides enhanced database durability in the e vent of a DB instance component failure or", "D. Significantly increases the database performan ce." ], "correct": "", "explanation": "Explanation Amazon RDS Multi-AZ deployments provide enhanced av ailability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a pri mary DB Instance and synchronously replicates the data to a standby instance in a different Availabil ity Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engine ered to be highly reliable. In case of an infrastructure failure, Amazon RDS pe rforms an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your D B Instance remains the same after a failover, your application can resume database operation without t he need for manual administrative intervention. 83 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam The chief benefits of running your DB instance as a Multi-AZ deployment are enhanced database durabili ty and availability. The increased availability and fault tolerance offered by Multi-AZ deployments make them a natural fit for production environments. 
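For reference, enabling Multi-AZ on an existing RDS instance is a single modification call, sketched below with a placeholder instance identifier.

    import boto3

    rds = boto3.client("rds")

    # Convert a Single-AZ instance to Multi-AZ (synchronous standby in another AZ).
    rds.modify_db_instance(
        DBInstanceIdentifier="money-transfer-db",  # placeholder identifier
        MultiAZ=True,
        ApplyImmediately=True,  # otherwise the change waits for the next maintenance window
    )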
Hence, the correct answers are the following option s: - Increased database availability in the case of sy stem upgrades like OS patching or DB Instance scaling. - Provides enhanced database durability in the even t of a DB instance component failure or an Availability Zone outage. The option that says: Creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone ( AZ) in a different region is almost correct. RDS 84 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam synchronously replicates the data to a standby inst ance in a different Availability Zone (AZ) that is in the same region and not in a different one. The options that say: Significantly increases the d atabase performance and Provides SQL optimization are incorrect as it does not affect the performance nor provide SQL optimization. References: https://aws.amazon.com/rds/details/multi-az/ https://aws.amazon.com/rds/faqs/ Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-databas e-service-amazon-rds/", "references": "" }, { "question": ": A company has an application architecture that stor es both the access key ID and the secret access key in a plain text file on a custom Amazon Machine Image (A MI). The EC2 instances, which are created by using this AMI, are using the stored access keys to conne ct to a DynamoDB table. What should the Solutions Architect do to make the current architecture more secure?", "options": [ "A. Do nothing. The architecture is already secure because the access keys are already in the", "B. Remove the stored access keys in the AMI. Crea te a new IAM role with permissions to", "C. Put the access keys in an Amazon S3 bucket ins tead.", "D. Put the access keys in Amazon Glacier instead. Correct Answer: B", "A. You should restart the EC2 instance since ther e might be some issue with the instance", "B. Adjust the security group to allow inbound tra ffic on port 3389 from the company's IP address.", "C. Adjust the security group to allow inbound tra ffic on port 22 from the company's IP", "D. You should create a new instance since there m ight be some issue with the instance" ], "correct": "B. Adjust the security group to allow inbound tra ffic on port 3389 from the company's IP address.", "explanation": "Explanation 86 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Since you are using a Remote Desktop connection to access your EC2 instance, you have to ensure that t he Remote Desktop Protocol is allowed in the security group. By default, the server listens on TCP port 3 389 and UDP port 3389. Hence, the correct answer is: Adjust the security g roup to allow inbound traffic on port 3389 from the company's IP address. The option that says: Adjust the security group to allow inbound traffic on port 22 from the company's IP address is incorrect as port 22 is use d for SSH connections and not for RDP. The options that say: You should restart the EC2 in stance since there might be some issue with the instance and You should create a new instance since there might be some issue with the instance are incorrect as the EC2 instance is newly created and hence, unlikely to cause the issue. 
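The fix in the correct option comes down to a single ingress rule, sketched below with boto3; the security group ID and office CIDR are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Allow RDP (TCP 3389) only from the company's public IP range.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 3389,
            "ToPort": 3389,
            "IpRanges": [{"CidrIp": "203.0.113.25/32", "Description": "Office IP for RDP"}],
        }],
    )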
You have to che ck the security group first if it allows the Remote Deskto p Protocol (3389) before investigating if there is indeed an issue on the specific instance.", "references": "https://docs.aws.amazon.com/AWSEC2/latest/WindowsGu ide/troubleshooting-windows- instances.html#rdp-issues https://docs.aws.amazon.com/vpc/latest/userguide/VP C_SecurityGroups.html Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/" }, { "question": ": A Solutions Architect is working for a weather stat ion in Asia with a weather monitoring system that n eeds to be migrated to AWS. Since the monitoring system requires a low network latency and high network throughput, the Architect decided to launch the EC2 instances to a new cluster placement group. The system was working fine for a couple of weeks, howe ver, when they try to add new instances to the placement group that already has running EC2 instan ces, they receive an 'insufficient capacity error'. How will the Architect fix this issue?", "options": [ "A. Stop and restart the instances in the Placemen t Group and then try the launch again.", "B. Verify all running instances are of the same s ize and type and then try the launch again.", "C. Create another Placement Group and launch the new instances in the new group.", "D. Submit a capacity increase request to AWS as y ou are initially limited to only 12 instances" ], "correct": "A. Stop and restart the instances in the Placemen t Group and then try the launch again.", "explanation": "Explanation A cluster placement group is a logical grouping of instances within a single Availability Zone. A clus ter placement group can span peered VPCs in the same Re gion. Instances in the same cluster placement group enjoy a higher per-flow throughput limit for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network. It is recommended that you launch the number of ins tances that you need in the placement group in a si ngle launch request and that you use the same instance t ype for all instances in the placement group. If yo u try to add more instances to the placement group later, or if you try to launch more than one instance type i n the placement group, you increase your chances of getti ng an insufficient capacity error. If you stop an i nstance 88 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam in a placement group and then start it again, it st ill runs in the placement group. However, the start fails if there isn't enough capacity for the instance. If you receive a capacity error when launching an i nstance in a placement group that already has runni ng instances, stop and start all of the instances in t he placement group, and try the launch again. Resta rting the instances may migrate them to hardware that has cap acity for all the requested instances. Stop and restart the instances in the Placement gro up and then try the launch again can resolve this i ssue. If the instances are stopped and restarted, AWS may mo ve the instances to a hardware that has the capacit y for all the requested instances. Hence, the correct answer is: Stop and restart the instances in the Placement Group and then try the launch again. The option that says: Create another Placement Grou p and launch the new instances in the new group is incorrect because to benefit from the enhanced n etworking, all the instances should be in the same Placement Group. 
Launching the new ones in a new Pl acement Group will not work in this case. The option that says: Verify all running instances are of the same size and type and then try the laun ch again is incorrect because the capacity error is no t related to the instance size. The option that says: Submit a capacity increase re quest to AWS as you are initially limited to only 1 2 instances per Placement Group is incorrect because there is no such limit on the number of instances i n a Placement Group. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide /placement-groups.html#placement-groups- cluster http://docs.amazonaws.cn/en_us/AWSEC2/latest/UserGu ide/troubleshooting-launch.html#troubleshooting- launch-capacity Check out this Amazon EC2 Cheat Sheet: https://tutorialsdojo.com/amazon-elastic-compute-cl oud-amazon-ec2/", "references": "" }, { "question": ": There is a technical requirement by a financial fir m that does online credit card processing to have a secure application environment on AWS. They are try ing to decide on whether to use KMS or CloudHSM. Which of the following statements is righ t when it comes to Cloud HSM and KMS? 89 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam", "options": [ "A. If you want a managed service for creating and controlling your encryption keys but don't", "B. AWS Cloud HSM should always be used for any pa yment transactions.", "C. You should consider using AWS Cloud HSM over A WS KMS if you require your keys" ], "correct": "", "explanation": "Explanation AWS Key Management Service (AWS KMS) is a managed s ervice that makes it easy for you to create and control the encryption keys used to encrypt you r data. The master keys that you create in AWS KMS are protected by FIPS 140-2 validated cryptographic modules. AWS KMS is integrated with most other AWS services that encrypt your data with encryption keys that you manage. AWS KMS is also integrated with AWS CloudTrail to provide encryption key usage logs to help meet your auditing, regulatory and compliance needs. By using AWS KMS, you gain more control over access to data you encrypt. You can use the key management and cryptographic features directly in y our applications or through AWS services that are integrated with AWS KMS. Whether you are writing ap plications for AWS or using AWS services, AWS KMS enables you to maintain control over who can us e your customer master keys and gain access to your encrypted data. AWS KMS is integrated with AWS Clou dTrail, a service that delivers log files to an Amazon S3 bucket that you designate. By using Cloud Trail you can monitor and investigate how and when your master keys have been used and by whom. 90 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam If you want a managed service for creating and cont rolling your encryption keys, but you don't want or need to operate your own HSM, consider using AWS Ke y Management Service. Hence, the correct answer is: You should consider u sing AWS CloudHSM over AWS KMS if you require your keys stored in dedicated, third-party validate d hardware security modules under your exclusive control. The option that says: No major difference. They bot h do the same thing is incorrect because KMS and CloudHSM are two different services. If you want a managed service for creating and controlling your encryption keys, without operating your own HSM, yo u have to consider using AWS Key Management Service. 
The option that says: If you want a managed service for creating and controlling your encryption keys, but you don't want or need to operate your own HSM, consider using AWS CloudHSM is incorrect because you have to consider using AWS KMS if you want a managed service for creating and controlling your encryption keys without operating your own HSM.

The option that says: AWS CloudHSM should always be used for any payment transactions is incorrect because this is not always the case. AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud.

References:
https://docs.aws.amazon.com/kms/latest/developerguide/overview.html
https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#data-keys
https://docs.aws.amazon.com/cloudhsm/latest/userguide/introduction.html

Check out this AWS Key Management Service Cheat Sheet:
https://tutorialsdojo.com/aws-key-management-service-aws-kms/", "references": "" }, { "question": ": A manufacturing company launched a new type of IoT sensor. The sensor will be used to collect large streams of data records. You need to create a solution that can ingest and analyze the data in real-time with millisecond response times. Which of the following is the best option that you should implement in this scenario?", "options": [ "A. Ingest the data using Amazon Simple Queue Service and create an AWS Lambda function", "B. Ingest the data using Amazon Kinesis Data Firehose and create an AWS Lambda function", "C. Ingest the data using Amazon Kinesis Data Streams and create an AWS Lambda function", "D. Ingest the data using Amazon Kinesis Data Streams and create an AWS Lambda function" ], "correct": "D. Ingest the data using Amazon Kinesis Data Streams and create an AWS Lambda function", "explanation": "Explanation Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. You can continuously add various types of data such as clickstreams, application logs, and social media to an Amazon Kinesis data stream from hundreds of thousands of sources. Within seconds, the data will be available for your Amazon Kinesis Applications to read and process from the stream. Based on the given scenario, the key points are "ingest and analyze the data in real-time" and "millisecond response times". For the first key point and based on the given options, you can use Amazon Kinesis Data Streams because it can collect and process large streams of data records in real-time. For the second key point, you should use Amazon DynamoDB since it supports single-digit millisecond response times at any scale. Hence, the correct answer is: Ingest the data using Amazon Kinesis Data Streams and create an AWS Lambda function to store the data in Amazon DynamoDB. The option that says: Ingest the data using Amazon Kinesis Data Streams and create an AWS Lambda function to store the data in Amazon Redshift is incorrect because Amazon Redshift only delivers sub-second response times. Take note that as per the scenario, the solution must have millisecond response times to meet the requirements. Amazon DynamoDB Accelerator (DAX), which is a fully managed, highly available, in-memory cache for Amazon DynamoDB, can deliver microsecond response times.
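Going back to the correct option, the usual pattern is a Lambda function with a Kinesis event source mapping that decodes each record and writes it to DynamoDB. A minimal sketch, with a hypothetical table name:

    import base64
    import json
    from decimal import Decimal

    import boto3

    # Hypothetical DynamoDB table that stores the decoded sensor readings.
    table = boto3.resource("dynamodb").Table("SensorReadings")

    def handler(event, context):
        # Records from a Kinesis event source arrive base64-encoded.
        for record in event["Records"]:
            payload = json.loads(
                base64.b64decode(record["kinesis"]["data"]),
                parse_float=Decimal,  # DynamoDB expects Decimal rather than float
            )
            table.put_item(Item=payload)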
The option that says: Ingest the data using Amazon Kinesis Data Firehose and create an AWS Lambda function to store the data in Amazon DynamoDB is incorrect. Amazon Kinesis Data Firehose only supports Amazon S3, Amazon Redshift, Amazon Elasticsearch, and an HTTP endpoint as the destination.

The option that says: Ingest the data using Amazon Simple Queue Service and create an AWS Lambda function to store the data in Amazon Redshift is incorrect because Amazon SQS can't analyze data in real-time. You have to use an Amazon Kinesis Data Stream to process the data in near-real-time.

References:
https://aws.amazon.com/kinesis/data-streams/faqs/
https://aws.amazon.com/dynamodb/

Check out this Amazon Kinesis Cheat Sheet:
https://tutorialsdojo.com/amazon-kinesis/", "references": "" }, { "question": ": A software development company has hundreds of Amazon EC2 instances with multiple Application Load Balancers (ALBs) across multiple AWS Regions. The public applications hosted in their EC2 instances are accessed on their on-premises network. The company needs to reduce the number of IP addresses that it needs to regularly whitelist on the corporate firewall device. Which of the following approaches can be used to fulfill this requirement?", "options": [ "A. Create a new Lambda function that tracks the changes in the IP addresses of all ALBs across multiple AWS Regions. Schedule the function to run and update the corporate firewall", "B. Use AWS Global Accelerator and create an endpoint group for each AWS Region. Associate", "C. Use AWS Global Accelerator and create multiple endpoints for all the available AWS", "D. Launch a Network Load Balancer with an associated Elastic IP address. Set the ALBs in" ], "correct": "B. Use AWS Global Accelerator and create an endpoint group for each AWS Region. Associate", "explanation": "Explanation AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers, or Amazon EC2 instances. When the application usage grows, the number of IP addresses and endpoints that you need to manage also increases. AWS Global Accelerator allows you to scale your network up or down. AWS Global Accelerator lets you associate regional resources, such as load balancers and EC2 instances, to two static IP addresses. You only whitelist these addresses once in your client applications, firewalls, and DNS records. With AWS Global Accelerator, you can add or remove endpoints in the AWS Regions, run blue/green deployments, and A/B test without needing to update the IP addresses in your client applications. This is particularly useful for IoT, retail, media, automotive, and healthcare use cases in which client applications cannot be updated frequently. If you have multiple resources in multiple Regions, you can use AWS Global Accelerator to reduce the number of IP addresses. By creating an endpoint group, you can add all of your EC2 instances from a single Region in that group. You can add additional endpoint groups for instances in other Regions. After that, you can associate the appropriate ALB endpoints to each of your endpoint groups.
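A rough boto3 sketch of that setup follows; the Global Accelerator control-plane API is called in us-west-2, and the accelerator name, listener ports, Region, and ALB ARN are placeholders.

    import boto3

    # The Global Accelerator control plane is served from us-west-2.
    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    acc = ga.create_accelerator(Name="public-apps", IdempotencyToken="demo-accelerator-1")
    accelerator_arn = acc["Accelerator"]["AcceleratorArn"]

    listener = ga.create_listener(
        AcceleratorArn=accelerator_arn,
        Protocol="TCP",
        PortRanges=[{"FromPort": 443, "ToPort": 443}],
        IdempotencyToken="demo-listener-1",
    )

    # One endpoint group per Region, pointing at that Region's Application Load Balancer.
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion="ap-southeast-1",
        EndpointConfigurations=[{
            "EndpointId": "arn:aws:elasticloadbalancing:ap-southeast-1:123456789012:loadbalancer/app/web/0123456789abcdef",  # placeholder ALB ARN
            "Weight": 128,
        }],
        IdempotencyToken="demo-endpoint-group-1",
    )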
The created accelerator would have two static IP addresses that you can use to create a security rule in your firewall device. Instead of regularly adding the Amazon EC2 IP addresses in your firewall, you can use the static IP addresses of AWS Global Accelerator to automate the process and eliminate this repetitive task.

Hence, the correct answer is: Use AWS Global Accelerator and create an endpoint group for each AWS Region. Associate the Application Load Balancer from each region to the corresponding endpoint group.

The option that says: Use AWS Global Accelerator and create multiple endpoints for all the available AWS Regions. Associate all the private IP addresses of the EC2 instances to the corresponding endpoints is incorrect. It is better to create one endpoint group instead of multiple endpoints. Moreover, you have to associate the ALBs in AWS Global Accelerator and not the underlying EC2 instances.

The option that says: Create a new Lambda function that tracks the changes in the IP addresses of all ALBs across multiple AWS Regions. Schedule the function to run and update the corporate firewall every hour using Amazon CloudWatch Events is incorrect because this approach entails a lot of administrative overhead and takes a significant amount of time to implement. Using a custom Lambda function is actually not necessary since you can simply use AWS Global Accelerator to achieve this requirement.

The option that says: Launch a Network Load Balancer with an associated Elastic IP address. Set the ALBs in multiple Regions as targets is incorrect. Although you can allocate an Elastic IP address to your ELB, it is not suitable to route traffic to your ALBs across multiple Regions. You have to use AWS Global Accelerator instead.

References:
https://docs.aws.amazon.com/global-accelerator/latest/dg/about-endpoint-groups.html
https://aws.amazon.com/global-accelerator/faqs/
https://docs.aws.amazon.com/global-accelerator/latest/dg/introduction-how-it-works.html

Check out this AWS Global Accelerator Cheat Sheet:
https://tutorialsdojo.com/aws-global-accelerator/", "references": "" }, { "question": ": A Solutions Architect needs to create a publicly accessible EC2 instance by using an Elastic IP (EIP) address and generate a report on how much it will cost to use that EIP. Which of the following statements is correct regarding the pricing of EIP?", "options": [ "A. There is no cost if the instance is running and it has only one associated EIP.", "B. There is no cost if the instance is terminated and it has only one associated EIP.", "C. There is no cost if the instance is running and it has at least two associated EIP.", "D. There is no cost if the instance is stopped and it has only one associated EIP." ], "correct": "A. There is no cost if the instance is running and it has only one associated EIP.", "explanation": "Explanation An Elastic IP address doesn't incur charges as long as the following conditions are true:
- The Elastic IP address is associated with an Amazon EC2 instance.
- The instance associated with the Elastic IP address is running.
- The instance has only one Elastic IP address attached to it.
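The first of these conditions can be checked programmatically. The short boto3 sketch below lists the allocated Elastic IP addresses that are not associated with any instance and would therefore incur charges; the running-state and single-address conditions would need additional checks.

    import boto3

    ec2 = boto3.client("ec2")

    # Flag Elastic IP addresses that are not associated with an instance (these incur charges).
    for addr in ec2.describe_addresses()["Addresses"]:
        if "AssociationId" not in addr:
            print("Unassociated and billable:", addr["PublicIp"], addr.get("AllocationId"))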
If you've stopped or terminated an EC2 instance with an associated Elastic IP address and you don't need that Elastic IP address anymore, consider disassociating or releasing the Elastic IP address.", "references": "https://aws.amazon.com/premiumsupport/knowledge-center/elastic-ip-charges/ Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate/" }, { "question": ": A startup is building a microservices architecture in which the software is composed of small independent services that communicate over well-defined APIs. In building large-scale systems, fine-grained decoupling of microservices is a recommended practice to implement. The decoupled services should scale horizontally and independently from each other to improve scalability. What is the difference between Horizontal scaling and Vertical scaling?", "options": [ "A. Vertical scaling means running the same software on a fully serverless architecture using", "B. Horizontal scaling means running the same software on smaller containers such as Docker", "C. Horizontal scaling means running the same software on bigger machines which is limited by", "D. Vertical scaling means running the same software on bigger machines which is limited by" ], "correct": "D. Vertical scaling means running the same software on bigger machines which is limited by", "explanation": "Explanation Vertical scaling means running the same software on bigger machines, which is limited by the capacity of the individual server. Horizontal scaling is adding more servers to the existing pool and doesn't run into the limitations of individual servers.

Fine-grained decoupling of microservices is a best practice for building large-scale systems. It's a prerequisite for performance optimization since it allows choosing the appropriate and optimal technologies for a specific service. Each service can be implemented with the appropriate programming languages and frameworks, leverage the optimal data persistence solution, and be fine-tuned with the best performing service configurations.
The option that says: Vertical scaling means running the same software on a fully serverless architecture using Lambda. Horizontal scaling means adding more servers to the existing pool and it doesn't run into limitations of individual servers is incorrect because vertical scaling is not about running the same software on a fully serverless architecture, and AWS Lambda is not required for scaling. The option that says: Horizontal scaling means running the same software on bigger machines which is limited by the capacity of individual servers. Vertical scaling is adding more servers to the existing pool and doesn't run into limitations of individual servers is incorrect because the definitions of the two concepts were switched. Vertical scaling means running the same software on bigger machines, which is limited by the capacity of the individual server. Horizontal scaling is adding more servers to the existing pool and doesn't run into the limitations of individual servers. The option that says: Horizontal scaling means running the same software on smaller containers such as Docker and Kubernetes using ECS or EKS. Vertical scaling is adding more servers to the existing pool and doesn't run into limitations of individual servers is incorrect because horizontal scaling is not about running the same software on smaller ECS or EKS containers.", "references": "https://docs.aws.amazon.com/aws-technical-content/latest/microservices-on-aws/microservices-on-aws.pdf#page=8 Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide: https://tutorialsdojo.com/aws-certified-solutions-architect-associate/" }, { "question": ": A new DevOps engineer has created a CloudFormation template for a web application and she raised a pull request in Git for you to check and review. After checking the template, you immediately told her that the template will not work. Which of the following is the reason why this CloudFormation template will fail to deploy the stack? { \"AWSTemplateFormatVersion\":\"2010-09-09\", \"Parameters\":{ \"VPCId\":{ \"Type\":\"String\", \"Description\":\"manila\" }, \"SubnetId\":{ \"Type\":\"String\", \"Description\":\"subnet-b46032ec\" } }, \"Outputs\":{ \"InstanceId\":{ \"Value\":{ \"Ref\":\"manilaInstance\" }, \"Description\":\"Instance Id\" } } }", "options": [ "A. The Resources section is missing.", "B. The Conditions section is missing.", "C. An invalid section named Parameters is present. This will cause the CloudFormation", "D. The value of the AWSTemplateFormatVersion is incorrect. It should be 2017-06-06." ], "correct": "A. The Resources section is missing.", "explanation": "Explanation In CloudFormation, a template is a JSON- or YAML-formatted text file that describes your AWS infrastructure. Templates include several major sections, and the Resources section is the only required one. Some sections in a template can be in any order. However, as you build your template, it might be helpful to use the logical ordering of the following list, as values in one section might refer to values from a previous section. Take note that all of the sections here are optional, except for Resources, which is the only one required: - Format Version - Description - Metadata - Parameters - Mappings - Conditions - Transform - Resources (required) - Outputs
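As a hedged illustration, the sketch below adds a minimal Resources section to the template from the question (the instance properties are placeholder assumptions) and validates the corrected template with boto3:

import boto3, json

template = {
    'AWSTemplateFormatVersion': '2010-09-09',
    'Parameters': {
        'VPCId': {'Type': 'String', 'Description': 'manila'},
        'SubnetId': {'Type': 'String', 'Description': 'subnet-b46032ec'}
    },
    'Resources': {  # the required section that was missing
        'manilaInstance': {
            'Type': 'AWS::EC2::Instance',
            'Properties': {
                'ImageId': 'ami-12345678',       # placeholder AMI
                'InstanceType': 't3.micro',      # placeholder instance type
                'SubnetId': {'Ref': 'SubnetId'}
            }
        }
    },
    'Outputs': {
        'InstanceId': {'Value': {'Ref': 'manilaInstance'}, 'Description': 'Instance Id'}
    }
}

cloudformation = boto3.client('cloudformation')
# validate_template checks the template structure before you attempt to create the stack.
print(cloudformation.validate_template(TemplateBody=json.dumps(template)))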
", "references": "http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html Check out this AWS CloudFormation Cheat Sheet: https://tutorialsdojo.com/aws-cloudformation/ AWS CloudFormation - Templates, Stacks, Change Sets: https://www.youtube.com/watch?v=9Xpuprxg7aY" }, { "question": ": An online shopping platform has been deployed to AWS using Elastic Beanstalk. They simply uploaded their Node.js application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. Since the entire deployment process is automated, the DevOps team is not sure where to get the application log files of their shopping platform. In Elastic Beanstalk, where does it store the application files and server log files?", "options": [ "A. Application files are stored in S3. The server log files can only be stored in the attached EBS", "B. Application files are stored in S3. The server log files can also optionally be stored in S3 or", "C. Application files are stored in S3. The server log files can be stored directly in Glacier or in", "D. Application files are stored in S3. The server log files can be optionally stored in CloudTrail" ], "correct": "B. Application files are stored in S3. The server log files can also optionally be stored in S3 or", "explanation": "Explanation AWS Elastic Beanstalk stores your application files and, optionally, server log files in Amazon S3. If you are using the AWS Management Console, the AWS Toolkit for Visual Studio, or the AWS Toolkit for Eclipse, an Amazon S3 bucket will be created in your account and the files you upload will be automatically copied from your local client to Amazon S3. Optionally, you may configure Elastic Beanstalk to copy your server log files every hour to Amazon S3 by editing the environment configuration settings. Thus, the correct answer is the option that says: Application files are stored in S3. The server log files can also optionally be stored in S3 or in CloudWatch Logs. With CloudWatch Logs, you can monitor and archive your Elastic Beanstalk application, system, and custom log files from the Amazon EC2 instances of your environments. You can also configure alarms that make it easier for you to react to specific log stream events that your metric filters extract. The CloudWatch Logs agent installed on each Amazon EC2 instance in your environment publishes metric data points to the CloudWatch service for each log group you configure. Each log group applies its own filter patterns to determine what log stream events to send to CloudWatch as data points. Log streams that belong to the same log group share the same retention, monitoring, and access control settings. You can configure Elastic Beanstalk to automatically stream logs to the CloudWatch service.
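As a rough sketch only, boto3 can also pull the tail of an environment's logs on demand; the environment name is a hypothetical placeholder and, in practice, you would poll until the requested information is ready:

import boto3

eb = boto3.client('elasticbeanstalk')
env = 'shopping-platform-env'  # hypothetical environment name
# Ask the instances to bundle the last lines of their logs, then fetch the result.
eb.request_environment_info(EnvironmentName=env, InfoType='tail')
info = eb.retrieve_environment_info(EnvironmentName=env, InfoType='tail')
for item in info['EnvironmentInfo']:
    # Message contains a pre-signed S3 URL pointing to the log tail for that instance.
    print(item['Ec2InstanceId'], item['Message'])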
The option that says: Application files are stored in S3. The server log files can only be stored in the attached EBS volumes of the EC2 instances, which were launched by AWS Elastic Beanstalk is incorrect because the server log files can also be stored in either S3 or CloudWatch Logs, not only on the EBS volumes of the EC2 instances launched by AWS Elastic Beanstalk. The option that says: Application files are stored in S3. The server log files can be stored directly in Glacier or in CloudWatch Logs is incorrect because the server log files can optionally be stored in either S3 or CloudWatch Logs, but not directly in Glacier. You can create a lifecycle policy on the S3 bucket that stores the server logs to archive them to Glacier, but there is no direct way of storing the server logs in Glacier using Elastic Beanstalk unless you do it programmatically. The option that says: Application files are stored in S3. The server log files can be optionally stored in CloudTrail or in CloudWatch Logs is incorrect because the server log files can optionally be stored in either S3 or CloudWatch Logs, but not in CloudTrail, as this service is primarily used for auditing API calls.", "references": "https://aws.amazon.com/elasticbeanstalk/faqs/ AWS Elastic Beanstalk Overview: https://www.youtube.com/watch?v=rx7e7Fej1Oo Check out this AWS Elastic Beanstalk Cheat Sheet: https://tutorialsdojo.com/aws-elastic-beanstalk/" }, { "question": ": A Solutions Architect is trying to enable Cross-Region Replication to an S3 bucket but this option is disabled. Which of the following options is a valid reason for this?", "options": [ "A. In order to use the Cross-Region Replication feature in S3, you need to first enable", "B. The Cross-Region Replication feature is only available for Amazon S3 - One Zone-IA", "C. The Cross-Region Replication feature is only available for Amazon S3 - Infrequent Access.", "D. This is a premium feature which is only for AWS Enterprise accounts." ], "correct": "A. In order to use the Cross-Region Replication feature in S3, you need to first enable", "explanation": "Explanation To enable the cross-region replication feature in S3, the following requirements should be met: - The source and destination buckets must have versioning enabled. - The source and destination buckets must be in different AWS Regions. - Amazon S3 must have permissions to replicate objects from the source bucket to the destination bucket on your behalf. The options that say: The Cross-Region Replication feature is only available for Amazon S3 - One Zone-IA and The Cross-Region Replication feature is only available for Amazon S3 - Infrequent Access are incorrect as this feature is available to all S3 storage classes. The option that says: This is a premium feature which is only for AWS Enterprise accounts is incorrect as the CRR feature is available to all Support Plans.
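As an illustrative sketch of those prerequisites (bucket names and the IAM role ARN below are hypothetical), versioning is enabled on both buckets and a replication configuration is then attached to the source bucket:

import boto3

s3 = boto3.client('s3')
# Versioning must be enabled on BOTH the source and the destination bucket.
for bucket in ['source-bucket-example', 'destination-bucket-example']:
    s3.put_bucket_versioning(Bucket=bucket, VersioningConfiguration={'Status': 'Enabled'})

# The role must grant Amazon S3 permission to replicate objects on your behalf.
s3.put_bucket_replication(
    Bucket='source-bucket-example',
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::123456789012:role/s3-crr-role',  # hypothetical role
        'Rules': [{'ID': 'ReplicateAll', 'Prefix': '', 'Status': 'Enabled',
                   'Destination': {'Bucket': 'arn:aws:s3:::destination-bucket-example'}}]
    }
)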
References: https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html https://aws.amazon.com/blogs/aws/new-cross-region-replication-for-amazon-s3/ Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/", "references": "" }, { "question": ": An online stock trading system is hosted in AWS and uses an Auto Scaling group of EC2 instances, an RDS database, and an Amazon ElastiCache for Redis. You need to improve the data security of your in-memory data store by requiring the user to enter a password before they are granted permission to execute Redis commands. Which of the following should you do to meet the above requirement?", "options": [ "A. Do nothing. This feature is already enabled by default.", "B. Enable the in-transit encryption for Redis replication groups.", "C. Create a new Redis replication group and set the AtRestEncryptionEnabled parameter to", "D. None of the above." ], "correct": "", "explanation": "Explanation Using the Redis AUTH command can improve data security by requiring the user to enter a password before they are granted permission to execute Redis commands on a password-protected Redis server. Hence, the correct answer is to authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled. To require that users enter a password on a password-protected Redis server, include the parameter --auth-token with the correct password when you create your replication group or cluster and on all subsequent commands to the replication group or cluster. Enabling the in-transit encryption for Redis replication groups is incorrect because, although in-transit encryption is part of the solution, it is missing the most important piece, which is the Redis AUTH option. Creating a new Redis replication group and setting the AtRestEncryptionEnabled parameter to true is incorrect because the Redis At-Rest Encryption feature only secures the data inside the in-memory data store. You have to use the Redis AUTH option instead. The option that says: Do nothing. This feature is already enabled by default is incorrect because the Redis AUTH option is disabled by default. References: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth.html https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/encryption.html Check out this Amazon ElastiCache Cheat Sheet: https://tutorialsdojo.com/amazon-elasticache/ Redis Append-Only Files vs Redis Replication: https://tutorialsdojo.com/redis-append-only-files-vs-redis-replication/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/
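As a rough boto3 counterpart of the CLI flags mentioned above, a replication group can be created with in-transit encryption and an AUTH token; the identifiers, node type, and token value here are placeholder assumptions:

import boto3

elasticache = boto3.client('elasticache')
elasticache.create_replication_group(
    ReplicationGroupId='trading-redis',                 # hypothetical ID
    ReplicationGroupDescription='Password-protected Redis',
    Engine='redis',
    CacheNodeType='cache.t3.micro',                     # placeholder node type
    NumCacheClusters=2,
    TransitEncryptionEnabled=True,                      # in-transit encryption, required for AUTH
    AuthToken='REPLACE-WITH-A-LONG-RANDOM-TOKEN'        # the Redis AUTH password
)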
", "references": "" }, { "question": ": A mobile application stores pictures in Amazon Simple Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for this scenario? A. SAML-based Identity Federation", "options": [ "B. Web Identity Federation", "C. Cross-Account Access", "D. AWS Identity and Access Management roles" ], "correct": "B. Web Identity Federation", "explanation": "Explanation With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure because you don't have to embed and distribute long-term security credentials with your application.
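As a minimal sketch of that token exchange (the role ARN is hypothetical, and the OIDC token is assumed to be supplied by the identity provider and passed in through an environment variable):

import boto3, os

sts = boto3.client('sts')
oidc_token = os.environ['OIDC_ID_TOKEN']  # ID token issued by the OIDC-compatible IdP (assumption)
creds = sts.assume_role_with_web_identity(
    RoleArn='arn:aws:iam::123456789012:role/MobileAppS3Role',  # hypothetical role
    RoleSessionName='mobile-user-session',
    WebIdentityToken=oidc_token
)['Credentials']
# The temporary credentials can now be used to access the S3 bucket.
s3 = boto3.client('s3',
                  aws_access_key_id=creds['AccessKeyId'],
                  aws_secret_access_key=creds['SecretAccessKey'],
                  aws_session_token=creds['SessionToken'])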
", "references": "http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-management-iam/" }, { "question": ": A web application, which is hosted in your on-premises data center and uses a MySQL database, must be migrated to AWS Cloud. You need to ensure that the network traffic to and from your RDS database instance is encrypted using SSL. For improved security, you have to use the profile credentials specific to your EC2 instance to access your database, instead of a password. Which of the following should you do to meet the above requirement?", "options": [ "A. Launch a new RDS database instance with the Backtrack feature enabled.", "B. Set up an RDS database and enable the IAM DB Authentication.", "C. Configure your RDS database to enable encryption.", "D. Launch the mysql client using the --ssl-ca parameter when connecting to the database." ], "correct": "B. Set up an RDS database and enable the IAM DB Authentication.", "explanation": "Explanation You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token. An authentication token is a unique string of characters that Amazon RDS generates on request. Authentication tokens are generated using AWS Signature Version 4. Each token has a lifetime of 15 minutes. You don't need to store user credentials in the database because authentication is managed externally using IAM. You can also still use standard database authentication. IAM database authentication provides the following benefits: - Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL). - You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance. - For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security. Hence, setting up an RDS database and enabling the IAM DB Authentication is the correct answer. Launching a new RDS database instance with the Backtrack feature enabled is incorrect because the Backtrack feature simply \"rewinds\" the DB cluster to the time you specify. Backtracking is not a replacement for backing up your DB cluster so that you can restore it to a point in time, although you can easily undo mistakes with it if you perform a destructive action, such as a DELETE without a WHERE clause. Configuring your RDS database to enable encryption is incorrect because this encryption feature in RDS is mainly for securing your Amazon RDS DB instances and snapshots at rest. The data that is encrypted at rest includes the underlying storage for DB instances, its automated backups, Read Replicas, and snapshots. Launching the mysql client using the --ssl-ca parameter when connecting to the database is incorrect because even though the --ssl-ca parameter can provide an SSL connection to your database, you still need IAM database authentication to use the profile credentials specific to your EC2 instance instead of a password.
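As a short sketch of the token flow described above, boto3 can generate the short-lived authentication token that replaces the database password; the endpoint, port, user name, and region are placeholder assumptions:

import boto3

rds = boto3.client('rds')
# The token is valid for 15 minutes and is used as the password over an SSL connection.
token = rds.generate_db_auth_token(
    DBHostname='mydb.abcdefg.us-east-1.rds.amazonaws.com',  # hypothetical endpoint
    Port=3306,
    DBUsername='app_user',                                  # hypothetical IAM-enabled DB user
    Region='us-east-1'
)
print(token[:40], '(truncated)')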
", "references": "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html Check out this Amazon RDS cheat sheet: https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/" }, { "question": ": A company has several unencrypted EBS snapshots in their VPC. The Solutions Architect must ensure that all of the new EBS volumes restored from the unencrypted snapshots are automatically encrypted. What should be done to accomplish this requirement?", "options": [ "A. Enable the EBS Encryption By Default feature for the AWS Region.", "B. Enable the EBS Encryption By Default feature for specific EBS volumes.", "C. Launch new EBS volumes and encrypt them using an asymmetric customer master key", "D. Launch new EBS volumes and specify the symmetric customer master key (CMK) for" ], "correct": "A. Enable the EBS Encryption By Default feature for the AWS Region.", "explanation": "Explanation You can configure your AWS account to enforce the encryption of the new EBS volumes and snapshot copies that you create. For example, Amazon EBS encrypts the EBS volumes created when you launch an instance and the snapshots that you copy from an unencrypted snapshot. Encryption by default has no effect on existing EBS volumes or snapshots. The following are important considerations in EBS encryption: - Encryption by default is a Region-specific setting. If you enable it for a Region, you cannot disable it for individual volumes or snapshots in that Region. - When you enable encryption by default, you can launch an instance only if the instance type supports EBS encryption. - Amazon EBS does not support asymmetric CMKs. - When migrating servers using AWS Server Migration Service (SMS), do not turn on encryption by default. If encryption by default is already on and you are experiencing delta replication failures, turn off encryption by default and enable AMI encryption when you create the replication job instead. - You cannot change the CMK that is associated with an existing snapshot or encrypted volume. However, you can associate a different CMK during a snapshot copy operation so that the resulting copied snapshot is encrypted by the new CMK. Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt them by creating either a new volume or a new snapshot. If you enabled encryption by default, Amazon EBS encrypts the resulting new volume or snapshot using your default key for EBS encryption. Even if you have not enabled encryption by default, you can enable encryption when you create an individual volume or snapshot. Whether you enable encryption by default or in individual creation operations, you can override the default key for EBS encryption and use a symmetric customer-managed CMK. Hence, the correct answer is: Enable the EBS Encryption By Default feature for the AWS Region. The option that says: Launch new EBS volumes and encrypt them using an asymmetric customer master key (CMK) is incorrect because Amazon EBS does not support asymmetric CMKs. To encrypt an EBS snapshot, you need to use a symmetric CMK. The option that says: Launch new EBS volumes and specify the symmetric customer master key (CMK) for encryption is incorrect. Although this solution will enable data encryption, the process is manual and can potentially cause some unencrypted EBS volumes to be launched. A better solution is to enable the EBS Encryption By Default feature, since the scenario states that all of the new EBS volumes restored from the unencrypted snapshots must be automatically encrypted. The option that says: Enable the EBS Encryption By Default feature for specific EBS volumes is incorrect because the Encryption By Default feature is a Region-specific setting and thus, you can't enable it for selected EBS volumes only.
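As a small illustration, the account-level switch can be flipped per Region with boto3 (the region shown is an assumption):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # encryption by default is a per-Region setting
ec2.enable_ebs_encryption_by_default()
# Verify the setting; new volumes, including those restored from unencrypted snapshots, will now be encrypted.
print(ec2.get_ebs_encryption_by_default()['EbsEncryptionByDefault'])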
References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default https://docs.aws.amazon.com/kms/latest/developerguide/services-ebs.html Check out this Amazon EBS Cheat Sheet: https://tutorialsdojo.com/amazon-ebs/ Comparison of Amazon S3 vs Amazon EBS vs Amazon EFS: https://tutorialsdojo.com/amazon-s3-vs-ebs-vs-efs/", "references": "" }, { "question": ": An application is hosted in an Auto Scaling group of EC2 instances and a Microsoft SQL Server on Amazon RDS. There is a requirement that all in-flight data between your web servers and RDS should be secured. Which of the following options is the MOST suitable solution that you should implement? (Select TWO.)", "options": [ "A. Force all connections to your DB instance to use SSL by setting the rds.force_ssl parameter", "B. Download the Amazon RDS Root CA certificate. Import the certificate to your servers and", "C. Enable the IAM DB authentication in RDS using the AWS Management Console.", "D. Configure the security groups of your EC2 instances and RDS to only allow traffic to and" ], "correct": "", "explanation": "Explanation You can use Secure Sockets Layer (SSL) to encrypt connections between your client applications and your Amazon RDS DB instances running Microsoft SQL Server. SSL support is available in all AWS Regions for all supported SQL Server editions. When you create an SQL Server DB instance, Amazon RDS creates an SSL certificate for it. The SSL certificate includes the DB instance endpoint as the Common Name (CN) for the SSL certificate to guard against spoofing attacks. There are two ways to use SSL to connect to your SQL Server DB instance: - Force SSL for all connections -- this happens transparently to the client, and the client doesn't have to do any work to use SSL. - Encrypt specific connections -- this sets up an SSL connection from a specific client computer, and you must do work on the client to encrypt connections. You can force all connections to your DB instance to use SSL, or you can encrypt connections from specific client computers only. To use SSL from a specific client, you must obtain certificates for the client computer, import the certificates on the client computer, and then encrypt the connections from the client computer. If you want to force SSL, use the rds.force_ssl parameter. By default, the rds.force_ssl parameter is set to false. Set the rds.force_ssl parameter to true to force connections to use SSL. The rds.force_ssl parameter is static, so after you change the value, you must reboot your DB instance for the change to take effect. Hence, the correct answers for this scenario are the options that say: - Force all connections to your DB instance to use SSL by setting the rds.force_ssl parameter to true. Once done, reboot your DB instance. - Download the Amazon RDS Root CA certificate. Import the certificate to your servers and configure your application to use SSL to encrypt the connection to RDS. Specifying the TDE option in an RDS option group that is associated with that DB instance to enable transparent data encryption (TDE) is incorrect because transparent data encryption (TDE) is primarily used to encrypt stored data on your DB instances running Microsoft SQL Server, not the data that is in transit. Enabling the IAM DB authentication in RDS using the AWS Management Console is incorrect because IAM database authentication is only supported in the MySQL and PostgreSQL database engines. With IAM database authentication, you don't need to use a password when you connect to a DB instance; instead, you use an authentication token. Configuring the security groups of your EC2 instances and RDS to only allow traffic to and from port 443 is incorrect because it is not enough to do this. You need to either force all connections to your DB instance to use SSL, or encrypt connections from specific client computers, as mentioned above.
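As a hedged sketch of the first step, rds.force_ssl can be set in a custom parameter group via boto3; the group name is a hypothetical placeholder, and the instance still needs a reboot because the parameter is static:

import boto3

rds = boto3.client('rds')
rds.modify_db_parameter_group(
    DBParameterGroupName='custom-sqlserver-params',  # hypothetical parameter group attached to the instance
    Parameters=[{
        'ParameterName': 'rds.force_ssl',
        'ParameterValue': '1',
        'ApplyMethod': 'pending-reboot'  # static parameter: takes effect after a reboot
    }]
)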
References: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Concepts.General.SSL.Using.html https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.TDE.html https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html Check out this Amazon RDS Cheat Sheet: https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/", "references": "" }, { "question": ": In a tech company that you are working for, there is a requirement to allow one IAM user to modify the configuration of one of your Elastic Load Balancers (ELB) which is used in a specific project. Each developer in your company has an individual IAM user and they usually move from one project to another. Which of the following would be the best way to allow this access?", "options": [ "A. Provide the user temporary access to the root account for 8 hours only. Afterwards, change", "B. Create a new IAM user that has access to modify the ELB. Delete that user when the work", "C. Open up the port that ELB uses in a security group and then give the user access to that", "D. Create a new IAM Role which will be assumed by the IAM user. Attach a policy allowing" ], "correct": "D. Create a new IAM Role which will be assumed by the IAM user. Attach a policy allowing", "explanation": "Explanation In this scenario, the best option is to use an IAM Role to provide access. You can create a new IAM Role and allow the IAM user to assume it. Attach a policy allowing access to modify the ELB and, once the work is done, remove the role from the user. An IAM role is similar to a user in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials (password or access keys) associated with it. Instead, if a user assumes a role, temporary security credentials are created dynamically and provided to the user. You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. For example, you might want to grant users in your AWS account access to resources they don't usually have, or grant users in one AWS account access to resources in another account. Or you might want to allow a mobile app to use AWS resources, but not want to embed AWS keys within the app (where they can be difficult to rotate and where users can potentially extract them). Sometimes you want to give AWS access to users who already have identities defined outside of AWS, such as in your corporate directory. Or, you might want to grant access to your account to third parties so that they can perform an audit on your resources.", "references": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html Check out this AWS IAM Cheat Sheet: https://tutorialsdojo.com/aws-identity-and-access-management-iam/" }, { "question": ": A startup is building an AI-based face recognition application in AWS, where they store millions of images in an S3 bucket. As the Solutions Architect, you have to ensure that each and every image uploaded to their system is stored without any issues. What is the correct indication that an object was successfully stored when you put objects in Amazon S3?", "options": [ "A. You will receive an email from Amazon SNS informing you that the object is successfully stored.", "B. Amazon S3 has 99.999999999% durability hence, there is no need to confirm that data was", "C. You will receive an SMS from Amazon SNS informing you that the object is successfully", "D. HTTP 200 result code and MD5 checksum." ], "correct": "D. HTTP 200 result code and MD5 checksum.", "explanation": "Explanation If you triggered an S3 API call and got an HTTP 200 result code and MD5 checksum, then the upload is considered successful. The S3 API will return an error code in case the upload is unsuccessful. The option that says: Amazon S3 has 99.999999999% durability hence, there is no need to confirm that data was inserted is incorrect because although S3 is durable, that is not an assurance that all objects uploaded using S3 API calls will be successful. The options that say: You will receive an SMS from Amazon SNS informing you that the object is successfully stored and You will receive an email from Amazon SNS informing you that the object is successfully stored are both incorrect because you don't receive an SMS or an email notification by default, unless you add an event notification.
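As a hedged sketch of that check (the bucket name and payload are hypothetical), an upload can be confirmed by the 200 status code and by comparing the returned ETag with a locally computed MD5 digest; note that this simple comparison only holds for single-part uploads without SSE-KMS:

import boto3, hashlib

s3 = boto3.client('s3')
data = b'image bytes here'  # placeholder payload
resp = s3.put_object(Bucket='face-images-example', Key='photos/img-001.jpg', Body=data)
# A 200 response means S3 stored the object; the ETag can be checked against the local MD5.
print(resp['ResponseMetadata']['HTTPStatusCode'])
print(resp['ETag'].strip('"') == hashlib.md5(data).hexdigest())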
", "references": "https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html Check out this Amazon S3 Cheat Sheet: https://tutorialsdojo.com/amazon-s3/" }, { "question": ": A company has a VPC for its human resource department, and another VPC located in a different region for their finance department. The Solutions Architect must redesign the architecture to allow the finance department to access all resources that are in the human resource department, and vice versa. Which type of networking connection in AWS should the Solutions Architect set up to satisfy the above requirement?", "options": [ "A. VPN Connection", "B. AWS Cloud Map", "C. VPC Endpoint", "D. Inter-Region VPC Peering" ], "correct": "D. Inter-Region VPC Peering", "explanation": "Explanation Amazon Virtual Private Cloud (Amazon VPC) offers a comprehensive set of virtual networking capabilities that provide AWS customers with many options for designing and implementing networks on the AWS cloud. With Amazon VPC, you can provision logically isolated virtual networks to host your AWS resources. You can create multiple VPCs within the same region or in different regions, in the same account or in different accounts. This is useful for customers who require multiple VPCs for security, billing, regulatory, or other purposes, and who want to integrate AWS resources between their VPCs more easily. More often than not, these different VPCs need to communicate privately and securely with one another for sharing data or applications. A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they were within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region. AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection and does not rely on a separate piece of physical hardware. There is no single point of failure for communication or a bandwidth bottleneck. Hence, the correct answer is: Inter-Region VPC Peering. AWS Cloud Map is incorrect because this is simply a cloud resource discovery service. With Cloud Map, you can define custom names for your application resources, and it maintains the updated location of these dynamically changing resources, which increases your application availability, but it does not connect VPCs. VPN Connection is incorrect. This is technically possible, but since you already have two VPCs on AWS, it is easier to set up a VPC peering connection. The bandwidth is also better for VPC peering since the connection goes through the AWS backbone network instead of the public Internet, as it would with a VPN connection. VPC Endpoint is incorrect because this is primarily used to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink, not to another VPC.
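As an illustration only, an inter-Region peering connection can be requested and accepted with boto3; all of the VPC IDs and regions below are placeholder assumptions:

import boto3

ec2_hr = boto3.client('ec2', region_name='us-east-1')        # HR VPC's region (assumption)
peering = ec2_hr.create_vpc_peering_connection(
    VpcId='vpc-0hr0000000000000a',                            # hypothetical HR VPC
    PeerVpcId='vpc-0fin00000000000b',                         # hypothetical Finance VPC
    PeerRegion='eu-west-1'                                    # Finance VPC's region (assumption)
)['VpcPeeringConnection']

# The request must be accepted from the peer Region; route tables on both sides still need routes.
ec2_fin = boto3.client('ec2', region_name='eu-west-1')
ec2_fin.accept_vpc_peering_connection(VpcPeeringConnectionId=peering['VpcPeeringConnectionId'])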
Which AWS service should you use for this scenario?", "options": [ "A. Amazon Simple Queue Service", "B. Amazon Kinesis", "C. Amazon AppStream", "D. AWS Data Pipeline" ], "correct": "B. Amazon Kinesis", "explanation": "Explanation Amazon Kinesis makes it easy to collect, process, a nd analyze real-time, streaming data so you can get timely insights and react quickly to new informatio n. It offers key capabilities to cost-effectively p rocess streaming data at any scale, along with the flexibi lity to choose the tools that best suit the require ments of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine le arning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and responds instantly instead of having to wait until all your data are collected before the p rocessing can begin.", "references": "https://aws.amazon.com/kinesis/ 117 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Check out this Amazon Kinesis Cheat Sheet: https://tutorialsdojo.com/amazon-kinesis/" }, { "question": ": A multinational manufacturing company has multiple accounts in AWS to separate their various departments such as finance, human resources, engin eering and many others. There is a requirement to ensure that certain access to services and actions are properly controlled to comply with the security policy of the company. As the Solutions Architect, which is the most suita ble way to set up the multi-account AWS environment of the company?", "options": [ "A. Use AWS Organizations and Service Control Poli cies to control services on each account.", "B. Set up a common IAM policy that can be applied across all AWS accounts.", "C. Connect all departments by setting up a cross- account access to each of the AWS accounts", "D. Provide access to externally authenticated use rs via Identity Federation. Set up an IAM role" ], "correct": "A. Use AWS Organizations and Service Control Poli cies to control services on each account.", "explanation": "Explanation Using AWS Organizations and Service Control Policie s to control services on each account is the correct answer. Refer to the diagram below: 118 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam AWS Organizations offers policy-based management fo r multiple AWS accounts. With Organizations, you can create groups of accounts, automate account cre ation, apply and manage policies for those groups. Organizations enables you to centrally manage polic ies across multiple accounts, without requiring cus tom scripts and manual processes. It allows you to crea te Service Control Policies (SCPs) that centrally c ontrol AWS service use across multiple AWS accounts. Setting up a common IAM policy that can be applied across all AWS accounts is incorrect because it is not possible to create a common IAM policy for mult iple AWS accounts. The option that says: Connect all departments by se tting up a cross-account access to each of the AWS accounts of the company. Create and attach IAM poli cies to your resources based on their respective departments to control access is incorrect because although you can set up cross-account access to eac h department, this entails a lot of configuration com pared with using AWS Organizations and Service Control Policies (SCPs). 
Cross-account access would be a more suitable choice if you only have two accounts to manage, but not for multiple accounts. The option that says: Provide access to externally authenticated users via Identity Federation. Set up an IAM role to specify permissions for users from e ach department whose identity is federated from your organization or a third-party identity provide r is incorrect as this option is focused on the Ide ntity Federation authentication set up for your AWS accou nts but not the IAM policy management for multiple AWS accounts. A combination of AWS Organizations an d Service Control Policies (SCPs) is a better choice compared to this option.", "references": "119 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam https://aws.amazon.com/organizations/ Check out this AWS Organizations Cheat Sheet: https://tutorialsdojo.com/aws-organizations/ Service Control Policies (SCP) vs IAM Policies: https://tutorialsdojo.com/service-control-policies- scp-vs-iam-policies/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-service s/" }, { "question": ": A company deployed a fleet of Windows-based EC2 ins tances with IPv4 addresses launched in a private subnet. Several software installed in the EC2 insta nces are required to be updated via the Internet. Which of the following services can provide the fir m a highly available solution to safely allow the instances to fetch the software patches from the In ternet but prevent outside network from initiating a connection?", "options": [ "A. VPC Endpoint", "B. NAT Gateway", "C. NAT Instance", "D. Egress-Only Internet Gateway" ], "correct": "B. NAT Gateway", "explanation": "Explanation AWS offers two kinds of NAT devices -- a NAT gatewa y or a NAT instance. It is recommended to use NAT gateways, as they provide better availability a nd bandwidth over NAT instances. The NAT Gateway service is also a managed service that does not req uire your administration efforts. A NAT instance is launched from a NAT AMI. Just like a NAT instance, you can use a network add ress translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiat ing a connection with those instances. Here is a diagram showing the differences between N AT gateway and NAT instance: 120 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam Egress-Only Internet Gateway is incorrect because t his is primarily used for VPCs that use IPv6 to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those in stances, just like what NAT Instance and NAT Gatewa y do. The scenario explicitly says that the EC2 insta nces are using IPv4 addresses which is why Egress-o nly Internet gateway is invalid, even though it can pro vide the required high availability. 121 of 124 AWS Certified Solutions Architect Associate ( SAA-C 02 / SAA-C03 ) Exam VPC Endpoint is incorrect because this simply enabl es you to privately connect your VPC to supported AWS services and VPC endpoint services powered by P rivateLink without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect c onnection. 
", "references": "https://aws.amazon.com/organizations/ Check out this AWS Organizations Cheat Sheet: https://tutorialsdojo.com/aws-organizations/ Service Control Policies (SCP) vs IAM Policies: https://tutorialsdojo.com/service-control-policies-scp-vs-iam-policies/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/" }, { "question": ": A company deployed a fleet of Windows-based EC2 instances with IPv4 addresses launched in a private subnet. Several software packages installed on the EC2 instances are required to be updated via the Internet. Which of the following services can provide the firm a highly available solution to safely allow the instances to fetch the software patches from the Internet but prevent outside networks from initiating a connection?", "options": [ "A. VPC Endpoint", "B. NAT Gateway", "C. NAT Instance", "D. Egress-Only Internet Gateway" ], "correct": "B. NAT Gateway", "explanation": "Explanation AWS offers two kinds of NAT devices -- a NAT gateway or a NAT instance. It is recommended to use NAT gateways, as they provide better availability and bandwidth over NAT instances. The NAT Gateway service is also a managed service that does not require your administration efforts, whereas a NAT instance is launched from a NAT AMI. Just like a NAT instance, you can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances. Egress-Only Internet Gateway is incorrect because this is primarily used for VPCs that use IPv6 to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances, just like what a NAT Instance and a NAT Gateway do. The scenario explicitly says that the EC2 instances are using IPv4 addresses, which is why an egress-only Internet gateway is invalid, even though it can provide the required high availability. VPC Endpoint is incorrect because this simply enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. NAT Instance is incorrect because although this can also enable instances in a private subnet to connect to the Internet or other AWS services and prevent the Internet from initiating a connection with those instances, it is not as highly available compared to a NAT Gateway. References: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html Check out this Amazon VPC Cheat Sheet: https://tutorialsdojo.com/amazon-vpc/
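As an illustration only, a NAT gateway can be created in a public subnet and the private subnet's Internet-bound traffic routed through it; all of the IDs below are placeholder assumptions:

import boto3

ec2 = boto3.client('ec2')
eip = ec2.allocate_address(Domain='vpc')
nat = ec2.create_nat_gateway(
    SubnetId='subnet-0public0000000000',        # hypothetical public subnet
    AllocationId=eip['AllocationId']
)['NatGateway']
# Send the private subnet's outbound traffic through the NAT gateway (wait for it to become available first).
ec2.create_route(
    RouteTableId='rtb-0private000000000',       # hypothetical private route table
    DestinationCidrBlock='0.0.0.0/0',
    NatGatewayId=nat['NatGatewayId']
)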
", "references": "" }, { "question": ": A company developed a financial analytics web application hosted in a Docker container using the MEAN (MongoDB, Express.js, AngularJS, and Node.js) stack. You want to easily port that web application to AWS Cloud using a service that can automatically handle all the tasks such as balancing load, auto-scaling, monitoring, and placing your containers across your cluster. Which of the following services can be used to fulfill this requirement?", "options": [ "A. OpsWorks", "B. ECS", "C. AWS Elastic Beanstalk", "D. AWS Code Deploy" ], "correct": "C. AWS Elastic Beanstalk", "explanation": "Explanation AWS Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run. By using Docker with Elastic Beanstalk, you have an infrastructure that automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. You can manage your web application in an environment that supports the range of services that are integrated with Elastic Beanstalk, including but not limited to VPC, RDS, and IAM. Hence, the correct answer is: AWS Elastic Beanstalk. ECS is incorrect. Although it also provides Service Auto Scaling, Service Load Balancing, and Monitoring with CloudWatch, these features are not automatically enabled by default, unlike with Elastic Beanstalk. Take note that the scenario requires a service that will automatically handle all the tasks such as balancing load, auto-scaling, monitoring, and placing your containers across your cluster. You would have to manually configure these things if you wish to use ECS, whereas with Elastic Beanstalk you can manage your web application in an environment that supports this range of services more easily. OpsWorks and AWS CodeDeploy are incorrect because these are primarily used for application deployment and configuration only, without providing load balancing, auto-scaling, monitoring, or ECS cluster management.", "references": "https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html Check out this AWS Elastic Beanstalk Cheat Sheet: https://tutorialsdojo.com/aws-elastic-beanstalk/ AWS Elastic Beanstalk Overview: https://www.youtube.com/watch?v=rx7e7Fej1Oo Elastic Beanstalk vs CloudFormation vs OpsWorks vs CodeDeploy: https://tutorialsdojo.com/elastic-beanstalk-vs-cloudformation-vs-opsworks-vs-codedeploy/ Comparison of AWS Services Cheat Sheets: https://tutorialsdojo.com/comparison-of-aws-services/" } ]