AWS Architect

[vc_row][vc_column][vc_custom_heading text=”Aws – Architect – Interview Questions and Answers” font_container=”tag:h2|text_align:center” css=”.vc_custom_1561641203688{background-color: #5fa4e4 !important;}”][/vc_column][/vc_row][vc_row full_width=”stretch_row” css=”.vc_custom_1559286923229{background-color: #f6f6f7 !important;}”][vc_column width=”1/2″][vc_tta_accordion color=”peacoc” active_section=”1″][vc_tta_section title=” I have some private servers on my premises, also I have distributed some of my workload on the public cloud, what is this architecture called?” tab_id=”1559286383409-ab730398-6c03″][vc_column_text]Explanation: This type of architecture would be a hybrid cloud. Why? Because we are using both, the public cloud, and your on premises servers i.e the private cloud. To make this hybrid architecture easy to use, wouldn’t it be better if your private and public cloud were all on the same network(virtually). This is established by including your public cloud servers in a virtual private cloud, and connecting this virtual cloud with your on premise servers using a VPN(Virtual Private Network).[/vc_column_text][/vc_tta_section][vc_tta_section title=”You have a video trans-coding application. The videos are processed according to a queue. If the processing of a video is interrupted in one instance, it is resumed in another instance. Currently there is a huge back-log of videos which needs to be processed, for this you need to add more instances, but you need these instances only until your backlog is reduced. Which of these would be an efficient way to do it?” tab_id=”1559286522681-3bf94e12-e7b7″][vc_column_text]You should be using an On Demand instance for the same. Why? First of all, the workload has to be processed now, meaning it is urgent, secondly you don’t need them once your backlog is cleared, therefore Reserved Instance is out of the picture, and since the work is urgent, you cannot stop the work on your instance just because the spot price spiked, therefore Spot Instances shall also not be used. Hence On-Demand instances shall be the right choice in this case.[/vc_column_text][/vc_tta_section][vc_tta_section title=”You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost effective way.” tab_id=”1561382593569-b1979b66-b066″][vc_column_text]Explanation: Since the work we are addressing here is not continuous, a reserved instance shall be idle at times, same goes with On Demand instances. Also it does not make sense to launch an On Demand instance whenever work comes up, since it is expensive. Hence Spot Instances will be the right fit because of their low rates and no long term commitments.[/vc_column_text][/vc_tta_section][vc_tta_section title=”How is stopping and terminating an instance different from each other?” tab_id=”1561382595833-dd54d407-26c0″][vc_column_text]Starting, stopping and terminating are the three states in an EC2 instance, let’s discuss them in detail:

  • Stopping and Starting an instance: When an instance is stopped, the instance performs a normal shutdown and then transitions to a stopped state. All of its Amazon EBS volumes remain attached, and you can start the instance again at a later time. You are not charged for additional instance hours while the instance is in a stopped state.
  • Terminating an instance: When an instance is terminated, the instance performs a normal shutdown, then the attached Amazon EBS volumes are deleted unless the volume’s deleteOnTermination attribute is set to false. The instance itself is also deleted, and you can’t start the instance again at a later time.
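As a quick illustration, here is a minimal boto3 sketch of the two operations above (the region and instance ID are placeholders, and boto3 with valid credentials is assumed):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Stop: the instance moves to the 'stopped' state, its EBS volumes stay
# attached, and it can be started again later.
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])

# Terminate: the instance is shut down and deleted; EBS volumes whose
# DeleteOnTermination attribute is true are removed, and the instance
# cannot be started again.
ec2.terminate_instances(InstanceIds=["i-0123456789abcdef0"])
```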

[/vc_column_text][/vc_tta_section][vc_tta_section title=”If I want my instance to run on a single-tenant hardware, which value do I have to set the instance’s tenancy attribute to?” tab_id=”1561382597303-5168678c-55b9″][vc_column_text]Explanation: The Instance tenancy attribute should be set to Dedicated Instance. The rest of the values are invalid.[/vc_column_text][/vc_tta_section][vc_tta_section title=” When will you incur costs with an Elastic IP address (EIP)?” tab_id=”1561382598718-1fee5a6b-29dd”][vc_column_text]Explanation: You are not charged, if only one Elastic IP address is attached with your running instance. But you do get charged in the following conditions:

  • When you use more than one Elastic IP with your instance.
  • When your Elastic IP is attached to a stopped instance.
  • When your Elastic IP is not attached to any instance.
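To illustrate the third case, here is a rough boto3 sketch (the region is a placeholder) that lists Elastic IPs not associated with anything, which are the ones incurring charges while idle:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for address in ec2.describe_addresses()["Addresses"]:
    # An address with no AssociationId is not attached to any instance
    # or network interface, so it is billable while it sits idle.
    if "AssociationId" not in address:
        print("Idle (billable) Elastic IP:", address["PublicIp"])
```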

[/vc_column_text][/vc_tta_section][vc_tta_section title=”How is a Spot instance different from an On-Demand instance or Reserved Instance?” tab_id=”1561382602352-48d936eb-64df”][vc_column_text]First of all, let’s understand that Spot Instance, On-Demand instance and Reserved Instances are all models for pricing. Moving along, spot instances provide the ability for customers to purchase compute capacity with no upfront commitment, at hourly rates usually lower than the On-Demand rate in each region. Spot instances are just like bidding, the bidding price is called Spot Price. The Spot Price fluctuates based on supply and demand for instances, but customers will never pay more than the maximum price they have specified. If the Spot Price moves higher than a customer’s maximum price, the customer’s EC2 instance will be shut down automatically. But the reverse is not true, if the Spot prices come down again, your EC2 instance will not be launched automatically, one has to do that manually.  In Spot and On demand instance, there is no commitment for the duration from the user side, however in reserved instances one has to stick to the time period that he has chosen.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Are the Reserved Instances available for Multi-AZ Deployments?” tab_id=”1561382603416-a2e0c7df-e6f8″][vc_column_text]Explanation: Reserved Instances is a pricing model, which is available for all instance types in EC2.[/vc_column_text][/vc_tta_section][vc_tta_section title=”How to use the processor state control feature available on the c4.8xlarge instance?” tab_id=”1561382604362-41fc1dd4-d143″][vc_column_text]The processor state control consists of 2 states:

  • The C state – sleep states ranging from C0 to C6, with C6 being the deepest sleep state for a processor core.
  • The P state – performance states ranging from P0 (the highest frequency) down to P15 (the lowest possible frequency).

Why do the C state and P state matter? Processors have cores, and these cores need thermal headroom to boost their performance. Since all the cores sit on the same processor, the temperature has to be kept at an optimal level so that every core can perform at its best.

How do these states help? Putting a core into a sleep state reduces the overall temperature of the processor, so the other cores can perform better. The same can be coordinated across cores, so the processor boosts as many cores as it can by putting other cores to sleep at the right time, giving an overall performance boost.

Concluding, the C and P state can be customized in some EC2 instances like the c4.8xlarge instance and thus you can customize the processor according to your workload.[/vc_column_text][/vc_tta_section][vc_tta_section title=”What kind of network performance parameters can you expect when you launch instances in cluster placement group?” tab_id=”1561382605426-bedbe54f-bb01″][vc_column_text]The network performance depends on the instance type and network performance specification, if launched in a placement group you can expect up to

  • 10 Gbps for single-flow traffic,
  • 20 Gbps for multi-flow traffic, i.e. full duplex,
  • network traffic outside the placement group is limited to 5 Gbps (full duplex).
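For context, a hedged boto3 sketch of creating a cluster placement group and launching instances into it (the AMI ID, group name and instance type are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a cluster placement group so instances land close together
# on the network for low latency and high throughput.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch instances into the placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c4.8xlarge",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster"},
)
```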

[/vc_column_text][/vc_tta_section][vc_tta_section title=”To deploy a 4 node cluster of Hadoop in AWS which instance type can be used?” tab_id=”1561382606689-6de5471f-2224″][vc_column_text]First let’s understand what actually happens in a Hadoop cluster, the Hadoop cluster follows a master slave concept. The master machine processes all the data, slave machines store the data and act as data nodes. Since all the storage happens at the slave, a higher capacity hard disk would be recommended and since master does all the processing, a higher RAM and a much better CPU is required. Therefore, you can select the configuration of your machine depending on your workload. For e.g. – In this case c4.8xlarge will be preferred for master machine whereas for slave machine we can select i2.large instance. If you don’t want to deal with configuring your instance and installing hadoop cluster manually, you can straight away launch an Amazon EMR (Elastic Map Reduce) instance which automatically configures the servers for you. You dump your data to be processed in S3, EMR picks it from there, processes it, and dumps it back into S3.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Where do you think an AMI fits, when you are designing an architecture for a solution?” tab_id=”1561382593038-8e7f1218-a7ac”][vc_column_text]AMIs(Amazon Machine Images) are like templates of virtual machines and an instance is derived from an AMI. AWS offers pre-baked AMIs which you can choose while you are launching an instance, some AMIs are not free, therefore can be bought from the AWS Marketplace. You can also choose to create your own custom AMI which would help you save space on AWS. For example if you don’t need a set of software on your installation, you can customize your AMI to do that. This makes it cost efficient, since you are removing the unwanted things.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Is one Elastic IP address enough for every instance that I have running?” tab_id=”1561382592598-bfa5a635-55f4″][vc_column_text]Depends! Every instance comes with its own private and public address. The private address is associated exclusively with the instance and is returned  to Amazon EC2 only when it is stopped or terminated. Similarly, the public address is associated exclusively with the instance until it is stopped or terminated. However, this can be replaced by the Elastic IP address, which stays with the instance as long as the user doesn’t manually detach it. But what if you are hosting multiple websites on your EC2 server, in that case you may require more than one Elastic IP address.[/vc_column_text][/vc_tta_section][vc_tta_section title=”What are the best practices for Security in Amazon EC2?” tab_id=”1561382592102-02f6742f-af9d”][vc_column_text]There are several best practices to secure Amazon EC2. A few of them are given below:

  • Use AWS Identity and Access Management (IAM) to control access to your AWS resources.
  • Restrict access by only allowing trusted hosts or networks to access ports on your instance.
  • Review the rules in your security groups regularly, and ensure that you apply the principle of least privilege – only open up permissions that you require.
  • Disable password-based logins for instances launched from your AMI. Passwords can be found or cracked, and are a security risk.
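As an example of the “restrict access to trusted hosts or networks” practice above, a minimal boto3 sketch that opens SSH only to a single trusted CIDR (the security group ID and CIDR are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",    # placeholder security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{
            "CidrIp": "203.0.113.0/24",  # trusted corporate network only
            "Description": "SSH restricted to the office network",
        }],
    }],
)
```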

[/vc_column_text][/vc_tta_section][vc_tta_section title=” You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which method will ensure that all objects uploaded to the bucket are set to public read?” tab_id=”1561382591605-a59a34b3-da74″][vc_column_text]Explanation: Rather than making changes to every object, it’s better to set the policy for the whole bucket. IAM is used to give more granular permissions; since this is a public website, all objects need to be readable by everyone, so a bucket policy is the simpler choice.[/vc_column_text][/vc_tta_section][vc_tta_section title=”A customer wants to leverage Amazon Simple Storage Service (S3) and Amazon Glacier as part of their backup and archive infrastructure. The customer plans to use third-party software to support this integration. Which approach will limit the access of the third party software to only the Amazon S3 bucket named “company-backup”?” tab_id=”1561382591221-5a07bad8-cf1f″][vc_column_text]Explanation: Taking a cue from the previous question, this use case involves more granular permissions, hence IAM would be used here.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Can S3 be used with EC2 instances, if yes, how?” tab_id=”1561382590677-cc47cbcf-ee26″][vc_column_text]Yes, it can be used for instances with root devices backed by local instance storage. By using Amazon S3, developers have access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. In order to execute systems in the Amazon EC2 environment, developers use the tools provided to load their Amazon Machine Images (AMIs) into Amazon S3 and to move them between Amazon S3 and Amazon EC2.[/vc_column_text][/vc_tta_section][vc_tta_section title=”A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office. An event takes the link between the main and branch office offline. Which methods will enable the branch office to access their data?” tab_id=”1561382590095-0dd9d5a4-e5bf″][vc_column_text]Explanation: The fastest way would be to launch a new storage gateway instance. Why? Since time is the key factor for every business, troubleshooting the broken link would take longer; instead, we can simply restore the previous working state of the storage gateway on a new instance.[/vc_column_text][/vc_tta_section][vc_tta_section title=”When you need to move data over long distances using the internet, for instance across countries or continents to your Amazon S3 bucket, which method or service will you use?” tab_id=”1561382589718-2e1a6ceb-a239″][vc_column_text]Explanation: You would not use Snowball because, for now, the Snowball service does not support cross-region data transfer, and since we are transferring across countries, Snowball cannot be used. Transfer Acceleration is the right choice here, as it speeds up your data transfer by routing it over optimized network paths and Amazon’s content delivery network, achieving up to 300% of normal data transfer speed.[/vc_column_text][/vc_tta_section][vc_tta_section title=”How can you speed up data transfer in Snowball?” tab_id=”1561382589333-47659694-5e1f″][vc_column_text]The data transfer can be increased in the following ways:

  • By performing multiple copy operations at one time i.e. if the workstation is powerful enough, you can initiate multiple cp commands each from different terminals, on the same Snowball device.
  • Copying from multiple workstations to the same snowball.
  • Transferring large files or creating a batch of small files, which reduces the encryption overhead (a small batching sketch follows this list).
  • Eliminating unnecessary hops i.e. make a setup where the source machine(s) and the snowball are the only machines active on the switch being used, this can hugely improve performance.
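As a small illustration of the batching tip above, a Python sketch (the paths are hypothetical) that bundles many small files into a single tar archive before copying it to the Snowball, so the per-file encryption overhead is paid once per batch:

```python
import tarfile
from pathlib import Path

SOURCE_DIR = Path("/data/small-files")           # hypothetical source directory
BATCH_ARCHIVE = Path("/staging/batch-0001.tar")  # hypothetical staging location

with tarfile.open(BATCH_ARCHIVE, "w") as archive:
    for file_path in SOURCE_DIR.rglob("*"):
        if file_path.is_file():
            archive.add(file_path, arcname=str(file_path.relative_to(SOURCE_DIR)))

# The single archive is then copied to the Snowball instead of thousands
# of individual small files.
```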

[/vc_column_text][/vc_tta_section][vc_tta_section title=”f you want to launch Amazon Elastic Compute Cloud (EC2) instances and assign each instance a predetermined private IP address you should:” tab_id=”1561382588926-dd3c13fa-09be”][vc_column_text]Explanation: The best way of connecting to your cloud resources (for ex- ec2 instances) from your own data center (for eg- private cloud) is a VPC. Once you connect your datacenter to the VPC in which your instances are present, each instance is assigned a private IP address which can be accessed from your datacenter. Hence, you can access your public cloud resources, as if they were on your own network.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Can I connect my corporate datacenter to the Amazon Cloud?” tab_id=”1561382588517-7fc83786-ee5a”][vc_column_text]Yes, you can do this by establishing a VPN(Virtual Private Network) connection between your company’s network and your VPC (Virtual Private Cloud), this will allow you to interact with your EC2 instances as if they were within your existing network.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Is it possible to change the private IP addresses of an EC2 while it is running/stopped in a VPC?” tab_id=”1561382588060-9c018b10-4231″][vc_column_text]Primary private IP address is attached with the instance throughout its lifetime and cannot be changed, however secondary private addresses can be unassigned, assigned or moved between interfaces or instances at any point.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Why do you make subnets?” tab_id=”1561382587557-eee3186e-5475″][vc_column_text]Explanation: If there is a network which has a large no. of hosts, managing all these hosts can be a tedious job. Therefore we divide this network into subnets (sub-networks) so that managing these hosts becomes simpler.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Which of the following is true?” tab_id=”1561382583069-d03ea374-27b7″][vc_column_text]Explanation: Route Tables are used to route network packets, therefore in a subnet having multiple route tables will lead to confusion as to where the packet has to go. Therefore, there is only one route table in a subnet, and since a route table can have any no. of records or information, hence attaching multiple subnets to a route table is possible.[/vc_column_text][/vc_tta_section][/vc_tta_accordion][/vc_column][vc_column width=”1/2″][vc_tta_accordion color=”peacoc” active_section=”1″][vc_tta_section title=”What does the following command do with respect to the Amazon EC2 security groups?” tab_id=”1561382561432-7f73ef2a-cc67″][vc_column_text]Explanation: A Security group is just like a firewall, it controls the traffic in and out of your instance. In AWS terms, the inbound and outbound traffic. The command mentioned is pretty straight forward, it says create security group, and does the same. Moving along, once your security group is created, you can add different rules in it. For example, you have an RDS instance, to access it, you have to add the public IP address of the machine from which you want access the instance  in its security group.[/vc_column_text][/vc_tta_section][vc_tta_section title=”In CloudFront what happens when content is NOT present at an Edge location and a request is made to it?” tab_id=”1561382561455-654071d3-eb53″][vc_column_text]Explanation: CloudFront is a content delivery system, which caches data to the nearest edge location from the user, to reduce latency. 
If the content is not present at an edge location, the first request is fetched from the origin server and cached at the edge, so subsequent requests are served directly from the edge cache.[/vc_column_text][/vc_tta_section][vc_tta_section title=”If I’m using Amazon CloudFront, can I use Direct Connect to transfer objects from my own data center?” tab_id=”1561382613753-7c9c9136-4ca1″][vc_column_text]Yes. Amazon CloudFront supports custom origins, including origins from outside of AWS. With AWS Direct Connect, you will be charged at the respective data transfer rates.[/vc_column_text][/vc_tta_section][vc_tta_section title=” If my AWS Direct Connect fails, will I lose my connectivity?” tab_id=”1561382614729-6b63842b-62b1″][vc_column_text]If a backup AWS Direct Connect link has been configured, traffic will switch over to the second link in the event of a failure. It is recommended to enable Bidirectional Forwarding Detection (BFD) when configuring your connections to ensure faster detection and failover. If you have configured a backup IPsec VPN connection instead, all VPC traffic will fail over to the backup VPN connection automatically, while traffic to/from public resources such as Amazon S3 will be routed over the Internet. If you have neither a backup AWS Direct Connect link nor an IPsec VPN link, Amazon VPC traffic will be dropped in the event of a failure.[/vc_column_text][/vc_tta_section][vc_tta_section title=” If I launch a standby RDS instance, will it be in the same Availability Zone as my primary?” tab_id=”1561382615672-42dd66a8-6425″][vc_column_text]Explanation: No. The purpose of a standby instance is to survive an infrastructure failure, so the standby is kept in a different Availability Zone, which is a physically separate, independent infrastructure.[/vc_column_text][/vc_tta_section][vc_tta_section title=”When would I prefer Provisioned IOPS over Standard RDS storage?” tab_id=”1561382616984-e392adb7-34cd″][vc_column_text]Explanation: Provisioned IOPS delivers high IO rates, but it is also expensive. Batch-processing workloads do not require manual intervention and can fully utilize the system, so Provisioned IOPS is preferred for batch-oriented workloads.[/vc_column_text][/vc_tta_section][vc_tta_section title=”How is Amazon RDS, DynamoDB and Redshift different?” tab_id=”1561382618152-4fb6fc3a-9883″][vc_column_text]

  • Amazon RDS is a managed database service for relational databases; it handles patching, upgrading, backing up of data, etc. for you without your intervention. RDS manages structured data only.
  • DynamoDB, on the other hand, is a NoSQL database service; NoSQL deals with unstructured data.
  • Redshift is an entirely different service: it is a data warehouse product used for data analysis.
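To make the RDS/DynamoDB contrast concrete, a small boto3 sketch (the table name and region are hypothetical) showing that DynamoDB items in the same table need not share a schema, unlike rows in an RDS table:

```python
import boto3

table = boto3.resource("dynamodb", region_name="us-east-1").Table("user-profiles")

# Two items with different attribute sets can live in the same table.
table.put_item(Item={"user_id": "u1", "name": "Alice", "age": 30})
table.put_item(Item={"user_id": "u2", "name": "Bob", "preferences": {"theme": "dark"}})
```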

[/vc_column_text][/vc_tta_section][vc_tta_section title=”If I am running my DB Instance as a Multi-AZ deployment, can I use the standby DB Instance for read or write operations along with primary DB instance?” tab_id=”1561382619930-3767e1f0-f3d6″][vc_column_text]Explanation: No, Standby DB instance cannot be used with primary DB instance in parallel, as the former is solely used for standby purposes, it cannot be used unless the primary instance goes down.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Your company’s branch offices are all over the world, they use a software with a multi-regional deployment on AWS, they use MySQL 5.6 for data persistence.” tab_id=”1561382620762-261798c4-b2da”][vc_column_text]The task is to run an hourly batch process and read data from every region to compute cross-regional reports which will be distributed to all the branches. This should be done in the shortest time possible. How will you build the DB architecture in order to meet the requirements

Explanation: For this we will take an RDS instance as a master, because it will manage our database for us and since we have to read from every region, we’ll put a read replica of this instance in every region where the data has to be read from. Option C is not correct since putting a read replica would be more efficient than putting a snapshot, a read replica can be promoted if needed  to an independent DB instance, but with a Db snapshot it becomes mandatory to launch a separate DB Instance.[/vc_column_text][/vc_tta_section][vc_tta_section title=” Can I run more than one DB instance for Amazon RDS for free?” tab_id=”1561382621738-44e10b8a-ca7d”][vc_column_text]Yes. You can run more than one Single-AZ Micro database instance, that too for free! However, any use exceeding 750 instance hours, across all Amazon RDS Single-AZ Micro DB instances, across all eligible database engines and regions, will be billed at standard Amazon RDS prices. For example: if you run two Single-AZ Micro DB instances for 400 hours each in a single month, you will accumulate 800 instance hours of usage, of which 750 hours will be free. You will be billed for the remaining 50 hours at the standard Amazon RDS price.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Which AWS services will you use to collect and process e-commerce data for near real-time analysis?” tab_id=”1561382622978-99f18b2d-fe4d”][vc_column_text]Explanation: DynamoDB is a fully managed NoSQL database service. DynamoDB, therefore can be fed any type of unstructured data, which can be data from e-commerce websites as well, and later, an analysis can be done on them using Amazon Redshift. We are not using Elastic MapReduce, since a near real time analyses is needed.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Can I retrieve only a specific element of the data, if I have a nested JSON data in DynamoDB?” tab_id=”1561382623945-e109c4d3-ece0″][vc_column_text]Yes. When using the GetItem, BatchGetItem, Query or Scan APIs, you can define a Projection Expression to determine which attributes should be retrieved from the table. Those attributes can include scalars, sets, or elements of a JSON document.[/vc_column_text][/vc_tta_section][vc_tta_section title=”A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company’s requirements?” tab_id=”1561382625642-843354b1-1f56″][vc_column_text]Explanation: DynamoDB has the ability to scale more than RDS or any other relational database service, therefore DynamoDB would be the apt choice.[/vc_column_text][/vc_tta_section][vc_tta_section title=”What happens to my backups and DB Snapshots if I delete my DB Instance?” tab_id=”1561382626563-0fa3f678-a5bf”][vc_column_text]When you delete a DB instance, you have an option of creating a final DB snapshot, if you do that you can restore your database from that snapshot. RDS retains this user-created DB snapshot along with all other manually created DB snapshots after the instance is deleted, also automated backups are deleted and only manually created DB Snapshots are retained.[/vc_column_text][/vc_tta_section][vc_tta_section title=” Which of the following use cases are suitable for Amazon DynamoDB? 
” tab_id=”1561382627658-bb4ddb6b-277e”][vc_column_text]Explanation: If all your JSON data have the same fields eg [id,name,age] then it would be better to store it in a relational database, the metadata on the other hand is unstructured, also running relational joins or complex updates would work on DynamoDB as well.[/vc_column_text][/vc_tta_section][vc_tta_section title=”How can I load my data to Amazon Redshift from different data sources like Amazon RDS, Amazon DynamoDB and Amazon EC2?” tab_id=”1561382628592-dd61fe2e-19d3″][vc_column_text]You can load the data in the following two ways:

  • You can use the COPY command to load data in parallel directly to Amazon Redshift from Amazon EMR, Amazon DynamoDB, or any SSH-enabled host.
  • AWS Data Pipeline provides a high performance, reliable, fault tolerant solution to load data from a variety of AWS data sources. You can use AWS Data Pipeline to specify the data source, desired data transformations, and then execute a pre-written import script to load your data into Amazon Redshift.
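A hedged sketch of the first option, issuing a COPY from DynamoDB through psycopg2 (any SQL client works; the cluster endpoint, credentials, table name and IAM role ARN are placeholders):

```python
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    port=5439,
    dbname="analytics",
    user="admin",
    password="replace-me",
)
with conn, conn.cursor() as cur:
    # COPY loads the DynamoDB table into Redshift in parallel; READRATIO
    # caps how much of the table's read capacity the load may consume.
    cur.execute("""
        COPY events
        FROM 'dynamodb://events-table'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        READRATIO 50;
    """)
```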

[/vc_column_text][/vc_tta_section][vc_tta_section title=”Your application has to retrieve data from your user’s mobile every 5 minutes and the data is stored in DynamoDB, later every day at a particular time the data is extracted into S3 on a per user basis and then your application is later used to visualize the data to the user. You are asked to optimize the architecture of the backend system to lower cost, what would you recommend?” tab_id=”1561382630393-e350687e-823e″][vc_column_text]Explanation: Since the work requires the data to be extracted and analyzed, one would normally use provisioned IO to optimize this process, but since it is expensive, using ElastiCache instead to cache the results in memory can reduce the provisioned read throughput and hence reduce cost without affecting performance.[/vc_column_text][/vc_tta_section][vc_tta_section title=” You are running a website on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements?” tab_id=”1561382632977-3b296953-06eb″][vc_column_text]Explanation: Since the site does a lot of reads and writes, provisioned IO may become expensive. But we need high performance as well, so frequently read data can be cached using ElastiCache. As for RDS, since read contention is happening, the instance size should be increased and provisioned IO should be introduced to improve performance.[/vc_column_text][/vc_tta_section][vc_tta_section title=”A startup is running a pilot deployment of around 100 sensors to measure street noise and air quality in urban areas for 3 months. It was noted that every month around 4GB of sensor data is generated. The company uses a load balanced auto scaled layer of EC2 instances and a RDS database with 500 GB standard storage. The pilot was a success and now they want to deploy at least 100K sensors which need to be supported by the backend. You need to store the data for at least 2 years to analyze it. Which setup of the following would you prefer?” tab_id=”1561382634764-ad5070ba-db90″][vc_column_text]Explanation: A Redshift cluster would be preferred because it is easy to scale and the work is done in parallel across the nodes, which makes it perfect for a bigger workload like this use case. Since around 4 GB of data is generated each month by 100 sensors, over 2 years that is roughly 96 GB. And since the deployment grows to 100K sensors (a 1000x increase), 96 GB becomes approximately 96 TB. Hence option C is the right answer.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Suppose you have an application where you have to render images and also do some general computing. 
From the following services which service will best fit your need?” tab_id=”1561382636355-5dfe038c-83df”][vc_column_text]Explanation: You will choose an application load balancer, since it supports path based routing, which means it can take decisions based on the URL, therefore if your task needs image rendering it will route it to a different instance, and for general computing it will route it to a different instance.[/vc_column_text][/vc_tta_section][vc_tta_section title=” What is the difference between Scalability and Elasticity?” tab_id=”1561382638762-3bfa136c-84a1″][vc_column_text]Scalability is the ability of a system to increase its hardware resources to handle the increase in demand. It can be done by increasing the hardware specifications or increasing the processing nodes.

Elasticity is the ability of a system to handle an increase in workload by adding hardware resources when demand increases (same as scaling), but also to roll back the scaled resources when they are no longer needed. This is particularly helpful in cloud environments, where a pay-per-use model is followed.[/vc_column_text][/vc_tta_section][vc_tta_section title=”How will you change the instance type for instances which are running in your application tier and are using Auto Scaling. Where will you change it from the following areas?” tab_id=”1561382649403-b75ce50f-0f5c″][vc_column_text]Explanation: The Auto Scaling tags configuration is used to attach metadata to your instances; to change the instance type you have to use the Auto Scaling launch configuration.[/vc_column_text][/vc_tta_section][vc_tta_section title=”You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which option will reduce load on the Amazon EC2 instance?” tab_id=”1561382651188-733a0ad7-4f7c″][vc_column_text]Explanation: Creating an Auto Scaling group alone will not solve the issue until you attach a load balancer to it. Once you attach a load balancer to an Auto Scaling group, it will efficiently distribute the load among all the instances. Option B – CloudFront is a CDN; it is a content delivery tool and therefore will not reduce load on the EC2 instance. Similarly, the other option – a launch configuration – is only a template for instance configuration and has no connection with reducing load.[/vc_column_text][/vc_tta_section][vc_tta_section title=”When should I use a Classic Load Balancer and when should I use an Application load balancer?” tab_id=”1561382674843-1f0b68aa-358c″][vc_column_text]A Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances, while an Application Load Balancer is ideal for microservices or container-based architectures where there is a need to route traffic to multiple services or load balance across multiple ports on the same EC2 instance.[/vc_column_text][/vc_tta_section][/vc_tta_accordion][/vc_column][/vc_row]

