Introduction to AWS Batch
Amazon Web Services is a subsidiary of Amazon.com that provides on-demand cloud computing platforms to individuals, companies, and governments on a pay-as-you-go basis.
What is Cloud Computing?
Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the internet with pay-as-you-go pricing.
Amazon Web Services provides a rich ecosystem of cloud-based services, and AWS Batch is one of them. In this post, we will discuss what AWS Batch is and its components.
As the name suggests, AWS Batch allows users to run their workloads on the Amazon Web Services cloud in batches. Developers across the globe use batch computing to get their jobs done, because it provides efficient access to a large amount of computing capacity. Cloud computing services are known for their ease of use and efficiency, and AWS Batch is no different: it lets users enjoy the service without worrying about configuring and managing the necessary infrastructure. When new jobs are submitted, AWS Batch quickly allocates the required resources, and it does so in a manner that minimizes the overall compute cost without affecting the delivery time of the results.
On the AWS Batch platform, users can run batch computing workloads of any size. AWS Batch is flexible in its operation: it automatically allocates the required resources based on the scale of the job, and it does so in an optimized way. Users do not need to install batch computing software or tools on their own systems, which leaves them free to focus on solving the problem at hand and analyzing the results.
Components of AWS Batch –
The four important components of AWS Batch are:
- Jobs
- Job Definitions
- Job Queues
- Compute Environments
Let’s discuss each of these components in detail.
Jobs : A job is a unit of work submitted to AWS Batch, such as a Linux executable or a shell script. A job has a name, and it runs in your compute environment as a containerized application on an Amazon EC2 instance. An AWS Batch job can also refer to other running jobs, identifying them by name or ID, and one job can depend on the successful completion of another job.
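As a sketch of what a job submission looks like, the dictionary below holds the parameters that boto3's `batch.submit_job` call accepts; the queue, definition, and job ID values are placeholders, not real resources.

```python
# Hypothetical job submission parameters for AWS Batch (boto3-style).
# "my-queue", "render-frames:1", and the job ID below are placeholders.
job_request = {
    "jobName": "process-data",            # name used to identify this job
    "jobQueue": "my-queue",               # queue the job is submitted to
    "jobDefinition": "render-frames:1",   # job definition name and revision
    # A job can depend on the successful completion of another job,
    # referenced here by that job's ID.
    "dependsOn": [{"jobId": "11111111-2222-3333-4444-555555555555"}],
}

# With AWS credentials configured, this would be submitted as:
# import boto3
# boto3.client("batch").submit_job(**job_request)
```

The `dependsOn` list is how one job is made to wait for another, as described above.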
Job Definitions : A job definition contains the specifics of a given job and indicates how it is to be run. Every job requires resources to finish execution, and the job definition is responsible for describing those resources: the user can specify the memory and processing requirements of the job. The job definition also handles other aspects of an AWS Batch job, such as container properties, environment variables, and mount points for persistent storage.
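A minimal sketch of a job definition, shaped like the parameters boto3's `batch.register_job_definition` accepts: it pins down memory and vCPU requirements, environment variables, and a mount point for persistent storage. All names and paths here are illustrative placeholders.

```python
# Hypothetical job definition for AWS Batch (boto3-style parameters).
# Image, command, volume names, and paths are placeholders.
job_definition = {
    "jobDefinitionName": "render-frames",
    "type": "container",
    "containerProperties": {
        "image": "amazonlinux",
        "vcpus": 2,                       # processing requirement
        "memory": 2048,                   # memory requirement, in MiB
        "command": ["echo", "hello"],
        "environment": [{"name": "STAGE", "value": "test"}],
        # A mount point maps a named volume into the container,
        # giving the job persistent storage.
        "mountPoints": [
            {"sourceVolume": "scratch", "containerPath": "/scratch"},
        ],
        "volumes": [
            {"name": "scratch", "host": {"sourcePath": "/tmp/scratch"}},
        ],
    },
}

# With credentials configured:
# boto3.client("batch").register_job_definition(**job_definition)
```

Note that each mount point's `sourceVolume` must match the `name` of a declared volume.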
Job Queues : Job Queues, the name says it all. If you are familiar with job handling in an operating system, you will find this quite easy to understand. Just like an operating-system job, an AWS Batch job is submitted to a particular queue before it can be executed; a scheduler then places the job onto a compute environment for execution. When submitting AWS Batch jobs, you can assign priorities to them, and priorities can be assigned to job queues as well. A time-sensitive job can be sent to a high-priority queue, while a job that can run whenever resources are cheaper can be sent to a low-priority queue.
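The high-priority/low-priority split above can be sketched as two queue configurations, shaped like the parameters of boto3's `batch.create_job_queue`; the queue and compute environment names are placeholders.

```python
# Hypothetical queue configurations for AWS Batch (boto3-style).
# Queue and compute environment names are placeholders.
high_priority_queue = {
    "jobQueueName": "time-sensitive",
    "priority": 100,            # higher number = scheduled first
    "state": "ENABLED",
    "computeEnvironmentOrder": [
        {"order": 1, "computeEnvironment": "on-demand-env"},
    ],
}

low_priority_queue = {
    "jobQueueName": "flexible",
    "priority": 1,              # runs whenever capacity is cheap/idle
    "state": "ENABLED",
    "computeEnvironmentOrder": [
        {"order": 1, "computeEnvironment": "spot-env"},
    ],
}

# With credentials configured:
# client = boto3.client("batch")
# client.create_job_queue(**high_priority_queue)
# client.create_job_queue(**low_priority_queue)
```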
Compute Environment: This consists of the managed or unmanaged compute resources used to run AWS Batch jobs. As the name suggests, a managed compute environment lets the user specify the desired instance types while AWS Batch handles the provisioning. AWS also gives users the flexibility to manage their own compute environment, in which case the user is responsible for managing the instances in the Amazon ECS cluster.
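A sketch of a managed compute environment, shaped like the parameters of boto3's `batch.create_compute_environment`: the user states the instance types and vCPU bounds, and AWS Batch scales within them. Subnet, security group, and role names are placeholders.

```python
# Hypothetical managed compute environment for AWS Batch (boto3-style).
# Subnets, security groups, and IAM role names are placeholders.
compute_environment = {
    "computeEnvironmentName": "on-demand-env",
    "type": "MANAGED",                   # "UNMANAGED" means you run the ECS cluster
    "computeResources": {
        "type": "EC2",
        "minvCpus": 0,                   # scale down to zero when idle
        "maxvCpus": 64,
        "instanceTypes": ["optimal"],    # let AWS Batch choose instance types
        "subnets": ["subnet-00000000"],
        "securityGroupIds": ["sg-00000000"],
        "instanceRole": "ecsInstanceRole",
    },
    "serviceRole": "AWSBatchServiceRole",
}

# With credentials configured:
# boto3.client("batch").create_compute_environment(**compute_environment)
```

Setting `type` to `UNMANAGED` instead corresponds to the do-it-yourself case described above, where the user manages the instances in the ECS cluster.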