
Creating AWS DynamoDB Table Using Terraform

I’m going to show how to use Terraform to create a DynamoDB table in AWS. Terraform lets you define infrastructure, like databases, as code, which makes it easy to version control and share with others. In this article, I’ll walk through setting up a Terraform configuration file and defining a DynamoDB table in it, then apply the plan to create the real table in AWS. The end result is a DynamoDB table defined in a reusable, shareable Terraform config, and a hands-on example of managing infrastructure as code.

What Is Terraform?

Terraform is an infrastructure management tool. It creates cloud resources automatically from resource definitions written in configuration files. In the past, I would log into my cloud provider’s console and click around to set up things like servers or databases. Doing it manually like that is tedious and error-prone. Terraform overcomes this by letting you declare the resources you need in files and automating the setup for you.



Features Of Terraform

The following are the key features of Terraform:



Advantages of Terraform

Instead of logging into web dashboards and clicking around to set up servers, databases, and more for their projects, a team using Terraform just writes text files that describe what they need to deploy. Terraform then reads those files and automatically creates everything for them. The following are some of the biggest benefits of using Terraform:

What Is DynamoDB?

DynamoDB is a fully managed NoSQL database from Amazon Web Services (AWS). As a non-relational database, it allows for high scalability and performance without the complexity of running your own large database. DynamoDB supports document and key-value data models, providing flexibility in data storage and retrieval.

A major benefit of DynamoDB is automatic scaling capabilities. Based on configured throughput, DynamoDB will scale underlying resources up or down to meet demand. This removes overhead of managing infrastructure and ensures applications have needed throughput.
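That scaling behavior can itself be managed as code. As a hedged sketch of attaching AWS Application Auto Scaling to a provisioned-mode table (the table name `DynamoDB-Terraform`, the capacity limits, and the 70% target here are illustrative assumptions, not fixed values):

```hcl
# Register the table's read capacity as a scalable target
resource "aws_appautoscaling_target" "table_read" {
  service_namespace  = "dynamodb"
  resource_id        = "table/DynamoDB-Terraform"   # "table/<table name>"
  scalable_dimension = "dynamodb:table:ReadCapacityUnits"
  min_capacity       = 5
  max_capacity       = 100
}

# Track a target utilization so DynamoDB scales reads up and down with demand
resource "aws_appautoscaling_policy" "table_read" {
  name               = "dynamodb-read-autoscale"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.table_read.service_namespace
  resource_id        = aws_appautoscaling_target.table_read.resource_id
  scalable_dimension = aws_appautoscaling_target.table_read.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 70  # keep consumed capacity near 70% of provisioned
    predefined_metric_specification {
      predefined_metric_type = "DynamoDBReadCapacityUtilization"
    }
  }
}
```

A matching pair of resources with `WriteCapacityUnits` would cover writes; tables using on-demand (`PAY_PER_REQUEST`) billing need none of this.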

DynamoDB also supports in-memory caching and expiration of items to improve performance. By caching frequently accessed data in memory, read times are faster. Expiring outdated items reduces storage usage and cost. Overall, DynamoDB’s managed and scalable nature make it a great choice for modern applications needing high-performance, low-latency database services.

Benefits Of Using Terraform For DynamoDB Table Creation

Using Terraform to provision and manage DynamoDB tables has clear advantages. Terraform lets you define cloud resources once and deploy them repeatedly and consistently.

For DynamoDB, Terraform provides a straightforward way to define tables, indexes, streams, and settings in code. This approach has benefits like:

In summary, combining DynamoDB’s managed scaling and Terraform’s codified management enables simpler, more scalable NoSQL database usage in the cloud. This can speed up development workflows.

Creating AWS DynamoDB Table Using Terraform

Step 1: Installation For Terraform On System

To install Terraform on Linux or Windows, refer to the linked installation article. Once installed, verify it from your terminal:

terraform --version

Step 2: Configuring AWS credentials On Your Command Line

Set your credentials as environment variables:

export AWS_ACCESS_KEY_ID="<your access key>"
export AWS_SECRET_ACCESS_KEY="<your secret key>"

Alternatively, store them in the shared credentials file (~/.aws/credentials):

[default]
aws_access_key_id = <your access key>
aws_secret_access_key = <your secret key>

Step 3: Create A Terraform Workspace (Repository)

mkdir dynamodb-terraform
cd dynamodb-terraform

Step 4: Define Amazon DynamoDB Resource In Terraform
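At its simplest, the table definition for this step can be sketched as a minimal `aws_dynamodb_table` resource saved in a `.tf` file (for example `main.tf`) inside the workspace from Step 3. This is a hedged sketch with illustrative names; a fuller example follows below:

```hcl
# main.tf -- minimal DynamoDB table definition (names are illustrative)
resource "aws_dynamodb_table" "example" {
  name         = "ExampleTable"     # table name as it will appear in AWS
  billing_mode = "PAY_PER_REQUEST"  # on-demand capacity, no units to manage
  hash_key     = "UserId"           # partition key

  # Every key attribute must also be declared in an attribute block
  attribute {
    name = "UserId"
    type = "S"  # S = string, N = number, B = binary
  }
}
```

Terraform picks up every `.tf` file in the working directory, so the provider configuration from the next step can live in the same file or a separate one.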

Step 5: Providing AWS Provider Credentials In Terraform

Add the AWS provider configuration – Replace “YOUR_AWS_REGION” and “YOUR_AWS_PROFILE”:

provider "aws" {
  region = "YOUR_AWS_REGION"
  profile = "YOUR_AWS_PROFILE"
}

Defining the DynamoDB table resource is where we get to translate our desired table specifications into actual Terraform configuration that will provision this table on AWS. I want to dive deeper on this key step. Within the resource block, every line and setting customizes exactly how the DynamoDB table will be configured on creation.

For Example

Overall, every single block and argument shapes the end state of our DynamoDB table. This gives us total control to customize our infrastructure to meet our exact application requirements. The ability for infrastructure-as-code to capture these details is what makes Terraform so powerful for cloud provisioning.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # Pin the provider here; the in-block "version" argument is deprecated
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_dynamodb_table" "basic-dynamodb-table" {
  name           = "DynamoDB-Terraform"
  billing_mode   = "PROVISIONED"
  read_capacity  = 20
  write_capacity = 20
  hash_key       = "UserId"
  range_key      = "Name"

  attribute {
    name = "UserId"
    type = "S"
  }

  attribute {
    name = "Name"
    type = "S"
  }

  ttl {
    attribute_name = "TimeToExist"
    enabled        = false
  }

  global_secondary_index {
    name            = "UserTitleIndex"
    hash_key        = "UserId"
    range_key       = "Name"
    write_capacity  = 10
    read_capacity   = 10
    projection_type = "KEYS_ONLY"  # only key attributes are projected into the index
    # non_key_attributes is only valid with projection_type = "INCLUDE"
  }

  tags = {
    Name        = "dynamodb-table"
    Environment = "Training"
  }
}

Step 6: Initialize The Terraform Workspace ( terraform init )

terraform init

Step 7: Terraform Plan

terraform plan

Step 8: Terraform Apply

Some Key Points About “apply”

Overall, apply puts Terraform’s plan into action, actually creating the AWS resources we defined. It realizes our infrastructure-as-code vision safely, repeatably, and transparently, thanks to the built-in plan preview and visibility at every step.

terraform apply

Step 9: Deleting Terraform Workspace

The destroy command essentially reverses the apply process by telling AWS to delete all the resources it previously created. Thanks to this destroy capability paired with automated applies, Terraform enables very agile, low-risk infrastructure life cycles. We can build exactly what we need when we need it, then take it all down just as easily.

terraform destroy

Managing Tables And Items In Database

1. Putting And Getting Items

resource "aws_dynamodb_table_item" "item" {
  table_name = aws_dynamodb_table.mytable.name
  hash_key   = aws_dynamodb_table.mytable.hash_key
  range_key  = aws_dynamodb_table.mytable.range_key

  # item must be a string of DynamoDB JSON, with a type
  # descriptor ("S", "N", ...) wrapping every attribute value
  item = jsonencode({
    UserId = { S = "user123" }
    Name   = { S = "John Doe" }
  })
}

2. Destroying DynamoDB Tables

resource "aws_dynamodb_table" "temptable" {
  # ...table configuration...
}

Running terraform destroy removes every resource in the workspace. To delete just this one table while keeping the rest of the infrastructure, target it explicitly:

terraform destroy -target=aws_dynamodb_table.temptable

3. Advanced Workflows

While Terraform provides simple CRUD operations for DynamoDB items, more complex workflows can also be implemented by leveraging Terraform’s extensive functionality.

For example, batch writing items can help speed up inserts and reduce API calls by combining operations. The Terraform AWS provider does not expose a dedicated batch-write resource, but a similar effect can be achieved by creating many aws_dynamodb_table_item resources with for_each, or by calling DynamoDB’s BatchWriteItem API through the AWS CLI or an SDK.

Things like atomic transactions, indexing/query optimization, conditional chaining of operations, and integration with other services are possible. Terraform enables going far beyond basic create, read, update, delete to implement dynamic cloud-scale applications with DynamoDB.
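The multi-item pattern mentioned above can be sketched with for_each. This is a hedged sketch: the table reference `aws_dynamodb_table.mytable` and the seed data are illustrative assumptions, with the table assumed to be defined elsewhere:

```hcl
# Seed several items into an existing table using for_each
locals {
  seed_users = {
    "user123" = "John Doe"
    "user456" = "Jane Roe"
  }
}

resource "aws_dynamodb_table_item" "seed" {
  for_each   = local.seed_users
  table_name = aws_dynamodb_table.mytable.name
  hash_key   = aws_dynamodb_table.mytable.hash_key
  range_key  = aws_dynamodb_table.mytable.range_key

  # Each item is DynamoDB JSON with explicit type descriptors
  item = jsonencode({
    UserId = { S = each.key }
    Name   = { S = each.value }
  })
}
```

Note that each item is still written with its own API call under the hood; for true BatchWriteItem semantics, use the CLI or an SDK instead.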

Managing Indexes And Streams Of DynamoDB

Adding Indexes

Secondary indexes are declared with global_secondary_index (or local_secondary_index) blocks, and every indexed attribute must also appear in an attribute block:

resource "aws_dynamodb_table" "table" {
  name         = "GameScores"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "UserId"

  attribute {
    name = "UserId"
    type = "S"
  }

  attribute {
    name = "GameTitle"
    type = "S"
  }

  global_secondary_index {
    name            = "GameTitleIndex"
    hash_key        = "GameTitle"
    projection_type = "ALL"
  }
}

Enabling Streams

When I first started with DynamoDB, I just viewed it as a fast NoSQL data store for my applications. But enabling DynamoDB streams opened my eyes to the possibilities of integrating it with other services. Streams provide a time-ordered log of item changes in a table. When enabled on a table, any creates, updates or deletes get captured and sent to a stream.

I can tap into this stream for some powerful workflows, such as triggering Lambda functions on every change, replicating data to another region, or feeding analytics pipelines. The stream integration patterns are endless!

My favorite use case so far is syncing my critical production tables to a different region every hour using streams and Lambda. This gives me peace of mind that my data is durable. Streams turn DynamoDB into a hub that can integrate with many different AWS services. The stream view options let you tune what data to send downstream. I’m excited to enable streams on more tables and build out sophisticated architectures around them. The ease of use in Terraform is just icing on the cake!

DynamoDB streams can track data changes and send to other services. Turn on streams with:

resource "aws_dynamodb_table" "table" {
  # ...name, keys, and attribute blocks as before...
  stream_enabled   = true
  stream_view_type = "NEW_IMAGE"
}

This captures table changes and sends new item data.
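A common downstream consumer of such a stream is a Lambda function. As a hedged sketch (the table reference and the function `aws_lambda_function.processor` are illustrative and assumed to be defined elsewhere):

```hcl
# Wire the table's stream to a Lambda function, so every batch of
# item changes invokes the function with the stream records
resource "aws_lambda_event_source_mapping" "table_stream" {
  event_source_arn  = aws_dynamodb_table.table.stream_arn
  function_name     = aws_lambda_function.processor.arn
  starting_position = "LATEST"  # required for stream event sources
  batch_size        = 100       # max records delivered per invocation
}
```

The stream_view_type chosen on the table (NEW_IMAGE, OLD_IMAGE, NEW_AND_OLD_IMAGES, or KEYS_ONLY) determines what data each stream record carries into the function.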

Deploying And Testing Through Terraform

Deploying Infrastructure

One of the key advantages of Terraform is its ability to repeatedly deploy infrastructure in a consistent way. For DynamoDB, the workflow involves:

  1. Initializing Terraform to install providers and modules
  2. Running terraform plan to preview the changes to be made
  3. Applying the changes with terraform apply to create real resources

I like to run plans first to validate everything will work as expected. The apply command provisions AWS resources like DynamoDB tables based on the Terraform configs.

Terraform also makes clean up easy. Running terraform destroy will remove any resources no longer needed. Overall, Terraform deployment gives me confidence my infrastructure is correct.

Validating Functionality Of DynamoDB

Once DynamoDB tables are deployed, I test them thoroughly. Some validation techniques I use:

End-to-End Setup Example

Here is an example walkthrough of a complete setup:

  1. Define Terraform Configs: Define DynamoDB table resources in Terraform – name, schema, indexes, throughput, etc.
  2. Initialize And Plan Deployment: Initialize and plan deployment to verify no errors
  3. Apply Configurations: Apply to create real tables in the AWS account
  4. Test Data Operations: Use AWS CLI to insert and retrieve test data
  5. Query With Terraform Configs: Query using alternate indexes configured in Terraform
  6. Destroy Tables: Destroy tables when done to clean up

This workflow lets me provision fully-featured DynamoDB environments easily.

Conclusion

Well, I’d say this terraforming business has been a success! I laid down my plans like an ethereal gardener, detailing the fertile lands I aimed to cultivate and the bounties I wished to reap. Though my tools were lines of code, with Terraform I tilled the cloud, sowing seeds that would manifest most majestically. Now at harvest time, I behold the fruits of my labors – a DynamoDB constructed to my exact specifications. Sturdy tables now stand where once was empty AWS soil. Lean in close and you may hear the gentle hum of NoSQL databases catalyzing. Our work here is complete…for now at least. The infrastructure alchemy continues!

AWS DynamoDB With Terraform – FAQs

What Is DynamoDB?

DynamoDB is a fully managed NoSQL database service provided by Amazon Web Services (AWS). It provides fast performance and seamless scalability for modern applications.

How Does DynamoDB Work?

DynamoDB stores data in tables made of items similar to rows. Each item is composed of attributes like columns. Items are retrieved using primary keys. DynamoDB handles the underlying infrastructure automatically.

What Are The Data Models In DynamoDB?

The key data models are document and key-value. With document, each item is a collection of attributes. With key-value, only the primary key and value are stored.

How Is DynamoDB Different From Traditional Databases?

DynamoDB is a non-relational NoSQL database. It sacrifices some functionality, like joins, for extreme speed and scalability. DynamoDB tables can grow or shrink on demand.

How Is Throughput Handled In DynamoDB?

DynamoDB provides on-demand capacity or provisioned capacity. On-demand scales automatically while provisioned reserves read/write units. This simplifies throughput management.

How Do I Get Started With DynamoDB?

Getting started is easy using the AWS console. You can create tables, add data, run queries. Integrating via an SDK provides more control for developers.

What Are Some Best Practices With DynamoDB?

Some best practices are enabling encryption, using sort keys, distributing requests, avoiding “hot” keys, and minimizing scans. Table design is important for performance.

