Bulk Reading in Cassandra
In this article, we will look at how to read data in bulk in Cassandra, which can also improve read performance. Before reading this article, it helps to understand the basics of Cassandra's architecture.
As an exercise, we will create a small data schema to test bulk reading in Cassandra. Let's have a look.
First, we are going to create a table.
Table schema to be created:
Keyspace name - cluster1
Table name - user_data_app

Column Name | Data Type
----------- | ---------
id          | uuid
name        | text
status      | text
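Since the table will live in the cluster1 keyspace, create and switch to that keyspace first. This is a minimal sketch; the SimpleStrategy replication settings below are an assumption suitable only for a single-node test cluster:

CREATE KEYSPACE IF NOT EXISTS cluster1
WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

USE cluster1;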
Now, let's write the CQL query to create the table schema given above.
CREATE TABLE user_data_app (
    id uuid PRIMARY KEY,
    name text,
    status text
);
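To confirm the table was created as expected, you can describe it from cqlsh:

DESCRIBE TABLE user_data_app;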
Now, let's insert data into the table. The CQL queries below insert five rows.
INSERT INTO user_data_app (id, name, status)
VALUES (uuid(), 'Ashish', 'ok');

INSERT INTO user_data_app (id, name, status)
VALUES (uuid(), 'amit', 'in processing');

INSERT INTO user_data_app (id, name, status)
VALUES (uuid(), 'Bhagyesh', 'ok');

INSERT INTO user_data_app (id, name, status)
VALUES (uuid(), 'Alice', 'in processing');

INSERT INTO user_data_app (id, name, status)
VALUES (uuid(), 'Bob', 'ok');
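Before reading the rows back, a quick count is a cheap sanity check that all five inserts landed. This is fine on a small test table, but note that an unbounded COUNT(*) scans the entire ring and should be avoided on large tables:

SELECT COUNT(*) FROM user_data_app;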
Now, let's verify that the data was added successfully. The following CQL query reads the rows back:
SELECT * FROM user_data_app;
Output: all five inserted rows are returned.
Next, let's find the token value of the partition key for each row. These token values give us the boundaries to compare against when we perform the bulk read.
SELECT token(id)
FROM user_data_app;
Output: one token value per row. Since the ids were generated with uuid(), the exact token values will differ on every run.
Now, let's look at the CQL query we will use for bulk reading. It selects every row whose token falls within a given range; the boundary values below are taken from the token output above.
SELECT token(id), id, name, status
FROM user_data_app
WHERE token(id) > -4888959478479554900
  AND token(id) <= 1914029651463748596;
Output: only the rows whose token(id) falls within the requested range are returned.
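Because uuid() keys are spread across the whole ring by the Murmur3 partitioner, a complete bulk read is usually done by splitting the full token range (-9223372036854775808 to 9223372036854775807) into slices and issuing one range query per slice. Here is a sketch using three roughly equal slices; these boundary values are illustrative assumptions, not values taken from this table:

SELECT token(id), id, name, status
FROM user_data_app
WHERE token(id) >= -9223372036854775808
  AND token(id) <= -3074457345618258603;

SELECT token(id), id, name, status
FROM user_data_app
WHERE token(id) > -3074457345618258603
  AND token(id) <= 3074457345618258602;

SELECT token(id), id, name, status
FROM user_data_app
WHERE token(id) > 3074457345618258602
  AND token(id) <= 9223372036854775807;

Each slice can be fetched independently, even in parallel from different clients, which is what makes token-range queries an effective bulk-reading pattern.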