Prompt Engineering for Inference
Last Updated :
27 Jun, 2023
You must have faced questions in an exam where you are supposed to answer based on a given text or passage. This process of drawing relevant information out of a large piece of text is known as inference. The ChatGPT model is efficient at such tasks as well; we just need to provide clear instructions to the model.
Import the openai package and assign your OpenAI API key
Python3
import os
import openai

# Provide your API key directly, or read it from the environment
openai.api_key = os.getenv("OPENAI_API_KEY", "<OpenAI API Key>")
Let's check how it works:
Python3
# Illustrative review text (the original sample was not preserved)
review = """I love this game and its colorful candies, but the \
developer recently changed the music and I wish they would bring \
the previous music option back."""

PROMPT = f"""What is the sentiment of the following product review,
which is delimited with triple backticks?
Review text: '''{review}'''
"""

MESSAGES = [{"role": "user", "content": PROMPT}]
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=MESSAGES,
    temperature=1
)
response
Output:
<OpenAIObject chat.completion id=chatcmpl-7T6rWVI2wYx5x0y3qnbd1kUXCjhFY at 0x7f03a10cbd10> JSON: {
"id": "chatcmpl-7T6rWVI2wYx5x0y3qnbd1kUXCjhFY",
"object": "chat.completion",
"created": 1687172246,
"model": "gpt-3.5-turbo-0301",
"usage": {
"prompt_tokens": 76,
"completion_tokens": 8,
"total_tokens": 84
},
"choices": [
{
"message": {
"role": "assistant",
"content": "The sentiment of the review is positive."
},
"finish_reason": "stop",
"index": 0
}
]
}
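The full response object carries metadata (token usage, model version, finish reason) alongside the reply itself; the actual answer lives at `choices[0].message["content"]`. As a sketch, the extraction can be demonstrated on a plain dict shaped like the response shown above, without making an API call:

```python
# A sample dict mimicking the ChatCompletion response shown above,
# so the extraction can be demonstrated offline.
sample_response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "The sentiment of the review is positive.",
            },
            "finish_reason": "stop",
            "index": 0,
        }
    ],
    "usage": {"prompt_tokens": 76, "completion_tokens": 8, "total_tokens": 84},
}

# With the real response object, the equivalent is:
#   response.choices[0].message["content"]
reply = sample_response["choices"][0]["message"]["content"]
print(reply)  # The sentiment of the review is positive.
```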
Prompt
PROMPT = f"""What is the sentiment of the following product review,
which is delimited with triple backticks?
Review text: '''{review}'''
"""
Now wrap the API call in a reusable helper function:
Python3
def get_completion(prompt, model="gpt-3.5-turbo", temperature=0):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature
    )
    return response.choices[0].message["content"]
Let's say we have the product review that we used earlier. The developers are not interested in the details; they just want to know how many reviews are positive and how many are negative.
Python3
prompt = f"""What is the sentiment of the following product review,
which is delimited with triple backticks?
Review text: '''{review}'''
"""
response = get_completion(prompt)
print(response)
Output:
The sentiment of the review is generally positive, with the reviewer expressing love for the game and its
colorful candies. However, there is also a negative aspect mentioned regarding the presence of horse and
cartoons with dark complexion. Additionally, the reviewer expresses a desire to have the previous music
option back, indicating some dissatisfaction with the recent changes made by the developer.
Here we can observe that the model has detected both the positive and the negative sentiments the user expressed about the product, which is indeed the case. But which one is more dominant here?
Python3
# Illustrative review text (the original sample was not preserved)
movie_review = """The developer has rolled out new features recently \
and I really don't like them; the app was much better before."""

prompt = f"""What is the sentiment of the following review,
which is delimited with triple backticks?
Give your answer as a single word, either "Positive" or "Negative".
Review text: '''{movie_review}'''
"""
response = get_completion(prompt)
print(response)
Output:
Negative
So, from the above output, the developers will realize that the user is not satisfied with the new features that have been rolled out.
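Because the prompt constrains the model to a single word, tallying sentiment across many reviews becomes trivial. A minimal sketch, using hypothetical single-word replies in place of real API calls:

```python
# Hypothetical single-word sentiments, standing in for the replies
# that prompts like the one above would return for a batch of reviews.
sentiments = ["Negative", "Positive", "Positive", "Negative", "Positive"]

# Normalize case before counting, since the model may vary capitalization
positive = sum(1 for s in sentiments if s.strip().lower() == "positive")
negative = sum(1 for s in sentiments if s.strip().lower() == "negative")
print(f"Positive: {positive}, Negative: {negative}")  # Positive: 3, Negative: 2
```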
Python3
prompt = f"""Identify a list of emotions that the writer of the
following review is expressing. Include no more than five items.
Format your answer as a list of lower-case words separated by commas.
Review text: '''{movie_review}'''
"""
response = get_completion(prompt)
print(response)
Output:
disappointment, frustration, criticism, skepticism, dissatisfaction
One of the best features of the ChatGPT model is that it can format the generated response exactly as the user requires. In the above example we asked for a comma-separated list of emotions, and that is exactly what we received.
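Dictating the output format pays off downstream: the comma-separated reply parses into a Python list with one line of code. Using the output shown above:

```python
# The model's comma-separated reply, copied from the output above
reply = "disappointment, frustration, criticism, skepticism, dissatisfaction"

# Because we dictated the format, parsing is straightforward
emotions = [e.strip() for e in reply.split(",")]
print(emotions)
# ['disappointment', 'frustration', 'criticism', 'skepticism', 'dissatisfaction']
```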
Now let's assume we have a passage here and we are supposed to answer some questions based on it.
Python3
# Illustrative passage (the original sample was not preserved)
passage = """The Mahabharata is a Sanskrit epic of ancient India whose
central event is the Battle of Kurukshetra, during which Lord Krishna
delivered the teachings of the Bhagavad Gita."""

prompt = f"""Identify the following items from the passage below:
sentiment, place, a short summary, the name of the central character,
and the time when the events took place. Format your response as a JSON
object with "Sentiment", "Place", "Summary", "Name" and "Time" as the
keys. If the information is not present, use "unknown" as the value.
Passage: '''{passage}'''
"""
response = get_completion(prompt)
print(response)
Output:
{
"Sentiment": "positive",
"Place": "Kurukshetra",
"Summary": "Review of the Mahabharata epic and the importance of the Battle of Kurukshetra and the teachings of the Bhagavad Gita",
"Name": "Lord Krishna",
"Time": "unknown"
}
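Since we asked for a JSON object, the reply can be loaded directly into a Python dict with the standard library. Using the output shown above:

```python
import json

# The model's JSON-formatted reply, copied from the output above
reply = '''{
    "Sentiment": "positive",
    "Place": "Kurukshetra",
    "Summary": "Review of the Mahabharata epic and the importance of the Battle of Kurukshetra and the teachings of the Bhagavad Gita",
    "Name": "Lord Krishna",
    "Time": "unknown"
}'''

info = json.loads(reply)
print(info["Place"])  # Kurukshetra
print(info["Time"])   # unknown
```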
The more detail we provide about our requirements, the better the results the model returns, as the above prompt demonstrates.