Optimizing ChatGPT API: Guidelines for Prompt Engineering With Python

Hello Learners…

Welcome to the blog…

Topic: Optimizing ChatGPT API: Guidelines for Prompt Engineering With Python

Table Of Contents

  • Introduction
  • Optimizing ChatGPT API: Guidelines for Prompt Engineering With Python
  • How Can We Call an OpenAI Model?
  • How Can We Use Prompts With OpenAI Models?
  • Summary

Introduction

In this post, we discuss guidelines for optimizing the ChatGPT API through prompt engineering with Python.

When working with the OpenAI API, we have to provide additional context in the prompt, and the response we get depends on it.

Requirements:
  • Basic knowledge of the Python programming language
  • A paid OpenAI API key

First, we need an OpenAI API key, which is paid; we used a paid key here. If you want to buy one, go to the link below.

For that, you first have to create an account on the site.

Note: For personal use there is usually no need to buy it; if you have further plans, you can buy it.

Now we are ready to use OpenAI's API and try some OpenAI models.

To use the OpenAI API, we first have to install the openai Python library; we also install the python-dotenv library for managing API keys.

pip install openai
pip install python-dotenv

After installing both libraries, we create a .env file to store the API key, as OpenAI recommends.

Paste your OpenAI API key into it.
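A minimal .env file has a single line; the value shown is a placeholder for your own key:

```
OPENAI_API_KEY=your-api-key-here
```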

How Can We Call an OpenAI Model?

After that, we create an app.py file and paste the code below into it.

import openai
import os
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())  # read the local .env file
openai.api_key = os.environ["OPENAI_API_KEY"]

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # 0 makes the output deterministic
    )
    return response.choices[0].message["content"]

print(get_completion("hi"))

When we run the app.py file, we can see the output below.

#Output

Hello! How can I assist you today?

The output for “hi” is “Hello! How can I assist you today?”, generated by OpenAI's gpt-3.5-turbo model.

More about the gpt-3.5-turbo model

Now let’s take another example,

How Can We Use Prompts With OpenAI Models?

There are some methods for writing clear and specific instructions.

Write a text-summarization prompt and pass it to the model.

Here, we want to summarize a paragraph or some other text, and we can do this with prompt engineering. We use triple backticks to make it very clear to the model exactly which text it should summarize.

A delimiter can be any clear punctuation that separates a specific piece of text from the rest of the prompt. We can use any symbol that tells the model this is a separate section.

We create a summarize_text.py file and paste the code below into it.


import openai
import os
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())  # read the local .env file
openai.api_key = os.environ["OPENAI_API_KEY"]

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # 0 makes the output deterministic
    )
    return response.choices[0].message["content"]

text = """
Prompt engineering is a process of creating prompts that help large language models (LLMs) generate text,
translate languages, write different kinds of creative content, and answer your questions
in an informative way. Prompts are essentially instructions that tell the LLM what to do.
They can be as simple as a question or as complex as a set of instructions.

The goal of prompt engineering is to create prompts that are clear, concise, and effective.
The prompts should be easy for the LLM to understand, and they should result in the desired output."""

prompt_example = f"""
Summarize the text delimited by triple backticks
into a single sentence.
```{text}```
"""
response = get_completion(prompt=prompt_example)
print(response)

Here we used triple backticks, but we can use any symbol that the model can identify as marking a separate piece of text.
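For instance, the same kind of summarization prompt can be built with XML-style tags as the delimiter; the `<text>` tag name below is our own choice for illustration, not anything the API requires:

```python
# The same summarization prompt using XML-style tags as the delimiter
# (any unambiguous marker works; the model just needs to see the boundary).
text = "Prompt engineering is the process of writing clear instructions for LLMs."

prompt_example = f"""
Summarize the text delimited by <text> tags into a single sentence.
<text>{text}</text>
"""
print(prompt_example)
```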

Now run the file from the terminal:

python3 summarize_text.py

And we get the output below

#Output

Prompt engineering is the process of creating clear and concise instructions, known as prompts, that help large language models generate text, translate languages, write creative content, and answer questions in an informative way.

So this is how we can very easily summarize a paragraph or any text content.

The cost of the above two API calls (summarize_text.py and app.py) on OpenAI's “gpt-3.5-turbo” model is approximately $0.000405.
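Costs like this can be estimated from token counts; with the openai 0.x library the counts are returned in the response's usage field. A minimal sketch, where the per-1K-token prices are illustrative assumptions only (check OpenAI's pricing page for current values):

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  input_price=0.0015, output_price=0.002):
    # Prices are in USD per 1,000 tokens (illustrative values, not official).
    return (prompt_tokens * input_price + completion_tokens * output_price) / 1000

# e.g. a call that used 100 prompt tokens and 50 completion tokens
print(estimate_cost(100, 50))
```

In a real script, the token counts would come from the API response rather than being typed by hand.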

How Can We Get Structured Output From the OpenAI API?

It is very useful when we want the response in a particular structure, like HTML or JSON.

To do that, we create a structured_output.py file and paste the code below into it.

import openai
import os
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())  # read the local .env file
openai.api_key = os.environ["OPENAI_API_KEY"]

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # 0 makes the output deterministic
    )
    return response.choices[0].message["content"]

prompt_example = """
Generate a list of two made-up top-population cities in India along with their populations.
Provide them in JSON format with the following keys:
city_id, city, population.
"""
response = get_completion(prompt=prompt_example)
print(response)

Here in the prompt we clearly define what we want, and the response follows that structure.

Now we run the structured_output.py file and see the output below.

#Output

[
  {
    "city_id": 1,
    "city": "Rajnagar",
    "population": 5000000
  },
  {
    "city_id": 2,
    "city": "Suryanagar",
    "population": 4000000
  }
]

Here we can see how we get structured output just by using prompt engineering.

The cost of the above API call (structured_output.py) on OpenAI's “gpt-3.5-turbo” model is approximately $0.000194.
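Because the model returns the JSON as a plain string, we can parse it with Python's json module and work with the fields directly. A minimal sketch using the sample response shown above:

```python
import json

# Sample response text in the JSON structure the prompt requested
response = """
[
  {"city_id": 1, "city": "Rajnagar", "population": 5000000},
  {"city_id": 2, "city": "Suryanagar", "population": 4000000}
]
"""
cities = json.loads(response)  # parse the string into a list of dicts
for c in cities:
    print(c["city"], c["population"])
```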

How Can We Add Conditions in the Prompt With the OpenAI Model?

To understand how we can add conditions to the prompt, let's take an example.

First, create a file conditional_prompt.py and paste the code below into it.

import openai
import os
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())  # read the local .env file
openai.api_key = os.environ["OPENAI_API_KEY"]

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # 0 makes the output deterministic
    )
    return response.choices[0].message["content"]

text = """
Learning AI is easy! First go to the site www.galaxyofai.com, read some articles related to AI,
try to understand it, try to implement it for your projects,
And that's it! Now you have an idea of how we can implement AI in real life.
Enjoy the learning AI.
"""

prompt_example = f"""
You will be provided with text delimited by triple backticks.
If it contains a sequence of instructions,
re-write those instructions in the following format:

Step 1 - ...
Step 2 - ...
...
Step N - ...

If the text does not contain a sequence of instructions,
then simply write "No steps provided".
```{text}```
"""
response = get_completion(prompt=prompt_example)
print(response)

Here in the prompt we clearly define what we want, and the model responds accordingly.

Now we run the conditional_prompt.py file and see the output below.

#Output

Step 1 - Go to the site www.galaxyofai.com
Step 2 - Read some articles related to AI
Step 3 - Try to understand it
Step 4 - Try to implement it for your projects
Step 5 - Enjoy the learning AI.

So this is the output of the above code, and we can see the steps in it.

The cost of the above API call (conditional_prompt.py) on OpenAI's “gpt-3.5-turbo” model is approximately $0.000324.

We make some changes to the above code to see what type of output we get: we now pass text_2, which contains no sequence of instructions.

import openai
import os
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())  # read the local .env file
openai.api_key = os.environ["OPENAI_API_KEY"]

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # 0 makes the output deterministic
    )
    return response.choices[0].message["content"]

# Text from the previous example (not used in this version)
text = """
Learning AI is easy! First go to the site www.galaxyofai.com, read some articles related to AI,
try to understand it, try to implement it for your projects,
And that's it! Now you have an idea of how we can implement AI in real life.
Enjoy the learning AI.
"""

text_2 = """
Prompt engineering is the process of creating clear and concise instructions, known as prompts,
that help large language models generate text, translate languages,
write creative content, and answer questions in an informative way.
"""

prompt_example = f"""
You will be provided with text delimited by triple backticks.
If it contains a sequence of instructions,
re-write those instructions in the following format:

Step 1 - ...
Step 2 - ...
...
Step N - ...

If the text does not contain a sequence of instructions,
then simply write "No steps provided".
```{text_2}```
"""
response = get_completion(prompt=prompt_example)
print(response)

The output of the above code snippet is as below:

#Output

No steps provided

The cost of the above API call (conditional_prompt.py) on OpenAI's “gpt-3.5-turbo” model is approximately $0.000185.

What Can We Do With Prompt Engineering?

Prompt engineering can be used for a variety of tasks, including:

  • Generating text:
    • Prompts can be used to generate text, such as poems, code, scripts, musical pieces, emails, letters, etc.
  • Translating languages:
    • Prompts can be used to translate languages, such as English to Spanish, French to German, etc.
  • Writing different kinds of creative content:
    • Prompts can be used to write different kinds of creative content, such as stories, essays, poems, etc.
  • Answering questions:
    • Prompts can be used to answer questions, such as “What is the capital of India?” or “What is the meaning of life?”
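The task types above can be sketched as a small set of prompt templates; the wording of each template is our own illustration, not a required format:

```python
# Simple prompt templates for common task types
task_prompts = {
    "generate": "Write a four-line poem about {topic}.",
    "translate": "Translate the following English text to Spanish: ```{text}```",
    "creative": "Write a short story about {topic}.",
    "question": "Answer the question concisely: {question}",
}

# Fill a template before passing it to a completion helper like get_completion()
prompt = task_prompts["translate"].format(text="Good morning")
print(prompt)
```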

Summary

So this is how we can use prompt engineering. There are many approaches to prompt engineering, and based on our requirements we can create different types of prompts.
