Python FastAPI With OpenAI’s Chat Models

Hello Learners…

Welcome to the blog…

Topic: Python FastAPI With OpenAI’s Chat Models

Table Of Contents

  • Introduction
  • What is FastAPI?
  • Python FastAPI With OpenAI’s Chat Models
  • Summary

Introduction

This post discusses how to build a Python FastAPI application around OpenAI's chat models.

Here, we will explore how to combine the flexibility and efficiency of Python FastAPI with the immense capabilities of OpenAI’s chat models.

What is FastAPI?

FastAPI is a modern, high-performance web framework that allows developers to quickly build and deploy robust APIs. OpenAI's chat models offer powerful conversational AI capabilities, making the two a perfect match for developing intelligent chat applications.

Python FastAPI With OpenAI’s Chat Models

Now we will walk through the step-by-step process of integrating OpenAI's chat models into a FastAPI application, enabling you to explore the potential of AI-driven conversations.

Before jumping into FastAPI, we first have to do some basic setup.

Install Required Libraries

First, we have to install the required Python libraries. We can install them using the pip package manager.

pip install fastapi

pip install uvicorn

pip install openai

pip install python-dotenv

Setting Up OpenAI’s API Key

After installing the required libraries, we have to set up an OpenAI API key. Here, we are using a paid OpenAI API key.

Note: You don't need to buy a paid OpenAI key for personal use cases; if you have further plans, you can buy one then.

So first, we create a .env file and put our API key in it as shown below.
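The .env file needs just one line, mapping the environment variable our code will read (OPENAI_API_KEY) to your key. The value shown here is a placeholder, not a real key:

```
OPENAI_API_KEY=sk-your-api-key-here
```

When the application starts, python-dotenv loads this file into the process environment, so the key never has to be hard-coded in the source.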

Create a FastAPI App for OpenAI’s Chat Models

After that, we are going to create a FastAPI app around OpenAI’s chat models. Here we are using OpenAI’s gpt-3.5-turbo model, one of the most capable models available at the time of writing; you can also try other models from OpenAI’s lineup.

Now we create a chat_model.py file and paste the code below into it.

from fastapi import FastAPI
import uvicorn
import openai
import os
from dotenv import load_dotenv, find_dotenv

# Read the local .env file and load the API key into the environment.
load_dotenv(find_dotenv())
openai.api_key = os.environ["OPENAI_API_KEY"]

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.post("/chat_model")
def chat_model(msg_text: str):
    # Forward the user's message to OpenAI's gpt-3.5-turbo chat model,
    # with a system prompt that sets the assistant's behavior.
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": msg_text},
        ],
    )
    # Extract the assistant's reply from the first completion choice.
    result = completion.choices[0].message["content"]
    return {"model_response": result}

if __name__ == "__main__":
    uvicorn.run(app)

Now run the file,

python3 chat_model.py

[Screenshot: server startup output in the terminal]

Here we get a URL: http://127.0.0.1:8000

In the above code we use the POST method, so to make a request to this API we can use Postman or FastAPI's Swagger UI.

To open FastAPI's Swagger UI, append /docs to the base URL,

  • http://127.0.0.1:8000/docs
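You can also call the endpoint from code. Because msg_text is declared as a plain str parameter, FastAPI treats it as a query parameter rather than a request body, so the message is carried in the URL itself. A minimal sketch (the actual request is commented out, since it needs the server running):

```python
from urllib.parse import urlencode

# The endpoint takes msg_text as a query parameter on the POST route,
# so we build the full request URL with the message encoded into it.
base = "http://127.0.0.1:8000/chat_model"
url = f"{base}?{urlencode({'msg_text': 'What is AI?'})}"
print(url)  # http://127.0.0.1:8000/chat_model?msg_text=What+is+AI%3F

# With the server running, you could send the request with the
# `requests` library (assumed installed):
#   import requests
#   response = requests.post(url)
#   print(response.json()["model_response"])
```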

We should see the interface below, including the /chat_model endpoint that we created in the Python code.

[Screenshot: FastAPI's Swagger UI listing the endpoints]

Now click on /chat_model, and we will see the interface below.

[Screenshot: the /chat_model endpoint expanded in Swagger UI]

Here we click on the Try it out button; after that, we can enter our text in the msg_text field.

And now click on Execute, and we can see our response,

So this is the response of OpenAI's gpt-3.5-turbo chat model.

#Model response


{
  "model_response": "AI, or Artificial Intelligence, refers to the development of computer programs and systems that can perform tasks and solve problems that normally require human intelligence, such as perception, reasoning, learning, decision-making, and natural language processing. AI technologies include machine learning, deep learning, natural language processing, robotics, expert systems, and cognitive computing. The goal of AI is to create systems capable of learning and improving, leading to increased efficiency, accuracy, and productivity in various fields and industries."
}

The above API call (chat_model.py) cost approximately $0.000824 in total; we ran the code 3-4 times, and this charge covers all of those API calls.
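Charges like this can be estimated from token counts, which the API reports back with each completion. A rough sketch of the arithmetic, using illustrative per-1K-token rates (these are placeholders, not official figures; check OpenAI's pricing page for current numbers):

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  prompt_rate=0.0015, completion_rate=0.002):
    """Estimate the cost of one chat completion in USD.

    Rates are per 1,000 tokens and are illustrative placeholders;
    look up the current pricing for the model you actually use.
    """
    return (prompt_tokens / 1000) * prompt_rate \
        + (completion_tokens / 1000) * completion_rate

# The API response includes a usage section with the real token counts.
# For a ~25-token prompt and a ~90-token reply:
print(estimate_cost(25, 90))  # ≈ 0.0002175 USD
```

Summing these estimates across requests gives a running total you can log or cap before the bill surprises you.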

It is worth noting that using OpenAI’s Chat Models requires proper management of API tokens and rate limits to ensure secure and cost-effective usage. Additionally, monitoring and filtering user inputs and responses are essential to maintain ethical and responsible AI practices.

Summary

In summary, the integration of Python FastAPI with OpenAI’s Chat Models empowers developers to build sophisticated chat-based applications that leverage the power of AI and natural language processing. This combination enables intelligent, interactive, and dynamic conversations, providing users with personalized and engaging experiences in various domains.
