
Building AI Applications with OpenAI API

Dimuthu Wayaman
December 28, 2025
Tags: OpenAI, GPT, API, AI, Python, ChatGPT


OpenAI's API provides access to powerful language models like GPT-4 that can understand and generate human-like text. This guide shows you how to integrate these capabilities into your applications.

Getting Started

Prerequisites

  1. OpenAI account and API key
  2. Python 3.8+
  3. Basic understanding of REST APIs

Installation

pip install openai python-dotenv

Configuration

import os

from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()
client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))
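For load_dotenv() to find the key, store it in a .env file in the project root. A minimal sketch, assuming the variable name used above (the value is a placeholder; keep this file out of version control):

# .env
OPENAI_API_KEY=sk-your-key-here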

Basic Chat Completion

def chat_with_gpt(prompt: str, model: str = "gpt-4") -> str:
    """Send a message to GPT and get a response."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt}
        ],
        max_tokens=1000,
        temperature=0.7
    )
    return response.choices[0].message.content

# Usage
response = chat_with_gpt("Explain quantum computing in simple terms")
print(response)

Building a Chatbot

class ChatBot:
    def __init__(self, system_prompt: str = "You are a helpful assistant."):
        self.client = OpenAI()
        self.conversation_history = [
            {"role": "system", "content": system_prompt}
        ]

    def chat(self, user_message: str) -> str:
        """Send message and maintain conversation context."""
        self.conversation_history.append({
            "role": "user",
            "content": user_message
        })
        response = self.client.chat.completions.create(
            model="gpt-4",
            messages=self.conversation_history,
            max_tokens=1000,
            temperature=0.7
        )
        assistant_message = response.choices[0].message.content
        self.conversation_history.append({
            "role": "assistant",
            "content": assistant_message
        })
        return assistant_message

    def reset(self):
        """Clear conversation history, keeping the system prompt."""
        self.conversation_history = self.conversation_history[:1]

# Usage
bot = ChatBot("You are a Python programming expert.")
print(bot.chat("How do I read a CSV file?"))
print(bot.chat("What if the file is very large?"))  # Maintains context

Function Calling

import json

def get_current_weather(location: str, unit: str = "celsius") -> dict:
    """Simulated weather API call."""
    return {
        "location": location,
        "temperature": 22,
        "unit": unit,
        "condition": "sunny"
    }

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and country, e.g., London, UK"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"]
                    }
                },
                "required": ["location"]
            }
        }
    }
]

def chat_with_functions(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": user_message}],
        tools=tools,
        tool_choice="auto"
    )
    message = response.choices[0].message

    if message.tool_calls:
        tool_call = message.tool_calls[0]
        function_name = tool_call.function.name
        function_args = json.loads(tool_call.function.arguments)

        if function_name == "get_current_weather":
            result = get_current_weather(**function_args)

            # Send the function result back to GPT
            second_response = client.chat.completions.create(
                model="gpt-4",
                messages=[
                    {"role": "user", "content": user_message},
                    message,
                    {
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": json.dumps(result)
                    }
                ]
            )
            return second_response.choices[0].message.content

    return message.content

# Usage
print(chat_with_functions("What's the weather like in Tokyo?"))

Image Generation with DALL-E

def generate_image(prompt: str, size: str = "1024x1024") -> str:
    """Generate an image from a text prompt."""
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size=size,
        quality="standard",
        n=1
    )
    return response.data[0].url

# Usage
image_url = generate_image("A futuristic city with flying cars at sunset")
print(image_url)

Text Embeddings

import numpy as np

def get_embedding(text: str) -> list:
    """Get embedding vector for text."""
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=text
    )
    return response.data[0].embedding

def cosine_similarity(vec1: list, vec2: list) -> float:
    """Calculate cosine similarity between two vectors."""
    return np.dot(vec1, vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2))

# Semantic search example
documents = [
    "Python is a programming language",
    "Machine learning uses algorithms",
    "Cats are popular pets"
]

query = "AI and data science"
query_embedding = get_embedding(query)

# Find the most similar document
similarities = []
for doc in documents:
    doc_embedding = get_embedding(doc)
    similarity = cosine_similarity(query_embedding, doc_embedding)
    similarities.append((doc, similarity))

sorted_docs = sorted(similarities, key=lambda x: x[1], reverse=True)
print("Most relevant:", sorted_docs[0][0])

Building a Q&A System with RAG

import numpy as np

class SimpleRAG:
    def __init__(self):
        self.client = OpenAI()
        self.documents = []
        self.embeddings = []

    def add_document(self, text: str):
        """Add document to knowledge base."""
        embedding = self.get_embedding(text)
        self.documents.append(text)
        self.embeddings.append(embedding)

    def get_embedding(self, text: str) -> list:
        response = self.client.embeddings.create(
            model="text-embedding-3-small",
            input=text
        )
        return response.data[0].embedding

    def find_relevant_docs(self, query: str, top_k: int = 3) -> list:
        """Find most relevant documents for a query."""
        query_embedding = self.get_embedding(query)
        similarities = []
        for i, emb in enumerate(self.embeddings):
            sim = np.dot(query_embedding, emb) / (
                np.linalg.norm(query_embedding) * np.linalg.norm(emb)
            )
            similarities.append((self.documents[i], sim))
        sorted_docs = sorted(similarities, key=lambda x: x[1], reverse=True)
        return [doc for doc, _ in sorted_docs[:top_k]]

    def answer_question(self, question: str) -> str:
        """Answer question using relevant documents."""
        relevant_docs = self.find_relevant_docs(question)
        context = "\n\n".join(relevant_docs)
        response = self.client.chat.completions.create(
            model="gpt-4",
            messages=[
                {
                    "role": "system",
                    "content": "Answer questions based on the provided context. If the answer isn't in the context, say so."
                },
                {
                    "role": "user",
                    "content": f"Context:\n{context}\n\nQuestion: {question}"
                }
            ]
        )
        return response.choices[0].message.content

# Usage
rag = SimpleRAG()
rag.add_document("Python was created by Guido van Rossum in 1991.")
rag.add_document("JavaScript is the language of the web.")
rag.add_document("OpenAI was founded in 2015 in San Francisco.")

answer = rag.answer_question("When was Python created?")
print(answer)

Streaming Responses

def stream_response(prompt: str):
    """Stream response tokens as they're generated."""
    stream = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        stream=True
    )
    for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()  # New line at end

# Usage
stream_response("Write a haiku about programming")

Best Practices

  1. Prompt Engineering: Write clear, specific prompts
  2. Temperature Control: Use lower values (e.g., 0.1) for factual tasks and higher values (e.g., 0.9) for creative ones
  3. Token Management: Monitor usage and set max_tokens appropriately
  4. Error Handling: Implement retries with exponential backoff (see the sketch after this list)
  5. Rate Limiting: Respect API limits and implement request queuing
  6. Cost Optimization: Use cheaper models for simple tasks
  7. Security: Never expose API keys in client-side code
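As a sketch of points 4 and 5, here is one way to wrap a chat call in retries with exponential backoff. It assumes the v1 Python SDK, which exports RateLimitError and APIError; the retry count and delay values are illustrative, not official recommendations:

import time

from openai import OpenAI, APIError, RateLimitError

client = OpenAI()

def chat_with_retries(prompt: str, max_retries: int = 5) -> str:
    """Retry transient failures with exponential backoff (illustrative values)."""
    delay = 1.0  # initial wait in seconds
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": prompt}]
            )
            return response.choices[0].message.content
        except (RateLimitError, APIError):
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(delay)
            delay *= 2  # double the wait before the next attempt

In production you would typically add random jitter to the delay so that many clients backing off in lockstep do not all retry at the same moment.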

Conclusion

OpenAI's API opens up endless possibilities for AI-powered applications. From chatbots to image generation to semantic search, these tools can add intelligence to any application. Start experimenting and build something amazing!


About Dimuthu Wayaman

Mobile Application Developer and UI Designer specializing in Flutter development. Passionate about creating beautiful, functional mobile applications and sharing knowledge with the developer community.