Hey there, AI enthusiasts! 🎉 I’m beyond excited to share my latest adventure into Generative AI and Agentic AI with you! Thanks to Krish Naik’s amazing YouTube playlist (in Hindi!), I’ve been diving headfirst into LangChain, and let me tell you—it’s a game-changer! Whether you’re a student, fresher, or just curious about AI, this blog is your ticket to building chatbots, automating tasks, and creating apps that feel like magic. All the code is on my GitHub repo, so you can jump in and start coding. Ready? Let’s make AI fun and accessible! 🚀
Why You Need to Learn Generative AI and Agentic AI
Picture this: You’re in an interview, and the recruiter asks, “What do you know about Agentic AI?” 😬 Don’t sweat it! These fields are hot right now, and they’re not just for PhDs. Generative AI lets you create content like text, code, or even summaries using Large Language Models (LLMs). Agentic AI takes it up a notch by automating complex workflows—like turning a YouTube video into a blog or streamlining a Software Development Life Cycle (SDLC).
Here’s why you should care:
- Huge Demand: Companies are asking about these skills in interviews.
- Endless Uses: From startups to big tech, AI is transforming every industry.
- Automation Magic: Agentic AI can save hours by handling repetitive tasks.
Big players like OpenAI, Google Gemini, Meta’s Llama, and DeepSeek are powering LLMs, while frameworks like LangChain and LangGraph make it easy to build apps without getting lost in the techy details. Let’s explore how!
LangChain: Your AI Superpower
I was blown away when I discovered LangChain—an open-source framework that makes building AI apps a breeze. Here’s why it’s a must-learn:
- Simplifies Everything: From coding to deployment, LangChain handles it all.
- Works with Any Model: Use OpenAI, Gemini, Llama, or free models like Groq with one codebase.
- Connects to Tools: Integrate with vector databases, APIs (like Tavily for news), or parse data from PDFs and websites.
- Builds Smart Agents: Create AI that automates tasks like a pro.
- Full Ecosystem: Includes LangGraph (for multi-agent systems), LangSmith (for debugging), and LangGraph Studio (for visual debugging).
With LangChain, you can build:
- Chatbots: Human-like conversational AI.
- Generative AI Apps: Create text, code, or creative content.
- Agentic AI Apps: Automate complex tasks like video-to-blog conversion.
- RAG (Retrieval Augmented Generation): Fetch external data for smarter responses.
Fun Fact: RAG is like giving your AI a library card—it grabs relevant info before answering, making it super accurate. More on that later!
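To make that concrete, here is a tiny, framework-free sketch of the retrieval step: score a few documents against the question, pick the best match, and stuff it into the prompt. The documents and the `retrieve`/`build_prompt` helpers are made up for illustration; a real RAG pipeline would use embeddings and a vector database instead of word overlap.

```python
import string

# Toy illustration of the RAG pattern (hypothetical data and helpers):
# fetch the most relevant document first, then include it in the prompt.

documents = [
    "LangChain is an open-source framework for building LLM applications.",
    "The Eiffel Tower is located in Paris, France.",
    "Groq provides fast inference for open-source LLMs.",
]

def tokenize(text):
    """Lowercase, strip punctuation, and split into a set of words."""
    return set(text.lower().translate(str.maketrans("", "", string.punctuation)).split())

def retrieve(question, docs):
    """Return the document sharing the most words with the question."""
    q_words = tokenize(question)
    return max(docs, key=lambda d: len(q_words & tokenize(d)))

def build_prompt(question, docs):
    context = retrieve(question, docs)
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is LangChain?", documents))
```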
Setting Up Your AI Playground
No job? No problem! All you need is Python and a spark of curiosity. Here’s how to set up like a pro:
Prerequisites
- Python is King: Brush up on Python basics—it’s your foundation. Don’t worry about deep Machine Learning knowledge yet; you can start building right away!
- Tools: We’ll use VS Code for coding and Jupyter Notebooks for experimenting. Keep your code modular for clean, reusable projects.
UV: The Fastest Way to Manage Packages
Forget slow `pip`! Meet UV, a Rust-powered package manager that’s 10–100x faster. It replaces tools like Poetry and virtualenv. Try these commands:
- `uv init`: Sets up a new project with `pyproject.toml` and `main.py`.
- `uv venv`: Creates a virtual environment in `.venv` by default (e.g., `uv venv --python 3.11`).
- `.venv\Scripts\activate` (Windows) or `source .venv/bin/activate` (Mac/Linux): Activates your environment.
- `uv add langchain`: Installs packages.
- `uv add -r requirements.txt`: Installs from a requirements file.
Secure API Key Management
APIs are the lifeblood of AI apps, but you’ve got to keep those keys safe! Store them in a `.env` file and load them with the `python-dotenv` library. Here’s a quick snippet:
```python
from dotenv import load_dotenv
import os

# Load variables from your .env file into the environment
load_dotenv()
api_key = os.getenv("GROQ_API_KEY")
```
Get your API keys from:
- OpenAI: platform.openai.com
- Google Gemini: aistudio.google.com
- Groq: console.groq.com
Never share your API keys! Keep them locked away in your `.env` file.
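One more habit worth building: fail fast when a key is missing. The `require_env` helper below is my own convenience sketch (not part of any library), but it turns a vague downstream error into a clear one:

```python
import os

def require_env(name):
    """Return the environment variable's value, or raise a clear error."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing environment variable: {name}. Did you create a .env file?")
    return value

# Example: api_key = require_env("GROQ_API_KEY")
```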
Building Your First LangChain Application
Now, let’s get our hands dirty with some coding! We’ll start with a simple LLM call and work our way up to a conversational chatbot with memory.
Making Your First LLM Call
Here’s how to interact with an LLM using LangChain:
```python
import os
from langchain.chat_models import init_chat_model
from langchain_core.messages import SystemMessage, HumanMessage

# Initialize the model (here, Llama 3 served by Groq)
model = init_chat_model("llama3-8b-8192", model_provider="groq", api_key=os.getenv("GROQ_API_KEY"))

# Create messages
messages = [
    SystemMessage(content="You are a helpful AI assistant."),
    HumanMessage(content="What's the capital of France?"),
]

# Get the response
response = model.invoke(messages)
print(response.content)  # Output: The capital of France is Paris.
```
Want real-time responses? Use streaming:
```python
for chunk in model.stream(messages):
    print(chunk.content, end="", flush=True)
```
Dynamic Prompt Templates
Prompts can get complex, but `ChatPromptTemplate` makes them a breeze. Here’s an example for a translation app:
```python
from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([
    ("system", "You are a professional translator. Translate from {source_language} to {target_language}, maintaining tone and style."),
    ("human", "{text}")
])

# Fill in the placeholders, then send the finished prompt to the model
prompt = template.invoke({"source_language": "English", "target_language": "Spanish", "text": "Hello, world!"})
response = model.invoke(prompt)
print(response.content)  # Hola, mundo!
```
Building Chains
Chains are where LangChain shines, letting you link prompts, LLMs, and output parsers. Here’s a fun example of a story generator and analyzer:
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda

# Story generation chain
story_template = ChatPromptTemplate.from_messages([
    ("system", "You are a creative storyteller. Write a short story based on {theme}, {character}, and {setting}."),
    ("human", "Generate a story.")
])
story_chain = story_template | model | StrOutputParser()

# Story analysis chain
analysis_template = ChatPromptTemplate.from_messages([
    ("system", "You are a literary critic. Analyze the {story} for tone and style."),
    ("human", "Analyze this story.")
])
analysis_chain = analysis_template | model | StrOutputParser()

# Combine chains: the generated story becomes the input of the analysis chain
full_chain = story_chain | RunnableLambda(lambda story: {"story": story}) | analysis_chain

# Run the chain
result = full_chain.invoke({"theme": "adventure", "character": "pirate", "setting": "Caribbean"})
print(result)
```
This chain generates a pirate adventure story and then analyzes its tone—how cool is that?
Building a Conversational Q&A Assistant with Memory
Let’s level up and build a chatbot with a sleek Streamlit interface that keeps track of your conversation. Here’s the breakdown:
Setting Up Streamlit
Install Streamlit with `uv add streamlit` and create a basic UI:
```python
import streamlit as st

st.set_page_config(page_title="AI Chatbot")
st.title("Your Friendly AI Assistant")
st.markdown("Ask me anything, and I'll respond with a smile!")

with st.sidebar:
    api_key = st.text_input("Groq API Key", type="password")
    model_name = st.selectbox("Model", ["llama3-8b-8192", "gemma2-9b-it"])
    if st.button("Clear Chat"):
        st.session_state.messages = []
```
Adding Memory with Session State
Use `st.session_state` to store chat history:
```python
if "messages" not in st.session_state:
    st.session_state.messages = []

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.write(message["content"])
```
Caching the LLM Chain
Cache your chain for speed:
```python
from langchain_groq import ChatGroq
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

@st.cache_resource
def get_chain(api_key, model_name):
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful AI assistant."),
        ("human", "{input}")
    ])
    llm = ChatGroq(model=model_name, api_key=api_key, streaming=True)
    return prompt | llm | StrOutputParser()
```
Handling User Input
Capture user questions and stream responses:
```python
if question := st.chat_input("Ask me anything"):
    st.session_state.messages.append({"role": "user", "content": question})
    with st.chat_message("user"):
        st.write(question)

    chain = get_chain(api_key, model_name)
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        full_response = ""
        for chunk in chain.stream({"input": question}):
            full_response += chunk
            message_placeholder.markdown(full_response + "▌")
        message_placeholder.markdown(full_response)
    st.session_state.messages.append({"role": "assistant", "content": full_response})
```
Run it with `streamlit run your_file.py`, and voilà—a chatbot that keeps your whole conversation on screen! One caveat: this version only sends the latest question to the model; to give the model true memory, include the previous messages from `st.session_state` in the prompt as well.
Why Agentic AI Steals the Show
While Generative AI is awesome for creating content, Agentic AI is where the real magic happens. These apps can automate entire workflows, like:
- Converting a YouTube video into a blog post.
- Managing tasks in an SDLC.
- Orchestrating multiple AI agents to tackle complex problems.
With LangGraph, you can build multi-agent systems that collaborate, making your apps smarter and more powerful. Imagine an AI team working together to solve your problems—how exciting is that?
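LangGraph deserves its own post, but the core pattern (agents reading and updating shared state until the work is done) can be sketched in plain Python. The `researcher` and `writer` agents below are hypothetical stand-ins, not the LangGraph API:

```python
# A plain-Python sketch of the multi-agent pattern: each "agent" is a
# function that reads and updates a shared state dict, and a simple
# router decides which agent runs next.

def researcher(state):
    # In a real app this agent would call an LLM or a search tool.
    state["notes"] = f"Key facts about {state['topic']}"
    state["next"] = "writer"
    return state

def writer(state):
    state["draft"] = f"Blog post draft based on: {state['notes']}"
    state["next"] = "done"
    return state

AGENTS = {"researcher": researcher, "writer": writer}

def run_workflow(topic):
    state = {"topic": topic, "next": "researcher"}
    while state["next"] != "done":
        state = AGENTS[state["next"]](state)
    return state

result = run_workflow("LangChain")
print(result["draft"])
```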
What’s Next?
This is just the beginning! I’m diving deeper into LangGraph, Crew AI, and no-code tools like Langflow. I’ll also explore RAG, debugging, and LLM evaluation techniques. All the code from this post (and more!) will be on my GitHub repo—check it out to follow along and build your own projects.
So, what are you waiting for? Grab your laptop, set up your environment, and start building AI apps that’ll make you feel like a tech superhero. Got questions or cool ideas? Drop them in the comments, and let’s keep the AI party going!
Happy coding, and see you in the next post!