Chat With Your PDFs PART 2: Frontend - An End-to-End LangChain Tutorial

Continue learning how to build a production-ready AI chatbot that uses LangChain retrieval to search court documents.

Jan 30, 2024

By Austin Vance

In this tutorial, Austin Vance (@austinbv_codes), CEO and co-founder of Focused, will guide you through building a production-ready AI chatbot with @LangChain that uses retrieval to search court documents. From setting up the backend to deploying the frontend on DigitalOcean and LangServe, you'll end up with a fully functional chatbot. Follow along with the code on GitHub and get ready to dive into a React & TypeScript frontend built with TailwindCSS, a Python backend, and more!

The video ended up getting pretty long, so we will deploy the app to @DigitalOcean and to @LangChain's hosted LangServe platform in Part 3!

As a recap of what has happened so far, in Part 1 you learned how to:

  • Create a new app using @LangChain's LangServe
  • Ingest PDFs using @unstructuredio
  • Chunk documents with @LangChain's SemanticChunker
  • Embed the chunks using @OpenAI's embeddings API
  • Store the embedded chunks in PGVector, a Postgres-backed vector database (sketched after this list)
  • Build an LCEL chain for LangServe that uses PGVector as a retriever
  • Use the LangServe playground to test our RAG pipeline
  • Stream output, including document sources, to a future frontend
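
As a rough, minimal sketch of that Part 1 pipeline (the file path, collection name, and connection string are placeholders, not the repo's actual values):

```python
# Minimal sketch of the Part 1 ingestion pipeline; paths and connection
# details are placeholders, not the values used in the repo.
from langchain_community.document_loaders import UnstructuredPDFLoader
from langchain_community.vectorstores import PGVector
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# Load a court document with Unstructured, then split it on semantic boundaries.
docs = UnstructuredPDFLoader("court_docs/opinion.pdf").load()
chunks = SemanticChunker(embeddings).split_documents(docs)

# Embed the chunks and store them in PGVector (Postgres + pgvector).
vectorstore = PGVector.from_documents(
    documents=chunks,
    embedding=embeddings,
    collection_name="court_documents",
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/pdf_rag",
)

# The retriever the LCEL chain hands to LangServe.
retriever = vectorstore.as_retriever()
```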

In Part 2 we will focus on:

  • Creating a frontend with TypeScript, React, and Tailwind
  • Displaying the sources of information alongside the LLM output
  • Streaming to the frontend with Server-Sent Events (see the chain sketch after this list)
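
The last two points lean on a backend change covered around 8:00–9:15 in the video: the LangServe chain streams JSON that carries the answer and the retrieved documents instead of a bare string. Here is a rough sketch of that kind of chain, following the generic LangChain "return sources" pattern rather than the repo's exact code:

```python
# Sketch of an LCEL chain whose output is a dict ("question", "context", "answer")
# rather than a plain string, so the frontend can render sources. This follows the
# generic LangChain "return sources" pattern, not necessarily the repo's exact chain.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# Produces the answer text from the dict built below.
answer_chain = (
    RunnablePassthrough.assign(context=lambda x: format_docs(x["context"]))
    | prompt
    | ChatOpenAI()
    | StrOutputParser()
)

# "context" keeps the raw Document objects so their metadata (e.g. source file)
# reaches the frontend; "answer" streams token by token over /stream.
# `retriever` is the PGVector retriever from the Part 1 sketch above.
chain = (
    RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
    | RunnablePassthrough.assign(answer=answer_chain)
)
```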

In Part 3 we will focus on:

  • Deploying the backend to @DigitalOcean and to @LangChain's hosted LangServe platform, to compare the two
  • Adding LangSmith integrations (see the sketch after this list)
  • Deploying the frontend to @DigitalOcean's App Platform
  • Using a managed Postgres database
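
On the LangSmith point, tracing is usually switched on through environment variables rather than code changes; a minimal sketch (the project name is a placeholder):

```python
# LangSmith tracing is enabled via environment variables; the LangServe app
# picks them up automatically. The project name is just a placeholder.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your LangSmith API key>"
os.environ["LANGCHAIN_PROJECT"] = "pdf-rag"
```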

In Part 4 we will focus on:

  • Adding memory to the @LangChain chain with PostgreSQL
  • Adding multi-query retrieval to the chain for broader search coverage (see the sketch after this list)
  • Adding sessions to the chat history
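
Those pieces map onto existing LangChain components; as a rough preview (not the eventual Part 4 code, and with placeholder names), the building blocks look something like this:

```python
# Rough preview of the Part 4 building blocks, not the eventual tutorial code.
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_community.chat_message_histories import PostgresChatMessageHistory
from langchain_openai import ChatOpenAI

# Multi-query: the LLM rewrites the user's question into several variants and the
# union of their retrieval results is returned, widening the breadth of the search.
multi_query_retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),  # the PGVector store from the Part 1 sketch
    llm=ChatOpenAI(),
)

# Per-session chat memory persisted in the same Postgres database.
history = PostgresChatMessageHistory(
    session_id="placeholder-session-id",
    connection_string="postgresql://user:pass@localhost:5432/pdf_rag",
)
```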

GitHub repo

https://github.com/focused-labs/pdf_rag

Chapters

0:00 - Intro
2:30 - Revisit Part 1
4:45 - Inspect the LangServe Output
8:00 - Have the Backend send JSON and Documents Not A String
9:15 - Modify our LCEL Chain for JSON
11:50 - Start Thinking About the Frontend
12:40 - Create React App & Install Dependencies
13:40 - Install & Configure TailwindCSS
15:30 - Start Building the Frontend
30:50 - Start to Handle Input on The Frontend
34:20 - Start Handling Server Communication
39:00 - Dealing with CORS Errors
47:35 - Display User Messages Dynamically
55:28 - Handle Server Returned Messages
57:00 - Handle Server Returned Chunks of Messages
1:00:00 - Display Sources Below AI Messages
1:03:53 - Serve Static Documents from the LangServe Server
1:06:45 - Coming Next
1:07:00 - Revisit What We Did
1:08:00 - Outro
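
Two of those chapters (39:00, CORS, and 1:03:53, static documents) come down to small additions on the LangServe FastAPI app; a rough sketch, with the origin, directory, and route names as placeholders:

```python
# Sketch of the server-side pieces behind the CORS and static-document chapters.
# Origins, directories, and route paths are placeholders, not the repo's values.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from langserve import add_routes

app = FastAPI()

# Allow the React dev server to call the LangServe endpoints from the browser.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_methods=["*"],
    allow_headers=["*"],
)

# Serve the source PDFs so the frontend can link to them below each AI message.
app.mount("/documents", StaticFiles(directory="court_docs"), name="documents")

# Expose the RAG chain (the `chain` from the sketch above) at /rag/invoke,
# /rag/stream, /rag/playground, and so on.
add_routes(app, chain, path="/rag")
```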
