
Don’t Make Your Users Become Prompt Engineers: Build Web Apps Around LLMs

Discover how to make your AI app more approachable by abstracting prompt engineering away from your users. Learn practical tips for designing web applications around LLMs that improve the user experience.

Jan 1, 1970

By Katy G


There’s a ton of hubbub around LLMs lately, with multitudes of people showing up and showing how to build custom domain-specific LLMs (including us). This is super cool, informative, and powerful. However, what I haven’t seen a lot of talk about is how to abstract the prompting away from the user and into a web application, making LLMs more digestible for the average person. Let’s explore this crucial and often overlooked aspect: how to make LLMs more user-friendly.

What is so Hard about Prompting?

The human experience is diverse, words are hard, and semantics can be challenging. Have you ever written an email and then asked for feedback before sending it? I have, and I received some really strange interpretations, suggestions, and refinements on what I thought was a fairly straightforward email. Now, that could mean I’m weird and I write weird, but in reality I think it means that most people can read the same thing and come away with different meanings. There’s a reason we normalize text before using it in Natural Language Processing tasks.

The Problem: Prompt Engineering Overload

We are asking every person who sits down with an LLM to become a prompt engineer. You will find countless blogs out there about how to write prompts, how to best use ChatGPT, how to structure your questions, and so on. This is hard. It can take folks several iterations to get the result they want, and that’s a common reason I’ve seen people give up on LLMs. “It answered wrong.” Well, yes, sometimes it does that, but maybe the way you’re asking the question is not lending itself to the “desired” answer. Prompting, in our experience, has been one of the most difficult areas in developing our custom domain-specific chatbot. Altering the prompt can result in a significantly different response from the LLM and can impact an LLM’s “accuracy.”

The Solution: Abstracting Prompts with Web Applications

Abstract the prompting away from your users to really harness the power of LLMs in the form of web applications. What does this mean? It means your users should not be writing the prompts; the AI development team should be. A web application is a perfect medium in which to design a user flow that allows interaction with LLMs but does not force the user to be knowledgeable in prompting. What does this look like?
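In practice, the pattern can be as simple as keeping a tested, versioned prompt template in application code and wrapping whatever the user types inside it. Here’s a minimal sketch in TypeScript; `SYSTEM_PROMPT` and `buildPrompt` are illustrative names, not part of any real library:

```typescript
// The prompt lives with the development team, not the user.
// It can be tested, iterated on, and versioned like any other code.
const SYSTEM_PROMPT = `You are a recipe assistant for our food blog.
Answer only using the provided recipes. If nothing matches, say so.`;

interface ChatMessage {
  role: "system" | "user";
  content: string;
}

// Wrap the raw user question in the team-authored prompt, so the user
// never has to learn prompt engineering themselves.
function buildPrompt(userQuestion: string): ChatMessage[] {
  return [
    { role: "system", content: SYSTEM_PROMPT },
    { role: "user", content: userQuestion.trim() },
  ];
}
```

The resulting message array would then be sent to whatever LLM API your app uses; the user only ever sees the text input.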

The example:

Let’s say your LLM product is a small custom domain-specific LLM that scrapes recipes from your own personal food blog. This is a really fun way to allow users to ask questions, synthesize, and interact with your large collection of proprietary recipes. You build the app, with a text input where users can ask questions, get recipes, and more, and launch it!

Some time goes by and you notice that users aren’t using your application, so you do a quick heuristic interview with your users, and this is the feedback you get:

Interviewee:

“I wanted to get all the most popular recipes that included corn, but it kept giving me popcorn recipes! I became frustrated and decided just to use Google to find the corn recipe I wanted.”

Now, the obvious solution is probably for the user to refine the input prompt to something like “give me the most popular corn recipes, do not include any recipes that are about popcorn.”

But why are we asking our users to do these cumbersome tasks, when it takes time, energy, and thought that they don’t expect to bring to the product? They want to search and get only corn recipes, duh. Our product is not efficient, useful, or helpful to the user, and it will not be used.

Now, let’s say we rebuilt the product with an interactive list of ingredients, spices, or cuisine types that a user could click on to return only recipes matching those criteria. We would still harness the power of LLMs to return the correct recipe or answer, but we would be abstracting the prompts behind selectable components that we’ve tested, iterated on, and found to return the best answers.
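The clickable components above could translate into a prompt the team has already tested, including the exclusions we learned about from user feedback. A hypothetical sketch (the `RecipeFilters` shape and `buildRecipeQuery` function are invented for illustration):

```typescript
// Filters populated by the UI: the user clicks chips, never writes a prompt.
interface RecipeFilters {
  ingredients: string[];
  cuisine?: string;
  exclude?: string[]; // e.g. ["popcorn"], learned from the corn/popcorn feedback
}

// Assemble the team-authored, pre-tested prompt from the selections.
function buildRecipeQuery(filters: RecipeFilters): string {
  const parts = [
    `Return the most popular recipes that use: ${filters.ingredients.join(", ")}.`,
  ];
  if (filters.cuisine) {
    parts.push(`Only include ${filters.cuisine} recipes.`);
  }
  if (filters.exclude?.length) {
    parts.push(`Do not include recipes about: ${filters.exclude.join(", ")}.`);
  }
  return parts.join(" ");
}
```

Because the prompt is assembled in code, the popcorn exclusion only has to be discovered and fixed once, by the team, instead of by every frustrated user.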

Result:

A happy user. Simplifying interactions with LLMs is a game-changer. The proposed web application approach transforms the user experience, making LLMs more approachable. Let's build technology that empowers users without overwhelming them, paving the way for a new era of seamless interaction with Large Language Models.
