Developing Effective Prompts for Applications with Large Language Models


I don't think I would ever have arrived at the prompt I ended up using in the final application on my first attempt. And that's fine: as long as you have a good process for iteratively making your prompt better, you will be able to come up with something that works well for the task you want to achieve.

To be fair, the chance of a prompt working well the first time may be a little higher for simple tasks, but it doesn't really matter whether the first prompt works. What matters most is the process for getting to prompts that work for your application. So let's jump into the code, and let me show you some frameworks for how to iteratively develop a prompt.

Right, so if you've taken a machine learning class with me before, you may have seen me use a diagram saying that with machine learning development, you often have an idea and then implement it: write the code, get the data, train your model, and that gives you an experimental result. You can then look at that output, maybe do error analysis, figure out where it's working or not working, and then maybe even change your idea of exactly what problem you want to solve or how to approach it, change the implementation, run another experiment, and so on, iterating over and over to get to an effective machine learning model. If you're not familiar with machine learning and haven't seen this diagram before, don't worry about it; it's not that important for the rest of this presentation.

When you're writing prompts to develop an application using an LLM, the process can be quite similar. You have an idea for what you want to do, the task you want to complete, and you can then take a first attempt at writing a prompt that hopefully is clear and specific and, if appropriate, gives the system time to think. Then you run it and see what result you get. If it doesn't work well enough the first time, the iterative process of figuring out why the instructions, for example, were not clear enough, or why the algorithm wasn't given enough time to think, allows you to refine the idea, refine the prompt, and so on, going around this loop until you end up with a prompt that works for your application. This is also why I personally have not paid as much attention to the internet articles promising the perfect prompt, because I don't think there is a perfect prompt for everything under the sun. It's more important to have a process for developing a good prompt for your specific application.
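The idea–prompt–result–refine loop described above can be sketched as a tiny harness. Everything here is illustrative rather than from the course notebook: `run_model` is a stub standing in for a real LLM call, and the word-count check is just one example of an evaluation criterion you might apply during error analysis.

```python
def run_model(prompt):
    # Stand-in for a real LLM call; it returns a long draft unless
    # the prompt constrains the length, mimicking typical behavior.
    if "50 words" in prompt:
        return "word " * 48
    return "word " * 120

def too_long(response, limit=50):
    # One example evaluation criterion: a word budget.
    return len(response.split()) > limit

prompt = "Write a product description based on the fact sheet below."
response = run_model(prompt)
while too_long(response):
    # Error analysis: the output is too long, so refine the prompt
    # with clearer guidance and try again.
    prompt += " Use at most 50 words."
    response = run_model(prompt)
```

The point is the loop structure, not the stub: in practice you inspect the output yourself, decide what went wrong, and edit the prompt accordingly.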

So let's look at an example together in code. I have here the starter code that you saw in the previous videos: we get the OpenAI API key, and this is the same get_completion helper function that you saw before. I'm going to use as the running example in this video the task of summarizing a fact sheet for a chair. So let me just paste that in here, and feel free to pause the video and read this more carefully in the notebook if you want. But here's a fact sheet for a chair, with a description saying it's part of a beautiful family of mid-century inspired office furniture, and so on; it talks about the construction, and it has the dimensions, options for the chair, materials, and so on. Now let's say you want to take this fact sheet and help a marketing team write a description for an online retail website.

Let me just quickly run those cells, and then we'll come up with our prompt as follows, and I'll just paste this in. My prompt here says: your task is to help a marketing team create the description for a retail website of a product based on a technical fact sheet; write a product description; and so on. This is my first attempt to explain the task to the large language model. Let me hit Shift-Enter, and this takes a few seconds to run, and we get this result. It looks like it's done a nice job writing a description, introducing this mid-century inspired office chair and so on. It's done exactly what I asked it to, which is start from the technical fact sheet and write a product description. But when I look at this, I go, boy, this is really long. Maybe we want it to be a little bit shorter. So I had an idea, I wrote a prompt, and I got a result that I'm not that happy with because it's too long. I will then clarify my prompt and say, use at most 50 words, to try to give better guidance on the desired length. And let's run it again.
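That first attempt can be sketched in a few lines. The fact sheet here is abbreviated to details mentioned in the lesson, and the `get_completion` call (the helper from the course's starter code) is left commented out since it requires an API key; only the prompt construction is shown.

```python
# Abbreviated stand-in for the chair fact sheet used in the lesson.
fact_sheet_chair = """
OVERVIEW
- Part of a beautiful family of mid-century inspired office furniture.
- Several options of shell color and base finishes.
CONSTRUCTION
- 5-wheel plastic coated aluminum base.
- Pneumatic chair adjust for easy raise/lower action.
"""

def make_prompt(fact_sheet):
    # Instruction first, then the technical specs set off by delimiters
    # so the model can tell the data apart from the task description.
    return f"""
Your task is to help a marketing team create a description
for a retail website of a product based on a technical fact sheet.

Write a product description based on the information provided
in the technical specifications delimited by triple backticks.

Technical specifications: ```{fact_sheet}```
"""

prompt = make_prompt(fact_sheet_chair)
# response = get_completion(prompt)  # actual LLM call; needs an API key
```

Keeping the prompt in a small builder function like this makes the refinements in the rest of the lesson easy to layer on.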
Okay, this is a much nicer short description of the product: a mid-century inspired office chair, and so on, both stylish and practical. Not bad. Let me double-check the length. I'm going to take the response, split it on spaces, and then print the length. So it's 52 words: actually, not bad. Large language models are okay, but not that great, at following instructions about a very precise word count. Sometimes it will print out something with 60 or 65 words and so on, but it's within reason. Another thing you could try is to say, use at most three sentences. If you run that, it works okay too; these are different ways to tell the large language model what length of output you want. So this did a pretty good job. And I've also seen people sometimes do things like, use at most 280 characters. Large language models, because of the way they interpret text, using something called a tokenizer, which I won't talk about here, tend to be so-so at counting characters. But let's see: it did 281 characters, which is actually surprisingly close; usually a large language model isn't that precise. Let me switch it back to use at most 50 words, and there's the result we had just now.

As we continue to refine this text for our website, we might decide that this website isn't selling directly to consumers; it's actually intended to sell furniture to furniture retailers, who would be more interested in the technical details of the chair and the materials it's made of. In that case, you take this prompt and modify it to be more precise about the technical details. So let me keep on modifying this prompt. I'm going to say, this description is intended for furniture retailers, so it should be technical and focus on the materials the product is constructed from. Well, let's run that and see what it says.
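The length check and the alternative phrasings above can be sketched like this; the sample response string is invented for illustration, and the lesson's point is that word or sentence limits tend to be followed more reliably than character counts, since models process tokens rather than characters.

```python
# A made-up short response, standing in for the model's output.
response = ("Mid-century inspired office chair with a 5-wheel "
            "plastic coated aluminum base and pneumatic adjust.")

# Count words the way the lesson does: split on spaces.
word_count = len(response.split(" "))
print(word_count)

# Different ways to steer output length, roughly from most to
# least reliably followed.
length_instructions = [
    "Use at most 50 words.",
    "Use at most 3 sentences.",
    "Use at most 280 characters.",  # tokenization makes this the shakiest
]
```

Whichever phrasing you pick, treat the limit as approximate and verify the output yourself, as the 52-word result above shows.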
Indeed: it talks about the coated aluminum base and the pneumatic chair, high-quality materials. So by changing the prompt, you can get it to focus more on the specific characteristics you want it to. And when I look at this, I might decide that at the end of the description, I also want to include the product IDs: the two offerings of this chair, SWC-110 and SWC-100. So I can improve this prompt further and get it to give me the product IDs by adding this instruction: at the end of the description, include every 7-character product ID in the technical specification. Let's run it and see what happens. And so it says, introducing our mid-century inspired office chair, it tells us about the plastic coated aluminum base, it's practical, some options, and it talks about the two product IDs. So this looks pretty good.

What you've just seen is a short example of iterative prompt development as a process: try something, see how it does not yet fulfill exactly what you want, and then think about how to clarify your instructions, or, in some cases, think about how to give it more space to think, to get it closer to delivering the results that you want. So when you're ready, let's go on to the next video.
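The two refinements from this step can be layered onto a base prompt as simple string concatenation. This is a sketch; the appended instructions are paraphrased from the lesson rather than copied verbatim.

```python
def refine_prompt(base_prompt):
    # Refinement 1: name the audience so the tone and content shift.
    # Refinement 2: ask for the product IDs explicitly at the end.
    return base_prompt + """
The description is intended for furniture retailers, so it should
be technical in nature and focus on the materials the product is
constructed from.

At the end of the description, include every 7-character Product ID
in the technical specification.
"""

base = ("Your task is to help a marketing team create a "
        "product description based on a technical fact sheet.")
prompt = refine_prompt(base)
```

Each refinement is a small, reviewable addition, which is exactly the iterative rhythm the lesson is demonstrating.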
