Building LLM Applications With Prompt Engineering
With the incredible capabilities of large language models (LLMs), enterprises are eager to integrate them into their products and internal applications for a wide variety of use cases, including (but not limited to) text generation, large-scale document analysis, and chatbot assistants. The fastest way to begin leveraging LLMs for diverse tasks is by using modern prompt engineering techniques. These techniques are also foundational for more advanced LLM-based methods such as Retrieval-Augmented Generation (RAG) and Parameter-Efficient Fine-Tuning (PEFT). In this workshop, learners will work with an NVIDIA NIM microservice for large language models, powered by the openly available Llama 3.1 model, alongside the popular LangChain library. The workshop provides a foundational skill set for building a range of LLM-based applications with prompt engineering.
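As a concrete illustration of this setup, below is a minimal sketch (not taken from the workshop materials) of calling a locally hosted Llama 3.1 NIM through LangChain. The model identifier and endpoint URL are assumptions that will vary with your deployment.

```python
# A minimal sketch, assuming a locally hosted Llama 3.1 NIM; the model name and
# base_url below are illustrative and depend on your deployment.
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(
    model="meta/llama-3.1-8b-instruct",   # assumed model identifier
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    temperature=0.2,
)

# Send a single prompt and print the generated text.
response = llm.invoke("Summarize the benefits of prompt engineering in two sentences.")
print(response.content)
```

The `ChatNVIDIA` class comes from the `langchain-nvidia-ai-endpoints` package; the same interface also works against NVIDIA-hosted endpoints when an API key is supplied.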
By participating in this workshop, you will:
- Understand how to apply iterative prompt engineering best practices to create LLM-based applications for various language-related tasks.
- Be proficient in using LangChain to organize and compose LLM workflows.
- Write application code to harness LLMs for generative tasks, document analysis, chatbot applications, and more.
Prerequisites: intermediate experience using Python and an understanding of LLM fundamentals
- Overview of objectives and course structure
- Fundamentals of prompt design and use
- Building and managing workflows with LCEL (see the sketch after this outline)
- Techniques for multi-turn prompts
- Designing structured outputs for consistency
- Integrating tools and building agent-based systems
- Knowledge check and recap of key concepts
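To give a flavor of the LCEL topic above, here is a minimal sketch of composing a prompt template, a chat model, and an output parser into a single chain. As before, the model identifier and endpoint URL are illustrative assumptions rather than values from the workshop materials.

```python
# A minimal LCEL sketch (not from the workshop materials). Substitute the model
# name and base_url with the values for your own NIM deployment.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(
    model="meta/llama-3.1-8b-instruct",   # assumed model identifier
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise technical assistant."),
    ("human", "Explain {topic} to a new engineer in three bullet points."),
])

# LCEL composes runnables with the | operator: prompt -> model -> parser.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"topic": "retrieval-augmented generation"}))
```

In LangChain, this same composition pattern extends naturally to multi-turn prompting, structured outputs, and tool-using agents.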