How-to guides

Here you’ll find answers to “How do I…?” types of questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task. For conceptual explanations see the Conceptual guide. For end-to-end walkthroughs see Tutorials. For comprehensive descriptions of every class and function see the API Reference.

Installation

Key features

This highlights functionality that is core to using LangChain.

LangChain Expression Language (LCEL)

LangChain Expression Language is a way to create arbitrary custom chains. It is built on the Runnable protocol.

LCEL cheatsheet: For a quick overview of how to use the main LCEL primitives.
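
As a minimal sketch (assuming the langchain-openai integration package, an OpenAI API key in the environment, and an example model name; any chat model integration could be substituted), the | operator composes Runnables into a chain:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-4o-mini")

# The | operator composes Runnables; the resulting chain is itself a Runnable
# with invoke, batch, stream, and async variants.
chain = prompt | model | StrOutputParser()
print(chain.invoke({"topic": "bears"}))
```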

Components

These are the core building blocks you can use when building applications.

Prompt templates

Prompt Templates are responsible for formatting user input so that it can be passed to a language model.
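
For example, a minimal sketch using a chat prompt template (the translation prompt is illustrative):

```python
from langchain_core.prompts import ChatPromptTemplate

# A system instruction plus a templated user message.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that translates English into {language}."),
    ("user", "{text}"),
])

# invoke fills in the variables and returns a prompt value wrapping the formatted messages.
messages = prompt.invoke({"language": "French", "text": "I love programming."})
print(messages.to_messages())
```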

Example selectors

Example Selectors are responsible for selecting the correct few-shot examples to pass to the prompt.
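
As a rough sketch, a custom selector only needs select_examples and add_example; the class name and length-based heuristic below are illustrative, not part of LangChain:

```python
from langchain_core.example_selectors import BaseExampleSelector


class ClosestLengthExampleSelector(BaseExampleSelector):
    """Hypothetical selector: pick the example whose input is closest in length."""

    def __init__(self, examples):
        self.examples = examples

    def add_example(self, example):
        self.examples.append(example)

    def select_examples(self, input_variables):
        target = len(input_variables["input"])
        return [min(self.examples, key=lambda ex: abs(len(ex["input"]) - target))]


selector = ClosestLengthExampleSelector(
    [{"input": "hi", "output": "ciao"}, {"input": "how are you", "output": "come stai"}]
)
print(selector.select_examples({"input": "hello"}))
```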

Chat models

Chat Models are newer forms of language models that take messages in and output a message.
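
A minimal sketch (again assuming langchain-openai; messages go in, an AIMessage comes back):

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")

messages = [
    SystemMessage(content="Translate the following from English into Italian."),
    HumanMessage(content="Hello, how are you?"),
]

response = model.invoke(messages)  # returns an AIMessage
print(response.content)
```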

LLMs

What LangChain calls LLMs are older forms of language models that take a string in and output a string.
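
A minimal sketch of the string-in, string-out interface (assuming langchain-openai and a completion-style model):

```python
from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct")
print(llm.invoke("Write a haiku about the sea."))  # returns a plain string
```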

Output parsers

Output Parsers are responsible for taking the output of an LLM and parsing it into a more structured format.
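
For example, a sketch that parses a model's JSON answer into a Python dict (the prompt and model name are illustrative):

```python
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Return a JSON object with keys 'name' and 'capital' for the country {country}."
)

# The parser turns the model's text output into a Python dict.
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | JsonOutputParser()
print(chain.invoke({"country": "France"}))
```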

Document loaders

Document Loaders are responsible for loading documents from a variety of sources.
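
A minimal sketch using the plain-text loader from langchain-community (the file path is hypothetical):

```python
from langchain_community.document_loaders import TextLoader

loader = TextLoader("my_notes.txt")  # hypothetical local file
docs = loader.load()  # a list of Document objects with page_content and metadata
print(docs[0].metadata, docs[0].page_content[:100])
```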

Text splitters

Text Splitters take a document and split it into chunks that can be used for retrieval.
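
A minimal sketch using the recursive character splitter (chunk sizes are illustrative):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

text = "LangChain is a framework for developing applications powered by language models. " * 20

splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=20)
chunks = splitter.split_text(text)          # for raw strings
# chunks = splitter.split_documents(docs)   # for Document objects from a loader
print(len(chunks), chunks[0])
```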

Embedding models

Embedding Models take a piece of text and create a numerical representation of it.
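
A minimal sketch (assuming langchain-openai; the model name is one example of many):

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

vector = embeddings.embed_query("What is LangChain?")          # one text -> one vector
vectors = embeddings.embed_documents(["doc one", "doc two"])   # many texts -> many vectors
print(len(vector), len(vectors))
```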

Vector stores

Vector stores are databases that can efficiently store and retrieve embeddings.
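
A minimal sketch using the FAISS integration (assuming the faiss-cpu package and OpenAI embeddings; any vector store integration works the same way):

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

vectorstore = FAISS.from_texts(
    ["LangChain helps build LLM apps", "FAISS keeps vectors in memory", "Cats are mammals"],
    embedding=OpenAIEmbeddings(),
)

results = vectorstore.similarity_search("What is LangChain for?", k=1)
print(results[0].page_content)
```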

Retrievers

Retrievers are responsible for taking a query and returning relevant documents.
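
Continuing the vector store sketch above, any vector store can be exposed as a retriever:

```python
# A retriever is a Runnable that maps a string query to a list of Documents.
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})
docs = retriever.invoke("What is LangChain for?")
print([d.page_content for d in docs])
```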

Indexing

Indexing is the process of keeping your vector store in sync with the underlying data source.
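
A rough sketch of the indexing API, reusing the vectorstore from the sketch above (this assumes a vector store that supports deleting by ID; see the indexing guide for compatibility) with a local SQLite-backed record manager:

```python
from langchain.indexes import SQLRecordManager, index
from langchain_core.documents import Document

# The record manager tracks what has already been written to the vector store.
record_manager = SQLRecordManager("my_docs", db_url="sqlite:///record_manager_cache.sql")
record_manager.create_schema()

docs = [Document(page_content="hello world", metadata={"source": "notes.txt"})]

# Re-running this after the source changes only writes the differences.
result = index(docs, record_manager, vectorstore, cleanup="incremental", source_id_key="source")
print(result)  # counts of documents added, updated, skipped, and deleted
```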

Tools

LangChain Tools contain a description of the tool (to pass to the language model) as well as the implementation of the function to call.
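
A minimal sketch using the @tool decorator (the multiply function is illustrative):

```python
from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# The decorator produces a Tool with a name, description, and argument schema
# that can be bound to a chat model or handed to an agent.
print(multiply.name, multiply.description, multiply.args)
print(multiply.invoke({"a": 3, "b": 4}))
```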

Multimodal

Agents

note

For in-depth how-to guides for agents, please check out the LangGraph documentation.
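
As a rough sketch, LangGraph's prebuilt ReAct agent can be driven from LangChain components (assuming the langgraph package and the multiply tool from the Tools sketch above):

```python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), [multiply])

result = agent.invoke({"messages": [("user", "What is 6 times 7?")]})
print(result["messages"][-1].content)
```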

Callbacks

Custom

All LangChain components can easily be extended to support your own versions.
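
For example, any Python function can be wrapped as a Runnable so it composes with other components (the functions below are illustrative):

```python
from langchain_core.runnables import RunnableLambda

shout = RunnableLambda(lambda text: text.upper() + "!!!")
print(shout.invoke("hello"))

# Custom Runnables compose with the | operator like any other component.
chain = shout | RunnableLambda(len)
print(chain.invoke("hello"))
```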

Use cases

These guides cover use-case-specific details.

Q&A with RAG

Retrieval Augmented Generation (RAG) is a way to connect LLMs to external sources of data.
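
A minimal sketch of a RAG chain, reusing the retriever from the sketches above (the prompt wording is illustrative):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n\n{context}\n\nQuestion: {question}"
)


def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)
print(rag_chain.invoke("What is LangChain for?"))
```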

Extraction

Extraction is when you use LLMs to extract structured information from unstructured text.
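
A minimal sketch using structured output (assuming a recent langchain-openai release that accepts Pydantic models; the Person schema is illustrative):

```python
from typing import Optional

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class Person(BaseModel):
    """Information about a person mentioned in the text."""

    name: str = Field(description="The person's name")
    age: Optional[int] = Field(default=None, description="The person's age, if stated")


structured_model = ChatOpenAI(model="gpt-4o-mini").with_structured_output(Person)
print(structured_model.invoke("My friend Anna is 31 years old."))
```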

Chatbots

Chatbots involve using an LLM to have a conversation.
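
A minimal sketch: chat models are stateless, so the conversation history is simply the list of messages passed back in on each turn:

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")

history = [HumanMessage(content="Hi! I'm Bob.")]
reply = model.invoke(history)

history += [reply, HumanMessage(content="What's my name?")]
print(model.invoke(history).content)  # should answer "Bob"
```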

Query analysis

Query Analysis is the task of using an LLM to generate a query to send to a retriever.
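
A rough sketch using structured output to rewrite the user's question into a retriever query (the SearchQuery schema is illustrative):

```python
from typing import Optional

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class SearchQuery(BaseModel):
    """A search query derived from the user's question."""

    query: str = Field(description="Keywords to send to the retriever")
    year: Optional[int] = Field(default=None, description="Restrict results to this year")


analyzer = ChatOpenAI(model="gpt-4o-mini").with_structured_output(SearchQuery)
print(analyzer.invoke("videos about RAG published in 2023"))
```

The generated query, rather than the raw user input, is then what gets sent to the retriever.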

Q&A over SQL + CSV

You can use LLMs to do question answering over tabular data.
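
A rough sketch that generates and runs a SQL query (assuming a local SQLite database such as the Chinook sample; the path is illustrative):

```python
from langchain.chains import create_sql_query_chain
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # hypothetical local database

chain = create_sql_query_chain(ChatOpenAI(model="gpt-4o-mini"), db)
sql = chain.invoke({"question": "How many employees are there?"})
print(sql)          # the generated SQL query
print(db.run(sql))  # the query result
```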

Q&A over graph databases

You can use an LLM to do question answering over graph databases.
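
A rough sketch against Neo4j (assuming a running instance and the langchain-community integration; import paths and required safety flags vary between LangChain versions, so treat this as an outline rather than a drop-in example):

```python
from langchain.chains import GraphCypherQAChain
from langchain_community.graphs import Neo4jGraph
from langchain_openai import ChatOpenAI

graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")

chain = GraphCypherQAChain.from_llm(
    llm=ChatOpenAI(model="gpt-4o-mini"),
    graph=graph,
    allow_dangerous_requests=True,  # recent versions require opting in, since generated Cypher is executed
)
print(chain.invoke({"query": "How many movies are in the graph?"}))
```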

