With the rise of Large Language Models (LLMs), a number of projects have appeared that aim to make them easier to use. One of the most prominent is LangChain. The question is: how useful is LangChain for interacting with LLMs?

What is LangChain?
LangChain is a framework designed to make it easier to interact with LLMs like ChatGPT and Llama. As the name suggests, LangChain is designed to allow you to chain together different operations which might be needed to create a useful tool integrating one or more large language models to perform a task.
LangChain provides tools for interacting with a variety of LLMs, with more being added as new models appear. It also offers a standardised way to approach prompt engineering, along with methods for calling APIs, querying databases and extending the memory of LLMs. These extensions allow you to use an LLM as an agent capable of leveraging other resources to provide a high-quality response to complex or difficult-to-parse queries.
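To make the chaining idea concrete, here is a minimal sketch of a single-step chain that formats a prompt and sends it to a model. It assumes the classic LLMChain-style API and an OpenAI API key in your environment; import paths and class names have moved between LangChain releases, so treat the exact names as assumptions rather than a definitive recipe.

```python
# Minimal sketch: a prompt template chained to a chat model.
# Assumes the classic LangChain API (LLMChain) and OPENAI_API_KEY in the environment.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# A reusable prompt with a single input variable.
prompt = PromptTemplate.from_template(
    "Summarise the following text in one sentence:\n\n{text}"
)

llm = ChatOpenAI(temperature=0)           # wrapper around the OpenAI chat API
chain = LLMChain(llm=llm, prompt=prompt)  # links prompt formatting to the model call

print(chain.run(text="LangChain is a framework for building applications around LLMs."))
```

The same pattern extends to multi-step chains, where the output of one step feeds the prompt of the next.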
LangChain is organised into a number of modules, which cover:
- I/O to the LLM itself, including prompt templates and example formatting, along with output parsers to enforce a schema on the returned output
- Data connectors to load standard kinds of documents and preprocess them
- A variety of memory types like a conversation buffer, knowledge graphs, vector stores and summarisation
- Chains which allow you to link together the various elements of your solution into a pipeline for simple execution
- Tools which integrate with chains to perform operations where LLMs are weak, such as calculation (see the sketch after this list)
- Agents which allow you to create more complex interactions between LLMs and other tools or to structure the way the LLM approaches a task
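As an illustration of the tools and agents modules, the sketch below wires a calculator tool into a ReAct-style agent so the model can delegate arithmetic rather than attempt it itself. It assumes the classic initialize_agent API, which has been reworked in more recent LangChain releases, so the exact imports are an assumption.

```python
# Sketch: an agent that can call a calculator tool instead of doing arithmetic itself.
# Assumes the classic initialize_agent API and OPENAI_API_KEY in the environment.
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentType, initialize_agent, load_tools

llm = ChatOpenAI(temperature=0)

# "llm-math" wraps a calculator tool, covering a known weakness of LLMs.
tools = load_tools(["llm-math"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  # ReAct-style reason/act loop
    verbose=True,                                 # print the intermediate steps
)

agent.run("What is 3.14 raised to the power of 2.7?")
```

With verbose=True the agent prints each thought, tool call and observation, which is a convenient way to see how it structures its approach to a task.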
Pros
LangChain provides several useful features:
- A consistent way to call different models (see the sketch after this list)
- A consistent pattern for prefixing and suffixing prompts with prompt-engineering phrases to improve the quality of responses
- Methods for interacting with tools other than LLMs, such as search APIs and databases of various kinds, and for adding memory to interactions
- Support for asynchronous responses
- Methods to test parts of your chain to ensure it is behaving as expected and to evaluate responses
- Active development, making it easier to keep abreast of changes and improvements in LLM usage
- An increasing number of tutorials and cookbooks for LangChain that help you obtain results fast
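The first point is the easiest to show in code. In the sketch below the prompt and chain definition stay the same while the model provider is swapped; it assumes the ChatOpenAI and ChatAnthropic wrappers (and the corresponding API keys) and the classic LLMChain API, all of which may differ in newer releases.

```python
# Sketch: reusing one prompt and chain definition across different model providers.
# Assumes ChatOpenAI/ChatAnthropic wrappers and the relevant API keys are configured.
from langchain.chat_models import ChatAnthropic, ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate.from_template("Explain {concept} to a new developer in two sentences.")

# Only the model wrapper changes; the rest of the pipeline is untouched.
for llm in (ChatOpenAI(temperature=0), ChatAnthropic(temperature=0)):
    chain = LLMChain(llm=llm, prompt=prompt)
    print(chain.run(concept="vector stores"))
```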
Cons
So with these benefits, what could be the disadvantages of using LangChain? Well, there seem to be a few:
- Because LangChain has to interact with a wide variety of other libraries, its classes are often more complex than those of any single underlying tool
- In cases such as prompt generation, LangChain operates rather like Python string formatting, but in a more complicated fashion (see the comparison after this list)
- LangChain is necessarily prescriptive in the way you interact with LLMs, creating a tension between flexibility and utility
- LangChain sometimes makes it difficult to execute a custom step as part of a chain without a lot of extra work
- LangChain might constrain project design to patterns which work well in LangChain rather than towards an optimal solution
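To illustrate the prompt-generation point, here is a short comparison of plain Python formatting against a LangChain prompt template; it assumes PromptTemplate is importable from langchain.prompts, which has varied between releases. The template object only really earns its keep when it is reused across chains, validated or serialised.

```python
# Sketch: the same prompt built with an f-string and with a LangChain template.
from langchain.prompts import PromptTemplate

question = "What is a vector store?"

# Plain Python: a one-line f-string.
plain_prompt = f"Answer concisely: {question}"

# LangChain: the same result via a template object.
template = PromptTemplate.from_template("Answer concisely: {question}")
langchain_prompt = template.format(question=question)

assert plain_prompt == langchain_prompt
```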
Overall
Overall, I am still uncertain of LangChain’s usefulness in every situation. For simple cases, a small amount of bespoke code might provide a quicker solution than using LangChain. Equally, for complex use cases you may well need some bespoke code either alongside or instead of LangChain, since it may be overly prescriptive and may prevent certain optimisations you want to implement.
It seems to me that LangChain might be most useful in the middle ground: prototyping complicated systems to see whether a concept will work before optimising and customising to achieve the best results. Another benefit is that, being open source, LangChain provides templates and recipes you can refer to when handling many common LLM tool implementation issues. Even if you have to write a custom tool, LangChain may give you a useful starting point. Certainly I intend to continue experimenting with it for now.