Reposted and compiled from: https://api.python.langchain.com/en/latest/langchain_api_reference.html
Table of Contents
- 1、langchain.agents
- Classes
- Functions
 
- 2、langchain.callbacks
- Classes
 
- 3、langchain.chains
- Classes
- Functions
 
- 4、langchain.embeddings
- Classes
 
- 5、langchain.evaluation
- Classes
- Functions
 
- 6、langchain.hub
- Functions
 
- 7、langchain.indexes
- Classes
 
- 8、langchain.memory
- Classes
- Functions
 
- 9、langchain.model_laboratory
- Classes
 
- 10、langchain.output_parsers
- Classes
- Functions
 
- 11、langchain.retrievers
- Classes
- Functions
 
- 12、langchain.runnables
- Classes
 
- 13、langchain.smith
- Classes
- Functions
 
- 14、langchain.storage
- Classes
 
 
1、langchain.agents
Agent is a class that uses an LLM to choose a sequence of actions to take.
In Chains, a sequence of actions is hardcoded. In Agents, a language model is used as a reasoning engine to determine which actions to take and in which order.
Agents select and use Tools and Toolkits for actions.
Class hierarchy:
BaseSingleActionAgent --> LLMSingleActionAgent
                          OpenAIFunctionsAgent
                          XMLAgent
                          Agent --> <name>Agent  # Examples: ZeroShotAgent, ChatAgent

BaseMultiActionAgent  --> OpenAIMultiFunctionsAgent
Main helpers:
AgentType, AgentExecutor, AgentOutputParser, AgentExecutorIterator,
AgentAction, AgentFinish
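As a quick orientation, the sketch below wires one of the constructor functions listed under Functions (create_tool_calling_agent) into an AgentExecutor. This is only an illustrative sketch: the chat model package (langchain_openai) and the model name are assumptions, not part of this reference.

# Minimal sketch, assuming langchain-openai is installed and an API key is configured;
# the model name is illustrative only.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain.agents import AgentExecutor, create_tool_calling_agent

@tool
def word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),  # where intermediate steps are injected
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # assumed model name
agent = create_tool_calling_agent(llm, [word_length], prompt)
executor = AgentExecutor(agent=agent, tools=[word_length], verbose=True)
executor.invoke({"input": "How many letters are in 'LangChain'?"})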
Classes
- agents.agent.Agent
 [Deprecated] Agent that calls the language model and decides the action to take.
- agents.agent.AgentExecutor
 Agent that is using tools.
- agents.agent.AgentOutputParser
 Base class for parsing agent output into agent action/finish.
- agents.agent.BaseMultiActionAgent
 Base Multi Action Agent class.
- agents.agent.BaseSingleActionAgent
 Base Single Action Agent class.
- agents.agent.ExceptionTool
 Tool that just returns the query.
- agents.agent.LLMSingleActionAgent
 [Deprecated] Base class for single action agents.
- agents.agent.MultiActionAgentOutputParser
 Base class for parsing agent output into agent actions/finish.
- agents.agent.RunnableAgent
 Agent powered by runnables.
- agents.agent.RunnableMultiActionAgent
 Agent powered by runnables.
- agents.agent_iterator.AgentExecutorIterator(…)
 Iterator for AgentExecutor.
- agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo
 Information about a VectorStore.
- agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit
 Toolkit for routing between Vector Stores.
- agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit
 Toolkit for interacting with a Vector Store.
- agents.agent_types.AgentType(value)
 [Deprecated] An enum for agent types.
- agents.chat.base.ChatAgent
 [Deprecated] Chat Agent.
- agents.chat.output_parser.ChatOutputParser
 Output parser for the chat agent.
- agents.conversational.base.ConversationalAgent
 [Deprecated] An agent that holds a conversation in addition to using tools.
- agents.conversational.output_parser.ConvoOutputParser
 Output parser for the conversational agent.
- agents.conversational_chat.base.ConversationalChatAgent
 [Deprecated] An agent designed to hold a conversation in addition to using tools.
- agents.conversational_chat.output_parser.ConvoOutputParser
 Output parser for the conversational agent.
- agents.mrkl.base.ChainConfig(action_name, …)
 Configuration for chain to use in MRKL system.
- agents.mrkl.base.MRKLChain
 [Deprecated] Chain that implements the MRKL system.
- agents.mrkl.base.ZeroShotAgent
 [Deprecated] Agent for the MRKL chain.
- agents.mrkl.output_parser.MRKLOutputParser
 MRKL Output parser for the chat agent.
- agents.openai_assistant.base.OpenAIAssistantAction
 AgentAction with info needed to submit custom tool output to existing run.
- agents.openai_assistant.base.OpenAIAssistantFinish
 AgentFinish with run and thread metadata.
- agents.openai_assistant.base.OpenAIAssistantRunnable
 Run an OpenAI Assistant.
- agents.openai_functions_agent.agent_token_buffer_memory.AgentTokenBufferMemory
 Memory used to save agent output AND intermediate steps.
- agents.openai_functions_agent.base.OpenAIFunctionsAgent
 [Deprecated] An Agent driven by OpenAI's function-powered API.
- agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent
 [Deprecated] An Agent driven by OpenAI's function-powered API.
- agents.output_parsers.json.JSONAgentOutputParser
 Parses tool invocations and final answers in JSON format.
- agents.output_parsers.openai_functions.OpenAIFunctionsAgentOutputParser
 Parses a message into agent action/finish.
- agents.output_parsers.openai_tools.OpenAIToolsAgentOutputParser
 Parses a message into agent actions/finish.
- agents.output_parsers.react_json_single_input.ReActJsonSingleInputOutputParser
 Parses ReAct-style LLM calls that have a single tool input in json format.
- agents.output_parsers.react_single_input.ReActSingleInputOutputParser
 Parses ReAct-style LLM calls that have a single tool input.
- agents.output_parsers.self_ask.SelfAskOutputParser
 Parses self-ask style LLM calls.
- agents.output_parsers.tools.ToolAgentAction
 Override init to support instantiation by position for backward compat.
- agents.output_parsers.tools.ToolsAgentOutputParser
 Parses a message into agent actions/finish.
- agents.output_parsers.xml.XMLAgentOutputParser
 Parses tool invocations and final answers in XML format.
- agents.react.base.DocstoreExplorer(docstore)
 [Deprecated] Class to assist with exploration of a document store.
- agents.react.base.ReActChain
 [Deprecated] Chain that implements the ReAct paper.
- agents.react.base.ReActDocstoreAgent
 [Deprecated] Agent for the ReAct chain.
- agents.react.base.ReActTextWorldAgent
 [Deprecated] Agent for the ReAct TextWorld chain.
- agents.react.output_parser.ReActOutputParser
 Output parser for the ReAct agent.
- agents.schema.AgentScratchPadChatPromptTemplate
 Chat prompt template for the agent scratchpad.
- agents.self_ask_with_search.base.SelfAskWithSearchAgent
 [Deprecated] Agent for the self-ask-with-search paper.
- agents.self_ask_with_search.base.SelfAskWithSearchChain
 [Deprecated] Chain that does self-ask with search.
- agents.structured_chat.base.StructuredChatAgent
 [Deprecated] Structured Chat Agent.
- agents.structured_chat.output_parser.StructuredChatOutputParser
 Output parser for the structured chat agent.
- agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries
 Output parser with retries for the structured chat agent.
- agents.tools.InvalidTool
 Tool that is run when invalid tool name is encountered by agent.
- agents.xml.base.XMLAgent
 [Deprecated] Agent that uses XML tags.
Functions
- agents.agent_toolkits.conversational_retrieval.openai_functions.create_conversational_retrieval_agent(…)
 A convenience method for creating a conversational retrieval agent.
- agents.agent_toolkits.vectorstore.base.create_vectorstore_agent(…)
 Construct a VectorStore agent from an LLM and tools.
- agents.agent_toolkits.vectorstore.base.create_vectorstore_router_agent(…)
 Construct a VectorStore router agent from an LLM and tools.
- agents.format_scratchpad.log.format_log_to_str(…)
 Construct the scratchpad that lets the agent continue its thought process.
- agents.format_scratchpad.log_to_messages.format_log_to_messages(…)
 Construct the scratchpad that lets the agent continue its thought process.
- agents.format_scratchpad.openai_functions.format_to_openai_function_messages(…)
 Convert (AgentAction, tool output) tuples into FunctionMessages.
- agents.format_scratchpad.openai_functions.format_to_openai_functions(…)
 Convert (AgentAction, tool output) tuples into FunctionMessages.
- agents.format_scratchpad.tools.format_to_tool_messages(…)
 Convert (AgentAction, tool output) tuples into ToolMessages.
- agents.format_scratchpad.xml.format_xml(…)
 Format the intermediate steps as XML.
- agents.initialize.initialize_agent(tools, llm)
 [Deprecated] Load an agent executor given tools and LLM.
- agents.json_chat.base.create_json_chat_agent(…)
 Create an agent that uses JSON to format its logic, built for Chat Models.
- agents.loading.load_agent(path, **kwargs)
 [Deprecated] Unified method for loading an agent from LangChainHub or local fs.
- agents.loading.load_agent_from_config(config)
 [Deprecated] Load agent from Config Dict.
- agents.openai_functions_agent.base.create_openai_functions_agent(…)
 Create an agent that uses OpenAI function calling.
- agents.openai_tools.base.create_openai_tools_agent(…)
 Create an agent that uses OpenAI tools.
- agents.output_parsers.openai_tools.parse_ai_message_to_openai_tool_action(message)
 Parse an AI message potentially containing tool_calls.
- agents.output_parsers.tools.parse_ai_message_to_tool_action(message)
 Parse an AI message potentially containing tool_calls.
- agents.react.agent.create_react_agent(llm, …)
 Create an agent that uses ReAct prompting.
- agents.self_ask_with_search.base.create_self_ask_with_search_agent(…)
 Create an agent that uses self-ask with search prompting.
- agents.structured_chat.base.create_structured_chat_agent(…)
 Create an agent aimed at supporting tools with multiple inputs.
- agents.tool_calling_agent.base.create_tool_calling_agent(…)
 Create an agent that uses tools.
- agents.utils.validate_tools_single_input(…)
 Validate tools for single input.
- agents.xml.base.create_xml_agent(llm, tools, …)
 Create an agent that uses XML to format its logic.
2、langchain.callbacks
Callback handlers allow listening to events in LangChain.
Class hierarchy:
BaseCallbackHandler --> <name>CallbackHandler  # Example: AimCallbackHandler
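For illustration, a minimal handler subclass might look like the sketch below (a hedged example, not part of the reference; it only assumes langchain_core is installed):

from langchain_core.callbacks import BaseCallbackHandler

class PrintTokenHandler(BaseCallbackHandler):
    """Print each streamed LLM token as it arrives."""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="", flush=True)

# Attach it at invocation time, e.g.:
# llm.invoke("Tell me a joke", config={"callbacks": [PrintTokenHandler()]})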
Classes
- callbacks.streaming_aiter.AsyncIteratorCallbackHandler()
 Callback handler that returns an async iterator.
- callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler(*)
 Callback handler that returns an async iterator.
- callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler(*)
 Callback handler for streaming in agents.
- callbacks.tracers.logging.LoggingCallbackHandler(logger)
 Tracer that logs via the input Logger.
3、langchain.chains
Chains are easily reusable components linked together.
Chains encode a sequence of calls to components like models, document retrievers, other Chains, etc., and provide a simple interface to this sequence.
The Chain interface makes it easy to create apps that are:
- Stateful: add Memory to any Chain to give it state,
- Observable: pass Callbacks to a Chain to execute additional functionality, like logging, outside the main sequence of component calls,
- Composable: combine Chains with other components, including other Chains.
Class hierarchy:
Chain --> <name>Chain  # Examples: LLMChain, MapReduceChain, RouterChain
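As an illustrative sketch (not taken from the reference), create_stuff_documents_chain, listed under Functions below, builds such a chain; the chat model package and model name are assumptions:

from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # assumed provider package
from langchain.chains.combine_documents import create_stuff_documents_chain

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using only the following context:\n\n{context}"),
    ("human", "{question}"),
])
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # illustrative model name
chain = create_stuff_documents_chain(llm, prompt)

docs = [Document(page_content="LangChain composes LLM calls into reusable chains.")]
print(chain.invoke({"context": docs, "question": "What does LangChain do?"}))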
Classes
- chains.api.base.APIChain
 Chain that makes API calls and summarizes the responses to answer a question.
- chains.base.Chain
 Abstract base class for creating structured sequences of calls to components.
- chains.combine_documents.base.AnalyzeDocumentChain
 Chain that splits documents, then analyzes them in pieces.
- chains.combine_documents.base.BaseCombineDocumentsChain
 Base interface for chains combining documents.
- chains.combine_documents.map_reduce.MapReduceDocumentsChain
 Combining documents by mapping a chain over them, then combining results.
- chains.combine_documents.map_rerank.MapRerankDocumentsChain
 Combining documents by mapping a chain over them, then reranking results.
- chains.combine_documents.reduce.AsyncCombineDocsProtocol(…)
 Interface for the combine_docs method.
- chains.combine_documents.reduce.CombineDocsProtocol(…)
 Interface for the combine_docs method.
- chains.combine_documents.reduce.ReduceDocumentsChain
 Combine documents by recursively reducing them.
- chains.combine_documents.refine.RefineDocumentsChain
 Combine documents by doing a first pass and then refining on more documents.
- chains.combine_documents.stuff.StuffDocumentsChain
 Chain that combines documents by stuffing into context.
- chains.constitutional_ai.base.ConstitutionalChain
 Chain for applying constitutional principles.
- chains.constitutional_ai.models.ConstitutionalPrinciple
 Class for a constitutional principle.
- chains.conversation.base.ConversationChain
 Chain to have a conversation and load context from memory.
- chains.conversational_retrieval.base.BaseConversationalRetrievalChain
 Chain for chatting with an index.
- chains.conversational_retrieval.base.ChatVectorDBChain
 Chain for chatting with a vector database.
- chains.conversational_retrieval.base.ConversationalRetrievalChain
 [Deprecated] Chain for having a conversation based on retrieved documents.
- chains.conversational_retrieval.base.InputType
 Input type for ConversationalRetrievalChain.
- chains.elasticsearch_database.base.ElasticsearchDatabaseChain
 Chain for interacting with Elasticsearch Database.
- chains.flare.base.FlareChain
 Chain that combines a retriever, a question generator, and a response generator.
- chains.flare.base.QuestionGeneratorChain
 Chain that generates questions from uncertain spans.
- chains.flare.prompts.FinishedOutputParser
 Output parser that checks if the output is finished.
- chains.hyde.base.HypotheticalDocumentEmbedder
 Generate a hypothetical document for a query, and then embed it.
- chains.llm.LLMChain
 [Deprecated] Chain to run queries against LLMs.
- chains.llm_checker.base.LLMCheckerChain
 Chain for question-answering with self-verification.
- chains.llm_math.base.LLMMathChain
 Chain that interprets a prompt and executes Python code to do math.
- chains.llm_summarization_checker.base.LLMSummarizationCheckerChain
 Chain for question-answering with self-verification.
- chains.mapreduce.MapReduceChain
 Map-reduce chain.
- chains.moderation.OpenAIModerationChain
 Pass input through a moderation endpoint.
- chains.natbot.base.NatBotChain
 Implement an LLM driven browser.
- chains.natbot.crawler.Crawler()
 A crawler for web pages.
- chains.natbot.crawler.ElementInViewPort
 A typed dictionary containing information about elements in the viewport.
- chains.openai_functions.citation_fuzzy_match.FactWithEvidence
 Class representing a single statement.
- chains.openai_functions.citation_fuzzy_match.QuestionAnswer
 A question and its answer as a list of facts, each of which should have a source.
- chains.openai_functions.openapi.SimpleRequestChain
 Chain for making a simple request to an API endpoint.
- chains.openai_functions.qa_with_structure.AnswerWithSources
 An answer to the question, with sources.
- chains.prompt_selector.BasePromptSelector
 Base class for prompt selectors.
- chains.prompt_selector.ConditionalPromptSelector
 Prompt collection that goes through conditionals.
- chains.qa_generation.base.QAGenerationChain
 Base class for question-answer generation chains.
- chains.qa_with_sources.base.BaseQAWithSourcesChain
 Question answering chain with sources over documents.
- chains.qa_with_sources.base.QAWithSourcesChain
 Question answering with sources over documents.
- chains.qa_with_sources.loading.LoadingCallable(…)
 Interface for loading the combine documents chain.
- chains.qa_with_sources.retrieval.RetrievalQAWithSourcesChain
 Question-answering with sources over an index.
- chains.qa_with_sources.vector_db.VectorDBQAWithSourcesChain
 Question-answering with sources over a vector database.
- chains.query_constructor.base.StructuredQueryOutputParser
 Output parser that parses a structured query.
- chains.query_constructor.parser.ISO8601Date
 A date in ISO 8601 format (YYYY-MM-DD).
- chains.query_constructor.schema.AttributeInfo
 Information about a data source attribute.
- chains.question_answering.chain.LoadingCallable(…)
 Interface for loading the combine documents chain.
- chains.retrieval_qa.base.BaseRetrievalQA
 Base class for question-answering chains.
- chains.retrieval_qa.base.RetrievalQA
 [Deprecated] Chain for question-answering against an index.
- chains.retrieval_qa.base.VectorDBQA
 Chain for question-answering against a vector database.
- chains.router.base.MultiRouteChain
 Use a single chain to route an input to one of multiple candidate chains.
- chains.router.base.Route(destination, …)
 Create new instance of Route(destination, next_inputs)
- chains.router.base.RouterChain
 Chain that outputs the name of a destination chain and the inputs to it.
- chains.router.embedding_router.EmbeddingRouterChain
 Chain that uses embeddings to route between options.
- chains.router.llm_router.LLMRouterChain
 A router chain that uses an LLM chain to perform routing.
- chains.router.llm_router.RouterOutputParser
 Parser for output of router chain in the multi-prompt chain.
- chains.router.multi_prompt.MultiPromptChain
 A multi-route chain that uses an LLM router chain to choose amongst prompts.
- chains.router.multi_retrieval_qa.MultiRetrievalQAChain
 A multi-route chain that uses an LLM router chain to choose amongst retrieval qa chains.
- chains.sequential.SequentialChain
 Chain where the outputs of one chain feed directly into next.
- chains.sequential.SimpleSequentialChain
 Simple chain where the outputs of one step feed directly into next.
- chains.sql_database.query.SQLInput
 Input for a SQL Chain.
- chains.sql_database.query.SQLInputWithTables
 Input for a SQL Chain.
- chains.summarize.chain.LoadingCallable(…)
 Interface for loading the combine documents chain.
- chains.transform.TransformChain
 Chain that transforms the chain output.
Functions
- chains.combine_documents.reduce.acollapse_docs(…)
 Execute a collapse function on a set of documents and merge their metadatas.
- chains.combine_documents.reduce.collapse_docs(…)
 Execute a collapse function on a set of documents and merge their metadatas.
- chains.combine_documents.reduce.split_list_of_docs(…)
 Split Documents into subsets that each meet a cumulative length constraint.
- chains.combine_documents.stuff.create_stuff_documents_chain(…)
 Create a chain for passing a list of Documents to a model.
- chains.example_generator.generate_example(…)
 Return another example given a list of examples for a prompt.
- chains.history_aware_retriever.create_history_aware_retriever(…)
 Create a chain that takes conversation history and returns documents.
- chains.loading.load_chain(path, **kwargs)
 Unified method for loading a chain from LangChainHub or local fs.
- chains.loading.load_chain_from_config(…)
 Load chain from Config Dict.
- chains.openai_functions.base.create_openai_fn_chain(…)
 [Deprecated] [Legacy] Create an LLM chain that uses OpenAI functions.
- chains.openai_functions.base.create_structured_output_chain(…)
 [Deprecated] [Legacy] Create an LLMChain that uses an OpenAI function to get a structured output.
- chains.openai_functions.citation_fuzzy_match.create_citation_fuzzy_match_chain(llm)
 Create a citation fuzzy match chain.
- chains.openai_functions.extraction.create_extraction_chain(…)
 [Deprecated] Creates a chain that extracts information from a passage.
- chains.openai_functions.extraction.create_extraction_chain_pydantic(…)
 [Deprecated] Creates a chain that extracts information from a passage using pydantic schema.
- chains.openai_functions.openapi.get_openapi_chain(spec)
 Create a chain for querying an API from a OpenAPI spec.
- chains.openai_functions.openapi.openapi_spec_to_openai_fn(spec)
 Convert a valid OpenAPI spec to the JSON Schema format expected for OpenAI functions.
- chains.openai_functions.qa_with_structure.create_qa_with_sources_chain(llm)
 Create a question answering chain that returns an answer with sources.
- chains.openai_functions.qa_with_structure.create_qa_with_structure_chain(…)
 Create a question answering chain that returns an answer with sources based on schema.
- chains.openai_functions.tagging.create_tagging_chain(…)
 Create a chain that extracts information from a passage.
- chains.openai_functions.tagging.create_tagging_chain_pydantic(…)
 Create a chain that extracts information from a passage.
- chains.openai_functions.utils.get_llm_kwargs(…)
 Return the kwargs for the LLMChain constructor.
- chains.openai_tools.extraction.create_extraction_chain_pydantic(…)
 [Deprecated] Creates a chain that extracts information from a passage.
- chains.prompt_selector.is_chat_model(llm)
 Check if the language model is a chat model.
- chains.prompt_selector.is_llm(llm)
 Check if the language model is an LLM.
- chains.qa_with_sources.loading.load_qa_with_sources_chain(llm)
 Load a question answering with sources chain.
- chains.query_constructor.base.construct_examples(…)
 Construct examples from input-output pairs.
- chains.query_constructor.base.fix_filter_directive(…)
 Fix invalid filter directive.
- chains.query_constructor.base.get_query_constructor_prompt(…)
 Create query construction prompt.
- chains.query_constructor.base.load_query_constructor_chain(…)
 Load a query constructor chain.
- chains.query_constructor.base.load_query_constructor_runnable(…)
 Load a query constructor runnable chain.
- chains.query_constructor.parser.get_parser([…])
 Return a parser for the query language.
- chains.query_constructor.parser.v_args(…)
 Dummy decorator for when lark is not installed.
- chains.question_answering.chain.load_qa_chain(llm)
 Load question answering chain.
- chains.retrieval.create_retrieval_chain(…)
 Create retrieval chain that retrieves documents and then passes them on.
- chains.sql_database.query.create_sql_query_chain(llm, db)
 Create a chain that generates SQL queries.
- chains.structured_output.base.create_openai_fn_runnable(…)
 [Deprecated] Create a runnable sequence that uses OpenAI functions.
- chains.structured_output.base.create_structured_output_runnable(…)
 [Deprecated] Create a runnable for extracting structured outputs.
- chains.structured_output.base.get_openai_output_parser(…)
 Get the appropriate function output parser given the user functions.
- chains.summarize.chain.load_summarize_chain(llm)
 Load summarizing chain.
4、langchain.embeddings
Embedding models are wrappers around embedding models from different APIs and services.
Embedding models can be LLMs or not.
Class hierarchy:
Embeddings --> <name>Embeddings  # Examples: OpenAIEmbeddings, HuggingFaceEmbeddings
Classes
- embeddings.cache.CacheBackedEmbeddings(…)
 Interface for caching results from embedding models.
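A hedged sketch of the caching wrapper listed above, assuming an embedding provider such as langchain_openai is installed and configured:

from langchain.embeddings import CacheBackedEmbeddings
from langchain.storage import LocalFileStore
from langchain_openai import OpenAIEmbeddings  # assumed provider package

underlying = OpenAIEmbeddings()
store = LocalFileStore("./embedding_cache/")
cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying, store, namespace=underlying.model  # namespace avoids cache collisions
)
# Repeated calls with the same texts are served from the local cache.
vectors = cached_embedder.embed_documents(["hello world", "goodbye world"])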
5、langchain.evaluation
Evaluation chains for grading LLM and Chain outputs.
This module contains off-the-shelf evaluation chains for grading the output of LangChain primitives such as language models and chains.
Loading an evaluator
To load an evaluator, you can use the load_evaluators or load_evaluator functions with the names of the evaluators to load.
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("qa")
evaluator.evaluate_strings(
    prediction="We sold more than 40,000 units last week",
    input="How many units did we sell last week?",
    reference="We sold 32,378 units",
)
The evaluator must be one of EvaluatorType.
Datasets
To load one of the LangChain HuggingFace datasets, you can use the load_dataset function with the name of the dataset to load.
from langchain.evaluation import load_dataset
ds = load_dataset("llm-math")
Some common use cases for evaluation include:
- Grading the accuracy of a response against ground truth answers: QAEvalChain
- Comparing the output of two models: PairwiseStringEvalChain, or LabeledPairwiseStringEvalChain when there is additionally a reference label.
- Judging the efficacy of an agent’s tool usage: TrajectoryEvalChain
- Checking whether an output complies with a set of criteria: CriteriaEvalChain, or LabeledCriteriaEvalChain when there is additionally a reference label.
- Computing semantic difference between a prediction and reference: EmbeddingDistanceEvalChain, or between two predictions: PairwiseEmbeddingDistanceEvalChain.
- Measuring the string distance between a prediction and reference: StringDistanceEvalChain, or between two predictions: PairwiseStringDistanceEvalChain.
Low-level API
These evaluators implement one of the following interfaces:
- StringEvaluator: Evaluate a prediction string against a reference label and/or input context.
- PairwiseStringEvaluator: Evaluate two prediction strings against each other. Useful for scoring preferences, measuring similarity between two chains or LLM agents, or comparing outputs on similar inputs.
- AgentTrajectoryEvaluator: Evaluate the full sequence of actions taken by an agent.
These interfaces enable easier composability and usage within a higher level evaluation framework.
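To make these interfaces concrete, the sketch below loads a criteria evaluator and a pairwise comparator through the same load_evaluator entry point. By default these build LLM-backed grading chains, so a configured chat model provider (for example, an OpenAI API key) is assumed:

from langchain.evaluation import EvaluatorType, load_evaluator

# StringEvaluator: grade one prediction against criteria (and optionally an input).
criteria_eval = load_evaluator(EvaluatorType.CRITERIA, criteria="conciseness")
print(criteria_eval.evaluate_strings(
    prediction="Paris is the capital of France, as everyone surely already knows.",
    input="What is the capital of France?",
))

# PairwiseStringEvaluator: compare two predictions for the same input.
pairwise_eval = load_evaluator(EvaluatorType.PAIRWISE_STRING)
print(pairwise_eval.evaluate_string_pairs(
    prediction="Paris",
    prediction_b="The capital of France is Paris.",
    input="What is the capital of France?",
))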
Classes
- evaluation.agents.trajectory_eval_chain.TrajectoryEval
 A named tuple containing the score and reasoning for a trajectory.
- evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain
 A chain for evaluating ReAct style agents.
- evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser
 Trajectory output parser.
- evaluation.comparison.eval_chain.LabeledPairwiseStringEvalChain
 A chain for comparing two outputs, such as the outputs of two models.
- evaluation.comparison.eval_chain.PairwiseStringEvalChain
 A chain for comparing two outputs, such as the outputs of two models.
- evaluation.comparison.eval_chain.PairwiseStringResultOutputParser
 A parser for the output of the PairwiseStringEvalChain.
- evaluation.criteria.eval_chain.Criteria(value)
 A Criteria to evaluate.
- evaluation.criteria.eval_chain.CriteriaEvalChain
 LLM Chain for evaluating runs against criteria.
- evaluation.criteria.eval_chain.CriteriaResultOutputParser
 A parser for the output of the CriteriaEvalChain.
- evaluation.criteria.eval_chain.LabeledCriteriaEvalChain
 Criteria evaluation chain that requires references.
- evaluation.embedding_distance.base.EmbeddingDistance(value)
 Embedding Distance Metric.
- evaluation.embedding_distance.base.EmbeddingDistanceEvalChain
 Use embedding distances to score semantic difference between a prediction and reference.
- evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain
 Use embedding distances to score semantic difference between two predictions.
- evaluation.exact_match.base.ExactMatchStringEvaluator(*)
 Compute an exact match between the prediction and the reference.
- evaluation.parsing.base.JsonEqualityEvaluator([…])
 Evaluate whether the prediction is equal to the reference after parsing.
- evaluation.parsing.base.JsonValidityEvaluator(…)
 Evaluate whether the prediction is valid JSON.
- evaluation.parsing.json_distance.JsonEditDistanceEvaluator([…])
 An evaluator that calculates the edit distance between JSON strings.
- evaluation.parsing.json_schema.JsonSchemaEvaluator(…)
 An evaluator that validates a JSON prediction against a JSON schema reference.
- evaluation.qa.eval_chain.ContextQAEvalChain
 LLM Chain for evaluating QA without ground truth, based on context.
- evaluation.qa.eval_chain.CotQAEvalChain
 LLM Chain for evaluating QA using chain of thought reasoning.
- evaluation.qa.eval_chain.QAEvalChain
 LLM Chain for evaluating question answering.
- evaluation.qa.generate_chain.QAGenerateChain
 LLM Chain for generating examples for question answering.
- evaluation.regex_match.base.RegexMatchStringEvaluator(*)
 Compute a regex match between the prediction and the reference.
- evaluation.schema.AgentTrajectoryEvaluator()
 Interface for evaluating agent trajectories.
- evaluation.schema.EvaluatorType(value)
 The types of the evaluators.
- evaluation.schema.LLMEvalChain
 A base class for evaluators that use an LLM.
- evaluation.schema.PairwiseStringEvaluator()
 Compare the output of two models (or two outputs of the same model).
- evaluation.schema.StringEvaluator()
 Grade, tag, or otherwise evaluate predictions relative to their inputs and/or reference labels.
- evaluation.scoring.eval_chain.LabeledScoreStringEvalChain
 A chain for scoring the output of a model on a scale of 1-10.
- evaluation.scoring.eval_chain.ScoreStringEvalChain
 A chain for scoring on a scale of 1-10 the output of a model.
- evaluation.scoring.eval_chain.ScoreStringResultOutputParser
 A parser for the output of the ScoreStringEvalChain.
- evaluation.string_distance.base.PairwiseStringDistanceEvalChain
 Compute string edit distances between two predictions.
- evaluation.string_distance.base.StringDistance(value)
 Distance metric to use.
- evaluation.string_distance.base.StringDistanceEvalChain
 Compute string distances between the prediction and the reference.
Functions
- evaluation.comparison.eval_chain.resolve_pairwise_criteria(…)
 Resolve the criteria for the pairwise evaluator.
- evaluation.criteria.eval_chain.resolve_criteria(…)
 Resolve the criteria to evaluate.
- evaluation.loading.load_dataset(uri)
 Load a dataset from the LangChainDatasets on HuggingFace.
- evaluation.loading.load_evaluator(evaluator, *)
 Load the requested evaluation chain specified by a string.
- evaluation.loading.load_evaluators(evaluators, *)
 Load evaluators specified by a list of evaluator types.
- evaluation.scoring.eval_chain.resolve_criteria(…)
 Resolve the criteria for the pairwise evaluator.
6、langchain.hub
Interface with the LangChain Hub.
Functions
- hub.pull(owner_repo_commit, *[, api_url, …])
 Pull an object from the hub and return it as a LangChain object.
- hub.push(repo_full_name, object, *[, …])
 Push an object to the hub and return the URL it can be viewed at in a browser.
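For example, pulling a public prompt only needs network access (a LangSmith API key is required for private repos and for pushing). The prompt handle below is a well-known public one and is used purely for illustration:

from langchain import hub

prompt = hub.pull("hwchase17/react")  # public ReAct prompt, illustrative handle
print(prompt.input_variables)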
7、langchain.indexes
The index is used to avoid writing duplicated content into the vector store and to avoid over-writing content if it's unchanged.
Indexes also:
- Create knowledge graphs from data.
- Support indexing workflows from LangChain data loaders to vectorstores.
Importantly, the index keeps working even if the content being written is derived via a set of transformations from some source content (e.g., indexing child documents that were derived from parent documents by chunking).
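The de-duplicating workflow is driven by the index() function together with a record manager, both importable from langchain.indexes. The sketch below is illustrative only and assumes a Chroma vector store (langchain_chroma) and an embedding provider (langchain_openai) are installed:

from langchain.indexes import SQLRecordManager, index
from langchain_core.documents import Document
from langchain_chroma import Chroma            # assumed vector store package
from langchain_openai import OpenAIEmbeddings  # assumed embedding provider

collection = "demo_index"
vectorstore = Chroma(collection_name=collection, embedding_function=OpenAIEmbeddings())
record_manager = SQLRecordManager(f"chroma/{collection}", db_url="sqlite:///record_manager.sql")
record_manager.create_schema()

docs = [Document(page_content="hello", metadata={"source": "a.txt"})]
# Re-running with unchanged content skips the write instead of duplicating it.
print(index(docs, record_manager, vectorstore, cleanup="incremental", source_id_key="source"))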
Classes
- indexes.vectorstore.VectorStoreIndexWrapper
 Wrapper around a vectorstore for easy access.
- indexes.vectorstore.VectorstoreIndexCreator
 Logic for creating indexes.
8、langchain.memory
Memory maintains Chain state, incorporating context from past runs.
Class hierarchy for Memory:
BaseMemory --> BaseChatMemory --> <name>Memory  # Examples: ZepMemory, MotorheadMemory
Main helpers:
BaseChatMessageHistory
Chat Message History stores the chat message history in different stores.
Class hierarchy for ChatMessageHistory:
BaseChatMessageHistory --> <name>ChatMessageHistory  # Example: ZepChatMessageHistory
Main helpers:
AIMessage, BaseMessage, HumanMessage
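A minimal, self-contained sketch of the memory interface (no LLM required):

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(return_messages=True)
memory.save_context({"input": "Hi, I'm Sam."}, {"output": "Hello Sam! How can I help?"})
# A Chain with this memory attached would receive the stored messages as context.
print(memory.load_memory_variables({}))  # {'history': [HumanMessage(...), AIMessage(...)]}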
Classes
- memory.buffer.ConversationBufferMemory
 Buffer for storing conversation memory.
- memory.buffer.ConversationStringBufferMemory
 Buffer for storing conversation memory.
- memory.buffer_window.ConversationBufferWindowMemory
 Buffer for storing conversation memory inside a limited size window.
- memory.chat_memory.BaseChatMemory
 Abstract base class for chat memory.
- memory.combined.CombinedMemory
 Combining multiple memories’ data together.
- memory.entity.BaseEntityStore
 Abstract base class for Entity store.
- memory.entity.ConversationEntityMemory
 Entity extractor & summarizer memory.
- memory.entity.InMemoryEntityStore
 In-memory Entity store.
- memory.entity.RedisEntityStore
 Redis-backed Entity store.
- memory.entity.SQLiteEntityStore
 SQLite-backed Entity store
- memory.entity.UpstashRedisEntityStore
 Upstash Redis backed Entity store.
- memory.readonly.ReadOnlySharedMemory
 Memory wrapper that is read-only and cannot be changed.
- memory.simple.SimpleMemory
 Simple memory for storing context or other information that shouldn’t ever change between prompts.
- memory.summary.ConversationSummaryMemory
 Conversation summarizer to chat memory.
- memory.summary.SummarizerMixin
 Mixin for summarizer.
- memory.summary_buffer.ConversationSummaryBufferMemory
 Buffer with summarizer for storing conversation memory.
- memory.token_buffer.ConversationTokenBufferMemory
 Conversation chat memory with token limit.
- memory.vectorstore.VectorStoreRetrieverMemory
 VectorStoreRetriever-backed memory.
Functions
- memory.utils.get_prompt_input_key(inputs, …)
 Get the prompt input key.
9、langchain.model_laboratory
Experiment with different models.
Classes
- model_laboratory.ModelLaboratory(chains[, names])
 Experiment with different models.
10、langchain.output_parsers
OutputParser classes parse the output of an LLM call.
Class hierarchy:
BaseLLMOutputParser --> BaseOutputParser --> <name>OutputParser  # ListOutputParser, PydanticOutputParser
Main helpers:
Serializable, Generation, PromptValue
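As a small, self-contained illustration: parsers expose get_format_instructions() for the prompt side and parse() for the model output. DatetimeOutputParser (listed below) works without any LLM:

from langchain.output_parsers import DatetimeOutputParser

parser = DatetimeOutputParser()
print(parser.get_format_instructions())             # instructions to embed in the prompt
print(parser.parse("2024-05-22T00:00:00.000000Z"))  # -> datetime.datetime(2024, 5, 22, 0, 0)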
Classes
- output_parsers.boolean.BooleanOutputParser
 Parse the output of an LLM call to a boolean.
- output_parsers.combining.CombiningOutputParser
 Combine multiple output parsers into one.
- output_parsers.datetime.DatetimeOutputParser
 Parse the output of an LLM call to a datetime.
- output_parsers.enum.EnumOutputParser
 Parse an output that is one of a set of values.
- output_parsers.fix.OutputFixingParser
 Wrap a parser and try to fix parsing errors.
- output_parsers.pandas_dataframe.PandasDataFrameOutputParser
 Parse an output using Pandas DataFrame format.
- output_parsers.regex.RegexParser
 Parse the output of an LLM call using a regex.
- output_parsers.regex_dict.RegexDictParser
 Parse the output of an LLM call into a Dictionary using a regex.
- output_parsers.retry.RetryOutputParser
 Wrap a parser and try to fix parsing errors.
- output_parsers.retry.RetryWithErrorOutputParser
 Wrap a parser and try to fix parsing errors.
- output_parsers.structured.ResponseSchema
 Schema for a response from a structured output parser.
- output_parsers.structured.StructuredOutputParser
 Parse the output of an LLM call to a structured output.
- output_parsers.yaml.YamlOutputParser
 Parse YAML output using a pydantic model.
Functions
- output_parsers.loading.load_output_parser(config)
 Load an output parser.
11、langchain.retrievers
The Retriever class returns Documents given a text query.
It is more general than a vector store. A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used as the backbone of a retriever, but there are other types of retrievers as well.
Class hierarchy:
BaseRetriever --> <name>Retriever  # Examples: ArxivRetriever, MergerRetriever
Main helpers:
Document, Serializable, Callbacks,
CallbackManagerForRetrieverRun, AsyncCallbackManagerForRetrieverRun
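A hedged sketch of one composition pattern from this module, ContextualCompressionRetriever wrapping a base retriever. It assumes an already-populated vector store named vectorstore and an embedding provider from langchain_openai; neither is defined here:

from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import EmbeddingsFilter
from langchain_openai import OpenAIEmbeddings  # assumed provider package

base_retriever = vectorstore.as_retriever(search_kwargs={"k": 8})  # `vectorstore` assumed to exist
compressor = EmbeddingsFilter(embeddings=OpenAIEmbeddings(), similarity_threshold=0.6)
retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=base_retriever,
)
docs = retriever.invoke("What did the report say about revenue?")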
Classes
- retrievers.contextual_compression.ContextualCompressionRetriever
 Retriever that wraps a base retriever and compresses the results.
- retrievers.document_compressors.base.DocumentCompressorPipeline
 Document compressor that uses a pipeline of Transformers.
- retrievers.document_compressors.chain_extract.LLMChainExtractor
 Document compressor that uses an LLM chain to extract the relevant parts of documents.
- retrievers.document_compressors.chain_extract.NoOutputParser
 Parse outputs that could return a null string of some sort.
- retrievers.document_compressors.chain_filter.LLMChainFilter
 Filter that drops documents that aren’t relevant to the query.
- retrievers.document_compressors.cohere_rerank.CohereRerank
 [Deprecated] Document compressor that uses Cohere Rerank API.
- retrievers.document_compressors.cross_encoder.BaseCrossEncoder()
 Interface for cross encoder models.
- retrievers.document_compressors.cross_encoder_rerank.CrossEncoderReranker
 Document compressor that uses CrossEncoder for reranking.
- retrievers.document_compressors.embeddings_filter.EmbeddingsFilter
 Document compressor that uses embeddings to drop documents unrelated to the query.
- retrievers.ensemble.EnsembleRetriever
 Retriever that ensembles the results of multiple retrievers.
- retrievers.merger_retriever.MergerRetriever
 Retriever that merges the results of multiple retrievers.
- retrievers.multi_query.LineListOutputParser
 Output parser for a list of lines.
- retrievers.multi_query.MultiQueryRetriever
 Given a query, use an LLM to write a set of queries.
- retrievers.multi_vector.MultiVectorRetriever
 Retrieve from a set of multiple embeddings for the same document.
- retrievers.multi_vector.SearchType(value)
 Enumerator of the types of search to perform.
- retrievers.parent_document_retriever.ParentDocumentRetriever
 Retrieve small chunks then retrieve their parent documents.
- retrievers.re_phraser.RePhraseQueryRetriever
 Given a query, use an LLM to re-phrase it.
- retrievers.self_query.base.SelfQueryRetriever
 Retriever that uses a vector store and an LLM to generate the vector store queries.
- retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever
 Retriever that combines embedding similarity with recency in retrieving values.
Functions
- retrievers.document_compressors.chain_extract.default_get_input(…)
 Return the compression chain input.
- retrievers.document_compressors.chain_filter.default_get_input(…)
 Return the compression chain input.
- retrievers.ensemble.unique_by_key(iterable, key)
 Yield unique elements of an iterable based on a key function.
12、langchain.runnables
LangChain Runnable and the LangChain Expression Language (LCEL).
The LangChain Expression Language (LCEL) offers a declarative method to build production-grade programs that harness the power of LLMs.
Programs created using LCEL and LangChain Runnables inherently support synchronous, asynchronous, batch, and streaming operations.
Support for async allows servers hosting LCEL-based programs to scale better under higher concurrent loads.
Batch operations allow for processing multiple inputs in parallel.
Streaming of intermediate outputs, as they’re being generated, allows for creating more responsive UX.
This module contains non-core Runnable classes.
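A self-contained sketch of that shared interface, using only langchain_core primitives so it runs without any model provider:

from langchain_core.runnables import RunnableLambda

greet = RunnableLambda(lambda d: f"{d['greeting']}, {d['name']}!")
shout = RunnableLambda(lambda text: text.upper())
chain = greet | shout  # LCEL composition: the output of `greet` feeds into `shout`

print(chain.invoke({"greeting": "hello", "name": "LangChain"}))                          # sync
print(chain.batch([{"greeting": "hi", "name": "A"}, {"greeting": "hey", "name": "B"}]))  # batch
for chunk in chain.stream({"greeting": "hello", "name": "stream"}):                      # streaming
    print(chunk)
# `await chain.ainvoke(...)` provides the async counterpart.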
Classes
- runnables.hub.HubRunnable
 An instance of a runnable stored in the LangChain Hub.
- runnables.openai_functions.OpenAIFunction
 A function description for ChatOpenAI.
- runnables.openai_functions.OpenAIFunctionsRouter
 A runnable that routes to the selected function.
13、langchain.smith
LangSmith utilities.
This module provides utilities for connecting to LangSmith. For more information on LangSmith, see the LangSmith documentation.
Evaluation
LangSmith helps you evaluate Chains and other language model application components using a number of LangChain evaluators. An example of this is shown below, assuming you’ve created a LangSmith dataset called <my_dataset_name>:
from langsmith import Client
from langchain_community.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.smith import RunEvalConfig, run_on_dataset

# Chains may have memory. Passing in a constructor function lets the
# evaluation framework avoid cross-contamination between runs.
def construct_chain():
    llm = ChatOpenAI(temperature=0)
    chain = LLMChain.from_string(llm, "What's the answer to {your_input_key}")
    return chain

# Load off-the-shelf evaluators via config or the EvaluatorType (string or enum)
evaluation_config = RunEvalConfig(
    evaluators=[
        "qa",  # "Correctness" against a reference answer
        "embedding_distance",
        RunEvalConfig.Criteria("helpfulness"),
        RunEvalConfig.Criteria({
            "fifth-grader-score": "Do you have to be smarter than a fifth grader to answer this question?"
        }),
    ]
)

client = Client()
run_on_dataset(
    client,
    "<my_dataset_name>",
    construct_chain,
    evaluation=evaluation_config,
)
You can also create custom evaluators by subclassing the StringEvaluator or LangSmith’s RunEvaluator classes.
from typing import Optional

from langchain.evaluation import StringEvaluator

class MyStringEvaluator(StringEvaluator):

    @property
    def requires_input(self) -> bool:
        return False

    @property
    def requires_reference(self) -> bool:
        return True

    @property
    def evaluation_name(self) -> str:
        return "exact_match"

    def _evaluate_strings(self, prediction, reference=None, input=None, **kwargs) -> dict:
        return {"score": prediction == reference}

evaluation_config = RunEvalConfig(
    custom_evaluators=[MyStringEvaluator()],
)

run_on_dataset(
    client,
    "<my_dataset_name>",
    construct_chain,
    evaluation=evaluation_config,
)
Primary Functions
- arun_on_dataset: Asynchronous function to evaluate a chain, agent, or other LangChain component over a dataset.
- run_on_dataset: Function to evaluate a chain, agent, or other LangChain component over a dataset.
- RunEvalConfig: Class representing the configuration for running evaluation. You can select evaluators by EvaluatorType or config, or you can pass in custom_evaluators.
Classes
- smith.evaluation.config.EvalConfig
 Configuration for a given run evaluator.
- smith.evaluation.config.RunEvalConfig
 Configuration for a run evaluation.
- smith.evaluation.config.SingleKeyEvalConfig
 Configuration for a run evaluator that only requires a single key.
- smith.evaluation.progress.ProgressBarCallback(total)
 A simple progress bar for the console.
- smith.evaluation.runner_utils.ChatModelInput
 Input for a chat model.
- smith.evaluation.runner_utils.EvalError(…)
 Your architecture raised an error.
- smith.evaluation.runner_utils.InputFormatError
 Raised when the input format is invalid.
- smith.evaluation.runner_utils.TestResult
 A dictionary of the results of a single test run.
- smith.evaluation.string_run_evaluator.ChainStringRunMapper
 Extract items to evaluate from the run object from a chain.
- smith.evaluation.string_run_evaluator.LLMStringRunMapper
 Extract items to evaluate from the run object.
- smith.evaluation.string_run_evaluator.StringExampleMapper
 Map an example, or row in the dataset, to the inputs of an evaluation.
- smith.evaluation.string_run_evaluator.StringRunEvaluatorChain
 Evaluate Run and optional examples.
- smith.evaluation.string_run_evaluator.StringRunMapper
 Extract items to evaluate from the run object.
- smith.evaluation.string_run_evaluator.ToolStringRunMapper
 Map an input to the tool.
Functions
- smith.evaluation.name_generation.random_name()
 Generate a random name.
- smith.evaluation.runner_utils.arun_on_dataset(…)
 Run the Chain or language model on a dataset and store traces to the specified project name.
- smith.evaluation.runner_utils.run_on_dataset(…)
 Run the Chain or language model on a dataset and store traces to the specified project name.
14、langchain.storage
Implementations of key-value stores and storage helpers.
This module provides implementations of various key-value stores that conform to a simple key-value interface.
The primary goal of these storages is to support the implementation of caching.
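The stores share a simple mset/mget/yield_keys interface; a self-contained sketch with LocalFileStore (listed below):

from langchain.storage import LocalFileStore

store = LocalFileStore("./kv_store")
store.mset([("alpha", b"1"), ("beta", b"2")])    # values are raw bytes
print(store.mget(["alpha", "beta", "missing"]))  # [b'1', b'2', None]
print(list(store.yield_keys()))                  # e.g. ['alpha', 'beta']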
Classes
- storage.encoder_backed.EncoderBackedStore(…)
 Wraps a store with key and value encoders/decoders.
- storage.file_system.LocalFileStore(root_path, *)
 BaseStore interface that works on the local file system.
2024-05-22 (Wed)