Getting Started With Mistral

These are my study notes for the DeepLearning.AI short course https://www.deeplearning.ai/short-courses/getting-started-with-mistral/.


What you’ll learn in this course

In this course, you’ll access Mistral AI’s collection of open source and commercial models, including the Mixtral 8x7B model, and the latest Mixtral 8x22B. You’ll learn about selecting the right model for your use case, and get hands-on with features like effective prompting techniques, function calling, JSON mode, and Retrieval Augmented Generation (RAG).

In detail:

  • Access and prompt Mistral models via API calls, and decide whether your task is simple (classification), medium (email writing), or advanced (coding) in complexity, weighing speed requirements to choose an appropriate model.
  • Learn to use Mistral’s native function calling, in which you give an LLM tools it can call as needed to perform tasks that are better performed by traditional code, such as querying a database for numerical data.
  • Build a basic RAG system from scratch with similarity search, properly chunk data, create embeddings, and implement this tool as a function in your chat system.
  • Build a chat interface to interact with the Mistral models and ask questions about a document that you upload.

By the end of this course, you’ll be equipped to leverage Mistral AI’s leading open source and commercial models.

Table of Contents

  • Getting Started With Mistral
    • What you’ll learn in this course
  • Overview
  • Prompting Capabilities
      • Load API key and helper function
    • Classification
    • Information Extraction with JSON Mode
    • Personalization
    • Summarization
  • Model Selection
      • Get API Key
    • Mistral Small
    • Mistral Medium
    • Mistral Large
    • Expense reporting task
    • Writing and checking code
    • Natively Fluent in English, French, Spanish, German, and Italian
    • List of Mistral models that you can call:
  • Function Calling
      • Load API key
    • Step 1. User: specify tools and query
      • Tools
      • functools
      • User query
    • Step 2. Model: Generate function arguments
      • Save the chat history
    • Step 3. User: Execute function to obtain tool results
    • Step 4. Model: Generate final answer
  • Basic RAG (Retrieval Augmented Generation)
      • Parse the article with BeautifulSoup
      • Optionally, save the text into a text file
      • Chunking
      • Get embeddings of the chunks
      • Store in a vector database
      • Embed the user query
      • Search for chunks that are similar to the query
    • RAG + Function calling
    • More about RAG
  • Chatbot
    • Panel
    • Basic Chat UI
    • RAG UI
      • Connect the Chat interface with your user-defined function
  • Postscript

Overview

(Course overview slides omitted.)

Prompting Capabilities

  • Note, you can try any of these prompts outside of this classroom, and without coding, by going to the chat interface Le Chat.
    • You can sign up with a free account.
    • Signing up for an account is not required to complete this course.
# !pip install mistralai

requirements.txt

# requirements file
# note which revision of python, for example 3.9.6
# in this file, insert all the pip install needs, including versions
python-dotenv==1.0.1
mistralai==0.1.6
pandas==2.2.1
faiss-cpu==1.8.0
langchain==0.1.12
langchain-mistralai==0.0.5
llama-index==0.10.19
llama-index-embeddings-mistralai==0.1.4
llama-index-llms-mistralai==0.1.6
mistral-haystack==0.0.1
panel==1.4.0

Load API key and helper function

  • Note: You can view or download the helper.py file by clicking on the “Jupyter” logo to access the file directory.
from helper import load_mistral_api_key
load_mistral_api_key()

from helper import mistral
mistral("hello, what can you do?")

Output

'Hello! I can assist with a variety of tasks. Here are a few examples:\n\n1. Answer questions: I can provide information on a wide range of topics, from general knowledge to specific queries.\n2. Set reminders and alarms: I can help you remember important dates or tasks.\n3. Provide news and weather updates: I can give you the latest news and weather information.\n4. Play music and videos: I can play your favorite songs or videos.\n5. Control smart home devices: If you have smart home devices, I can help you control them.\n6. Send messages and make calls: I can help you send messages or make calls to your contacts.'

The helper.py file is as follows:

import os
from dotenv import load_dotenv, find_dotenv
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

api_key = None
dlai_endpoint = None
client = None

def load_env():
    _ = load_dotenv(find_dotenv())

def load_mistral_api_key(ret_key=False):
    load_env()

    global api_key
    global dlai_endpoint
    api_key = os.getenv("MISTRAL_API_KEY")
    dlai_endpoint = os.getenv("DLAI_MISTRAL_API_ENDPOINT")

    global client
    client = MistralClient(api_key=api_key, endpoint=dlai_endpoint)

    if ret_key:
        return api_key, dlai_endpoint

def mistral(user_message, model="mistral-small-latest", is_json=False):
    client = MistralClient(api_key=api_key, endpoint=dlai_endpoint)
    messages = [ChatMessage(role="user", content=user_message)]

    if is_json:
        chat_response = client.chat(
            model=model, messages=messages, response_format={"type": "json_object"}
        )
    else:
        chat_response = client.chat(model=model, messages=messages)

    return chat_response.choices[0].message.content

def get_text_embedding(txt):
    global client
    embeddings_batch_response = client.embeddings(model="mistral-embed", input=txt)
    return embeddings_batch_response.data[0].embedding

import requests
from bs4 import BeautifulSoup
import re

def get_web_article_text(url, file_save_name=None):
    response = requests.get(url)
    html_doc = response.text
    soup = BeautifulSoup(html_doc, 'html.parser')
    tag = soup.find("div", re.compile("^prose--styled"))
    text = tag.text

    if file_save_name:
        f = open(file_save_name, "w")
        f.write(text)
        f.close()

    return text

Classification

prompt = """
You are a bank customer service bot. Your task is to assess customer intent 
and categorize customer inquiry after <<<>>> into one of the following predefined categories:

card arrival
change pin
exchange rate
country support 
cancel transfer
charge dispute

If the text doesn't fit into any of the above categories, classify it as:
customer service

You will only respond with the predefined category. Do not provide explanations or notes. 

###
Here are some examples:

Inquiry: How do I know if I will get my card, or if it is lost? I am concerned about the delivery process and would like to ensure that I will receive my card as expected. Could you please provide information about the tracking process for my card, or confirm if there are any indicators to identify if the card has been lost during delivery?
Category: card arrival

Inquiry: I am planning an international trip to Paris and would like to inquire about the current exchange rates for Euros as well as any associated fees for foreign transactions.
Category: exchange rate 

Inquiry: What countries are getting support? I will be traveling and living abroad for an extended period of time, specifically in France and Germany, and would appreciate any information regarding compatibility and functionality in these regions.
Category: country support

Inquiry: Can I get help starting my computer? I am having difficulty starting my computer, and would appreciate your expertise in helping me troubleshoot the issue. 
Category: customer service
###

<<<
Inquiry: {inquiry}
>>>
Category:
"""

Ask Mistral to check the spelling and grammar of your prompt

response = mistral(f"Please correct the spelling and grammar of \
this prompt and return a text that is the same prompt,\
with the spelling and grammar fixed: {prompt}")
print(response)

Try out the model

mistral(
    response.format(inquiry="I am inquiring about the availability of your cards in the EU")
)

Output

'country support'

Information Extraction with JSON Mode

medical_notes = """
A 60-year-old male patient, Mr. Johnson, presented with symptoms
of increased thirst, frequent urination, fatigue, and unexplained
weight loss. Upon evaluation, he was diagnosed with diabetes,
confirmed by elevated blood sugar levels. Mr. Johnson's weight
is 210 lbs. He has been prescribed Metformin to be taken twice daily
with meals. It was noted during the consultation that the patient is
a current smoker. 
"""
prompt = f"""
Extract information from the following medical notes:
{medical_notes}

Return json format with the following JSON schema: 

{{
    "age": {{"type": "integer"}},
    "gender": {{"type": "string", "enum": ["male", "female", "other"]}},
    "diagnosis": {{"type": "string", "enum": ["migraine", "diabetes", "arthritis", "acne"]}},
    "weight": {{"type": "integer"}},
    "smoking": {{"type": "string", "enum": ["yes", "no"]}}
}}
"""
response = mistral(prompt, is_json=True)
print(response)

Output

{"age": 60, "gender": "male", "diagnosis": "diabetes", "weight": 210, "smoking": "yes"}

Personalization

email = """
Dear mortgage lender, 

What's your 30-year fixed-rate APR, how is it compared to the 15-year fixed rate?

Regards,
Anna
"""
prompt = f"""
You are a mortgage lender customer service bot, and your task is to 
create personalized email responses to address customer questions.
Answer the customer's inquiry using the provided facts below. Ensure 
that your response is clear, concise, and directly addresses the 
customer's question. Address the customer in a friendly and 
professional manner. Sign the email with "Lender Customer Support."   

# Facts
30-year fixed-rate: interest rate 6.403%, APR 6.484%
20-year fixed-rate: interest rate 6.329%, APR 6.429%
15-year fixed-rate: interest rate 5.705%, APR 5.848%
10-year fixed-rate: interest rate 5.500%, APR 5.720%
7-year ARM: interest rate 7.011%, APR 7.660%
5-year ARM: interest rate 6.880%, APR 7.754%
3-year ARM: interest rate 6.125%, APR 7.204%
30-year fixed-rate FHA: interest rate 5.527%, APR 6.316%
30-year fixed-rate VA: interest rate 5.684%, APR 6.062%

# Email
{email}
"""
response = mistral(prompt)
print(response)

Output

Subject: Re: Inquiry Regarding 30-Year and 15-Year Fixed-Rate Mortgages

Dear Anna,

Thank you for reaching out to us. I'm happy to provide you with the information you need.

Our 30-year fixed-rate mortgage has an annual percentage rate (APR) of 6.484%. In comparison, our 15-year fixed-rate mortgage has a lower APR of 5.848%. This difference is due to the shorter loan term, which allows for more interest to be paid off over a shorter period of time.

Please let us know if you have any other questions or if there's anything else we can assist you with.

Best regards,

Summarization

  • We’ll use this article from The Batch
newsletter = """
European AI champion Mistral AI unveiled new large language models and formed an alliance with Microsoft. 

What's new: Mistral AI introduced two closed models, Mistral Large and Mistral Small (joining Mistral Medium, which debuted quietly late last year). Microsoft invested $16.3 million in the French startup, and it agreed to distribute Mistral Large on its Azure platform and let Mistral AI use Azure computing infrastructure. Mistral AI makes the new models available to try for free here and to use on its La Plateforme and via custom deployments.

Model specs: The new models' parameter counts, architectures, and training methods are undisclosed. Like the earlier, open source Mistral 7B and Mixtral 8x7B, they can process 32,000 tokens of input context. Mistral Large achieved 81.2 percent on the MMLU benchmark, outperforming Anthropic's Claude 2, Google's Gemini Pro, and Meta's Llama 2 70B, though falling short of GPT-4. Mistral Small, which is optimized for latency and cost, achieved 72.2 percent on MMLU.
Both models are fluent in French, German, Spanish, and Italian. They're trained for function calling and JSON-format output.
Microsoft's investment in Mistral AI is significant but tiny compared to its $13 billion stake in OpenAI and Google and Amazon's investments in Anthropic, which amount to $2 billion and $4 billion respectively.
Mistral AI and Microsoft will collaborate to train bespoke models for customers including European governments.

Behind the news: Mistral AI was founded in early 2023 by engineers from Google and Meta. The French government has touted the company as a home-grown competitor to U.S.-based leaders like OpenAI. France's representatives in the European Commission argued on Mistral's behalf to loosen the European Union's AI Act oversight on powerful AI models. 

Yes, but: Mistral AI's partnership with Microsoft has divided European lawmakers and regulators. The European Commission, which already was investigating Microsoft's agreement with OpenAI for potential breaches of antitrust law, plans to investigate the new partnership as well. Members of President Emmanuel Macron's Renaissance party criticized the deal's potential to give a U.S. company access to European users' data. However, other French lawmakers support the relationship.

Why it matters: The partnership between Mistral AI and Microsoft gives the startup crucial processing power for training large models and greater access to potential customers around the world. It gives the tech giant greater access to the European market. And it gives Azure customers access to a high-performance model that's tailored to Europe's unique regulatory environment.

We're thinking: Mistral AI has made impressive progress in a short time, especially relative to the resources at its disposal as a startup. Its partnership with a leading hyperscaler is a sign of the tremendous processing and distribution power that remains concentrated in the large, U.S.-headquartered cloud companies.
"""
prompt = f"""
You are a commentator. Your task is to write a report on a newsletter. 
When presented with the newsletter, come up with interesting questions to ask,
and answer each question. 
Afterward, combine all the information and write a report in the markdown format. 

# Newsletter: 
{newsletter}

# Instructions: 
## Summarize:
In clear and concise language, summarize the key points and themes 
presented in the newsletter.

## Interesting Questions: 
Generate three distinct and thought-provoking questions that can be 
asked about the content of the newsletter. For each question:
- After "Q: ", describe the problem 
- After "A: ", provide a detailed explanation of the problem addressed 
in the question.
- Enclose the ultimate answer in <>.

## Write an analysis report
Using the summary and the answers to the interesting questions, 
create a comprehensive report in Markdown format. 
"""
response = mistral(prompt)
print(response)

Output

# Summary:
- Mistral AI, a European AI champion, introduced new large language models, Mistral Large and Mistral Small.
- Microsoft invested $16.3 million in Mistral AI and will distribute Mistral Large on its Azure platform, while Mistral AI will use Azure computing infrastructure.
- Mistral Large outperformed several models on the MMLU benchmark but fell short of GPT-4. Both models are fluent in four European languages and trained for function calling.
- The partnership between Mistral AI and Microsoft has sparked controversy among European lawmakers due to potential antitrust breaches and data access concerns.

# Interesting Questions:

## Q: How does Mistral AI's performance on the MMLU benchmark compare to other models?
A: Mistral AI's new models have demonstrated impressive performance on the MMLU benchmark. Mistral Large achieved 81.2 percent, outperforming Anthropic's Claude 2, Google's Gemini Pro, and Meta's Llama 2 70B. However, it fell short of GPT-4. Mistral Small, optimized for latency and cost, achieved 72.2 percent.

## Q: What are the concerns surrounding Mistral AI's partnership with Microsoft?
A: The partnership between Mistral AI and Microsoft has raised concerns among European lawmakers and regulators. The European Commission plans to investigate the partnership for potential breaches of antitrust law, as it already is investigating Microsoft's agreement with OpenAI. Additionally, there are concerns that the partnership may give Microsoft access to European users' data.

## Q: What does the partnership between Mistral AI and Microsoft mean for the European market?
A: The partnership between Mistral AI and Microsoft has significant implications for the European market. It provides Mistral AI with the processing power needed to train large models and access to potential customers worldwide. For Microsoft, it expands its access to the European market. For Azure customers, it offers a high-performance model tailored to Europe's unique regulatory environment.

# Report:

Mistral AI, a European AI champion, has made significant strides with the introduction of two new large language models, Mistral Large and Mistral Small. These models have shown impressive performance on the MMLU benchmark, outperforming several other models but falling short of GPT-4. Microsoft's investment in Mistral AI and agreement to distribute Mistral Large on its Azure platform will provide the startup with crucial processing power and access to potential customers worldwide.

However, the partnership between Mistral AI and Microsoft has not been without controversy. European lawmakers and regulators have raised concerns about potential antitrust breaches and access to European users' data. The European Commission plans to investigate the partnership, adding to its existing investigation into Microsoft's agreement with OpenAI.

Despite these concerns, the partnership between Mistral AI and Microsoft holds significant implications for the European market. It provides Mistral AI with the resources needed to compete with U.S.-based AI leaders, while giving Microsoft greater access to the European market. For Azure customers, the partnership offers a high-performance model tailored to Europe's unique regulatory environment.

In conclusion, Mistral AI's progress, especially given its status as a startup, is commendable. Its partnership with Microsoft, however, highlights the concentration of processing and distribution power in large, U.S.-headquartered cloud companies. The ensuing debate around data access and antitrust law underscores the complexities of technology partnerships in today's globalized world.

Model Selection

(Course slides omitted: model overview, performance comparison, and choosing a different model.)

Get API Key

from helper import load_mistral_api_key
api_key, dlai_endpoint = load_mistral_api_key(ret_key=True)
  • Note: in the classroom, if you print out this api_key variable, it is not a real API key (for security reasons).
  • If you wish to run this code on your own machine, outside of the classroom, you can still reuse the code that you see in helper.py.
  • It uses the python-dotenv library to securely store and load sensitive information such as API keys.
import os
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

def mistral(user_message, model="mistral-small-latest", is_json=False):
    client = MistralClient(api_key=api_key, endpoint=dlai_endpoint)
    messages = [ChatMessage(role="user", content=user_message)]

    if is_json:
        chat_response = client.chat(
            model=model, messages=messages, response_format={"type": "json_object"}
        )
    else:
        chat_response = client.chat(model=model, messages=messages)

    return chat_response.choices[0].message.content
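If you run this code locally, python-dotenv reads the key from a .env file in your project directory. A minimal sketch (the variable names match helper.py; the key value is a placeholder, and outside the classroom the DLAI_MISTRAL_API_ENDPOINT entry is not needed):

# .env  (keep this file out of version control)
# MISTRAL_API_KEY=your-api-key-here

from dotenv import load_dotenv, find_dotenv
import os

load_dotenv(find_dotenv())  # loads the .env entries into the process environment
api_key = os.getenv("MISTRAL_API_KEY")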

Mistral Small

Good for simple tasks, fast inference, lower cost.

  • classification
prompt = """
Classify the following email to determine if it is spam or not.
Only respond with the exact text "Spam" or "Not Spam". 

# Email:
🎉 Urgent! You've Won a $1,000,000 Cash Prize! 
💰 To claim your prize, please click on the link below: 
https://bit.ly/claim-your-prize
"""
mistral(prompt, model="mistral-small-latest")

Output

'Spam'

Mistral Medium

Good for intermediate tasks such as language transformation.

  • Composing text based on provided context (e.g. writing a customer service email based on purchase information).
prompt = """
Compose a welcome email for new customers who have just made 
their first purchase with your product. 
Start by expressing your gratitude for their business, 
and then convey your excitement for having them as a customer. 
Include relevant details about their recent order. 
Sign the email with "The Fun Shop Team".

Order details:
- Customer name: Anna
- Product: hat 
- Estimate date of delivery: Feb. 25, 2024
- Return policy: 30 days
"""
response_medium = mistral(prompt, model="mistral-medium-latest")
print(response_medium)

Output

Subject: Welcome to The Fun Shop - Thank You for Your First Purchase!

Dear Anna,

We hope this email finds you well! We are absolutely thrilled to have you as a part of The Fun Shop family. We truly appreciate your trust in us and your decision to make your first purchase with us.

We wanted to personally thank you for choosing our hat as your first product. We are confident that you will love it and that it will bring you joy and happiness.

Your order is currently being processed and is estimated to be delivered to you on February 25, 2024. We will make sure to keep you updated on the status of your order and will notify you as soon as it has been shipped.

In the meantime, we would like to remind you of our 30-day return policy. If for any reason, you are not completely satisfied with your purchase, please don't hesitate to reach out to us. We will do everything we can to make it right.

Once again, thank you for choosing The Fun Shop. We are excited to have you as a customer and can't wait to see you in your new hat!

Best regards,
The Fun Shop Team

Mistral Large

Good for complex tasks that require advanced reasoning.

  • Math and reasoning with numbers.
prompt = """
Calculate the difference in payment dates between the two \
customers whose payment amounts are closest to each other \
in the following dataset. Do not write code.

# dataset: 
'{"transaction_id":{"0":"T1001","1":"T1002","2":"T1003","3":"T1004","4":"T1005"},"customer_id":{"0":"C001","1":"C002","2":"C003","3":"C002","4":"C001"},"payment_amount":{"0":125.5,"1":89.99,"2":120.0,"3":54.3,"4":210.2},
"payment_date":{"0":"2021-10-05","1":"2021-10-06","2":"2021-10-07","3":"2021-10-05","4":"2021-10-08"},"payment_status":{"0":"Paid","1":"Unpaid","2":"Paid","3":"Paid","4":"Pending"}
}'
"""

Using mistral small

response_small = mistral(prompt, model="mistral-small-latest")
print(response_small)

Output

To solve this, let's first find the two customers whose payment amounts are closest to each other:

1. Sort the payment amounts in ascending order: 54.3, 89.99, 120.0, 125.5, 210.2
2. The two closest amounts are 89.99 and 120.0.

Now, let's find the difference in payment dates for these two customers:

1. Look up the payment dates for customer_ids C002 and C003.
2. The payment date for C002 is "2021-10-06" and the payment date for C003 is "2021-10-07".
3. Calculate the difference in payment dates: 2021-10-07 - 2021-10-06 = 1 day.

Using mistral large

response_large = mistral(prompt, model="mistral-large-latest")
print(response_large)

Output

First, let's identify the two closest payment amounts:

1. T1001 (C001) pays $125.50
2. T1002 (C002) pays $89.99
3. T1003 (C003) pays $120.00
4. T1004 (C002) pays $54.30
5. T1005 (C001) pays $210.20

The two closest payment amounts are $125.50 (T1001) and $120.00 (T1003) with a difference of $5.50.

Now, let's calculate the difference between the payment dates of these two transactions:

1. T1001 was paid on 2021-10-05
2. T1003 was paid on 2021-10-07

To find the difference between two dates, subtract the earlier date from the later date. Here, the difference between 2021-10-07 and 2021-10-05 is 2 days.

So, the difference in payment dates between the two customers whose payment amounts are closest to each other is 2 days.
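Note that mistral-small picked the wrong pair: 125.50 and 120.00 differ by only 5.50, while its choice of 89.99 and 120.00 differ by 30.01. mistral-large got it right. You can verify with a few lines of plain Python (a quick check I added; not part of the course notebook):

from datetime import date
from itertools import combinations

amounts = {"T1001": 125.50, "T1002": 89.99, "T1003": 120.00, "T1004": 54.30, "T1005": 210.20}
dates = {
    "T1001": date(2021, 10, 5), "T1002": date(2021, 10, 6), "T1003": date(2021, 10, 7),
    "T1004": date(2021, 10, 5), "T1005": date(2021, 10, 8),
}

# the pair of transactions with the smallest difference in payment amount
a, b = min(combinations(amounts, 2), key=lambda p: abs(amounts[p[0]] - amounts[p[1]]))
print(a, b, abs((dates[b] - dates[a]).days))  # T1001 T1003 2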

Expense reporting task

transactions = """
McDonald's: 8.40
Safeway: 10.30
Carrefour: 15.00
Toys R Us: 20.50
Panda Express: 10.20
Beanie Baby Outlet: 25.60
World Food Wraps: 22.70
Stuffed Animals Shop: 45.10
Sanrio Store: 85.70
"""

prompt = f"""
Given the purchase details, how much did I spend on each category:
1) restaurants
2) groceries
3) stuffed animals and props
{transactions}
"""

mistral-small-latest

response_small = mistral(prompt, model="mistral-small-latest")
print(response_small)

Output

To calculate your spending in each category, I've assigned the purchases to the corresponding categories as follows:

1) Restaurants: McDonald's and Panda Express
2) Groceries: Safeway and Carrefour
3) Stuffed animals and props: Toys R Us, Beanie Baby Outlet, Stuffed Animals Shop, and Sanrio Store

Here's the breakdown of your spending in each category:

1) Restaurants:
- McDonald's: $8.40
- Panda Express: $10.20
Total spent on restaurants: $18.60

2) Groceries:
- Safeway: $10.30
- Carrefour: $15.00
Total spent on groceries: $25.30

3) Stuffed animals and props:
- Toys R Us: $20.50
- Beanie Baby Outlet: $25.60
- Stuffed Animals Shop: $45.10
- Sanrio Store: $85.70
Total spent on stuffed animals and props: $176.90

mistral-large-latest

response_large = mistral(prompt, model="mistral-large-latest")
print(response_large)

Output

Sure, I can help you categorize your spending. Here's how much you spent on each category:

1) Restaurants:
- McDonald's: $8.40
- Panda Express: $10.20
- World Food Wraps: $22.70
Total spent on restaurants: $8.40 + $10.20 + $22.70 = $41.30

2) Groceries:
- Safeway: $10.30
- Carrefour: $15.00
Total spent on groceries: $10.30 + $15.00 = $25.30

3) Stuffed animals and props:
- Toys R Us: $20.50
- Beanie Baby Outlet: $25.60
- Stuffed Animals Shop: $45.10
- Sanrio Store: $85.70
Total spent on stuffed animals and props: $20.50 + $25.60 + $45.10 + $85.70 = $176.90

So, you spent $41.30 on restaurants, $25.30 on groceries, and $176.90 on stuffed animals and props.

Writing and checking code

user_message = """
Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target.

You may assume that each input would have exactly one solution, and you may not use the same element twice.

You can return the answer in any order.

Your code should pass these tests:

assert twoSum([2,7,11,15], 9) == [0,1]
assert twoSum([3,2,4], 6) == [1,2]
assert twoSum([3,3], 6) == [0,1]
"""
print(mistral(user_message, model="mistral-large-latest"))

Output

Here's a Python solution that uses a dictionary to store the previously seen numbers and their indices. It iterates through the array and checks if the current number's complement (target - current number) is in the dictionary. If it is, it returns the indices of the current number and the complement. If it's not, it adds the current number and its index to the dictionary.

```python
def twoSum(nums, target):
    seen = {}
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
```

This solution has a time complexity of O(n) and a space complexity of O(n), where n is the length of the input array. It passes all the test cases provided:

```python
assert twoSum([2,7,11,15], 9) == [0,1]
assert twoSum([3,2,4], 6) == [1,2]
assert twoSum([3,3], 6) == [0,1]
```

Natively Fluent in English, French, Spanish, German, and Italian

  • This means that you can use Mistral models for more than translating from one language to another.
  • If you are a native Spanish speaker, for instance, you can communicate with Mistral models in Spanish for any of your tasks.
user_message = """
Lequel est le plus lourd une livre de fer ou un kilogramme de plume
"""
print(mistral(user_message, model="mistral-large-latest"))

Output

Une livre de fer est plus légère qu'un kilogramme de plumes.

Une livre (lb) est une unité de mesure de masse principalement utilisée aux États-Unis et dans d'autres systèmes de mesure impériaux, tandis qu'un kilogramme (kg) est l'unité de base de masse dans le système international d'unités (SI).

1 livre est égale à environ 0,453592 kilogrammes. Donc, une livre de fer (0,453592 kg) est plus légère qu'un kilogramme de plumes (1 kg).

La confusion dans cette question vient du fait que les plumes sont généralement considérées comme légères et le fer comme lourd, mais cela dépend de la quantité. Dans ce cas, la quantité de plumes (un kilogramme) est plus importante que la quantité de fer (une livre), ce qui en fait un poids plus lourd.

List of Mistral models that you can call:

You can also call the open-source Mistral models via API calls.
Here is the list of models that you can try:

open-mistral-7b
open-mixtral-8x7b
open-mixtral-8x22b
mistral-small-latest
mistral-medium-latest
mistral-large-latest

For example:

mistral(prompt, model="open-mixtral-8x22b")

Note that we just released the open-mixtral-8x22b model. Check out our release blog for details.
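To compare models side by side, you could loop over the list with the mistral helper defined earlier (a sketch; pick any prompt you like):

models = [
    "open-mistral-7b",
    "open-mixtral-8x7b",
    "open-mixtral-8x22b",
    "mistral-small-latest",
    "mistral-medium-latest",
    "mistral-large-latest",
]
for m in models:
    print(f"--- {m} ---")
    print(mistral("In one sentence, what is function calling?", model=m))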

Function Calling

(Slide: function-calling overview omitted.)

Load API key

from helper import load_mistral_api_key
api_key, dlai_endpoint = load_mistral_api_key(ret_key=True)
import pandas as pd

data = {
    "transaction_id": ["T1001", "T1002", "T1003", "T1004", "T1005"],
    "customer_id": ["C001", "C002", "C003", "C002", "C001"],
    "payment_amount": [125.50, 89.99, 120.00, 54.30, 210.20],
    "payment_date": ["2021-10-05", "2021-10-06", "2021-10-07", "2021-10-05", "2021-10-08"],
    "payment_status": ["Paid", "Unpaid", "Paid", "Paid", "Pending"],
}
df = pd.DataFrame(data)
df

Output

  transaction_id customer_id  payment_amount payment_date payment_status
0          T1001        C001          125.50   2021-10-05           Paid
1          T1002        C002           89.99   2021-10-06         Unpaid
2          T1003        C003          120.00   2021-10-07           Paid
3          T1004        C002           54.30   2021-10-05           Paid
4          T1005        C001          210.20   2021-10-08        Pending

How you might answer data questions without function calling

  • Not recommended, but an example to serve as a contrast to function calling.
data = """
    "transaction_id": ["T1001", "T1002", "T1003", "T1004", "T1005"],
    "customer_id": ["C001", "C002", "C003", "C002", "C001"],
    "payment_amount": [125.50, 89.99, 120.00, 54.30, 210.20],
    "payment_date": ["2021-10-05", "2021-10-06", "2021-10-07", "2021-10-05", "2021-10-08"],
    "payment_status": ["Paid", "Unpaid", "Paid", "Paid", "Pending"],
"""

transaction_id = "T1001"

prompt = f"""
Given the following data, what is the payment status for \
transaction_id={transaction_id}?

data:
{data}
"""

import os
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

def mistral(user_message, model="mistral-small-latest", is_json=False):
    client = MistralClient(api_key=api_key, endpoint=dlai_endpoint)
    messages = [ChatMessage(role="user", content=user_message)]
    if is_json:
        chat_response = client.chat(
            model=model, messages=messages, response_format={"type": "json_object"}
        )
    else:
        chat_response = client.chat(model=model, messages=messages)
    return chat_response.choices[0].message.content
response = mistral(prompt)
print(response)

Output

The payment status for transaction_id=T1001 is "Paid".

Step 1. User: specify tools and query

Tools

  • You can define all tools that you might want the model to call.
import json

def retrieve_payment_status(df: pd.DataFrame, transaction_id: str) -> str:
    if transaction_id in df.transaction_id.values:
        return json.dumps(
            {"status": df[df.transaction_id == transaction_id].payment_status.item()}
        )
    return json.dumps({"error": "transaction id not found."})

status = retrieve_payment_status(df, transaction_id="T1001")
print(status)

Output

{"status": "Paid"}
def retrieve_payment_date(df: pd.DataFrame, transaction_id: str) -> str:
    if transaction_id in df.transaction_id.values:
        return json.dumps(
            {"date": df[df.transaction_id == transaction_id].payment_date.item()}
        )
    return json.dumps({"error": "transaction id not found."})

date = retrieve_payment_date(df, transaction_id="T1002")
print(date)

Output

{"date": "2021-10-06"}
  • You can outline the function specifications with a JSON schema.
tool_payment_status = {
    "type": "function",
    "function": {
        "name": "retrieve_payment_status",
        "description": "Get payment status of a transaction",
        "parameters": {
            "type": "object",
            "properties": {
                "transaction_id": {
                    "type": "string",
                    "description": "The transaction id.",
                }
            },
            "required": ["transaction_id"],
        },
    },
}

tool_payment_date = {
    "type": "function",
    "function": {
        "name": "retrieve_payment_date",
        "description": "Get payment date of a transaction",
        "parameters": {
            "type": "object",
            "properties": {
                "transaction_id": {
                    "type": "string",
                    "description": "The transaction id.",
                }
            },
            "required": ["transaction_id"],
        },
    },
}

tools = [tool_payment_status, tool_payment_date]

functools

import functools

names_to_functions = {
    "retrieve_payment_status": functools.partial(retrieve_payment_status, df=df),
    "retrieve_payment_date": functools.partial(retrieve_payment_date, df=df),
}

names_to_functions["retrieve_payment_status"](transaction_id="T1001")

User query

  • Example: “What’s the status of my transaction?”
from mistralai.models.chat_completion import ChatMessage

chat_history = [
    ChatMessage(role="user", content="What's the status of my transaction?")
]

Step 2. Model: Generate function arguments

from mistralai.client import MistralClient

model = "mistral-large-latest"
client = MistralClient(
    api_key=os.getenv("MISTRAL_API_KEY"),
    endpoint=os.getenv("DLAI_MISTRAL_API_ENDPOINT"),
)

response = client.chat(
    model=model, messages=chat_history, tools=tools, tool_choice="auto"
)
response.choices[0].message.content

Output

'To help you with that, I would need the transaction ID. Could you please provide it?'

Save the chat history

chat_history.append(
    ChatMessage(role="assistant", content=response.choices[0].message.content)
)
chat_history.append(ChatMessage(role="user", content="My transaction ID is T1001."))
chat_history

Output

[ChatMessage(role='user', content="What's the status of my transaction?", name=None, tool_calls=None),
 ChatMessage(role='assistant', content='To help you with that, I would need the transaction ID. Could you please provide it?', name=None, tool_calls=None),
 ChatMessage(role='user', content='My transaction ID is T1001.', name=None, tool_calls=None)]
response = client.chat(
    model=model, messages=chat_history, tools=tools, tool_choice="auto"
)
response

Output

ChatCompletionResponse(id='0d16c96bc7ec4a6297be76d1ad8710bb', object='chat.completion', created=1717307759, model='mistral-large-latest', choices=[ChatCompletionResponseChoice(index=0, message=ChatMessage(role='assistant', content='', name=None, tool_calls=[ToolCall(id='4cSn7rumW', type=<ToolType.function: 'function'>, function=FunctionCall(name='retrieve_payment_status', arguments='{"transaction_id": "T1001"}'))]), finish_reason=<FinishReason.tool_calls: 'tool_calls'>)], usage=UsageInfo(prompt_tokens=193, total_tokens=223, completion_tokens=30))

Step 3. User: Execute function to obtain tool results

  • Currently, the user is the one who will execute these functions (the model will not execute these functions on its own).
tool_function = response.choices[0].message.tool_calls[0].function
print(tool_function)

Output

name='retrieve_payment_status' arguments='{"transaction_id": "T1001"}'
  • The function arguments are expected to be a Python dictionary, not a string.
  • To make this string into a dictionary, you can use json.loads()
args = json.loads(tool_function.arguments)
print(args)

Output

{'transaction_id': 'T1001'}
  • Recall the functools dictionary that you made earlier
import functools

names_to_functions = {
    "retrieve_payment_status": functools.partial(retrieve_payment_status, df=df),
    "retrieve_payment_date": functools.partial(retrieve_payment_date, df=df),
}
function_result = names_to_functions[tool_function.name](**args)
function_result
  • The output of the function call can be saved as a chat message, with the role “tool”.
tool_msg = ChatMessage(role="tool", name=tool_function.name, content=function_result)
chat_history.append(tool_msg)

chat_history

Output

[ChatMessage(role='user', content="What's the status of my transaction?", name=None, tool_calls=None),
 ChatMessage(role='assistant', content='To help you with that, I would need the transaction ID. Could you please provide it?', name=None, tool_calls=None),
 ChatMessage(role='user', content='My transaction ID is T1001.', name=None, tool_calls=None),
 ChatMessage(role='assistant', content='', name=None, tool_calls=[ToolCall(id='4cSn7rumW', type=<ToolType.function: 'function'>, function=FunctionCall(name='retrieve_payment_status', arguments='{"transaction_id": "T1001"}'))]),
 ChatMessage(role='tool', content='{"status": "Paid"}', name='retrieve_payment_status', tool_calls=None)]

Step 4. Model: Generate final answer

  • The model can now reply to the user query, given the information provided by the “tool”.
response = client.chat(model=model, messages=chat_history)
response.choices[0].message.content

Output

"The status of your transaction T1001 is 'Paid'. If you have any more questions or need further assistance, feel free to ask."

Basic RAG (Retrieval Augmented Generation)

What is RAG

(Slide: RAG overview omitted.)

# ! pip install faiss-cpu "mistralai>=0.1.2"

Parse the article with BeautifulSoup

import requests
from bs4 import BeautifulSoup
import re

response = requests.get(
    "https://www.deeplearning.ai/the-batch/a-roadmap-explores-how-ai-can-detect-and-mitigate-greenhouse-gases/"
)
html_doc = response.text
soup = BeautifulSoup(html_doc, "html.parser")
tag = soup.find("div", re.compile("^prose--styled"))
text = tag.text
print(text)

Optionally, save the text into a text file

  • You can upload the text file into a chat interface in the next lesson.
  • To download this file to your own machine, click on the “Jupyter” logo to view the file directory.
file_name = "AI_greenhouse_gas.txt"
with open(file_name, 'w') as file:
    file.write(text)

Chunking

chunk_size = 512
chunks = [text[i : i + chunk_size] for i in range(0, len(text), chunk_size)]
len(chunks)

Output

8
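Fixed-size character chunks can cut sentences in half at chunk boundaries. A common refinement, not used in this lesson, is to overlap consecutive chunks so that text near a boundary appears in both (a sketch):

def chunk_text(text, chunk_size=512, overlap=64):
    # consecutive chunks share `overlap` characters
    step = chunk_size - overlap
    return [text[i : i + chunk_size] for i in range(0, len(text), step)]

overlapping_chunks = chunk_text(text)  # slightly more chunks than with step == chunk_size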

Get embeddings of the chunks

import os
from mistralai.client import MistralClient

def get_text_embedding(txt):
    client = MistralClient(api_key=api_key, endpoint=dlai_endpoint)
    embeddings_batch_response = client.embeddings(model="mistral-embed", input=txt)
    return embeddings_batch_response.data[0].embedding

import numpy as np

text_embeddings = np.array([get_text_embedding(chunk) for chunk in chunks])
len(text_embeddings[0])

Output

1024

Store in a vector database

  • In this classroom, you’ll use Faiss
import faiss

d = text_embeddings.shape[1]
index = faiss.IndexFlatL2(d)
index.add(text_embeddings)

Embed the user query

question = "What are the ways that AI can reduce emissions in Agriculture?"
question_embeddings = np.array([get_text_embedding(question)])

Search for chunks that are similar to the query

D, I = index.search(question_embeddings, k=2)
print(I)

Output

[[4 5]]
retrieved_chunk = [chunks[i] for i in I.tolist()[0]]
print(retrieved_chunk)

Output

['data to help factories use more recycled materials, cut waste, minimize energy use, and reduce downtime. Similarly, they can optimize supply chains to reduce emissions contributed by logistics.\xa0Agriculture.\xa0Farmers use AI-equipped sensors to simulate different crop rotations and weather events to forecast crop yield or loss. Armed with this data, food producers can cut waste and reduce carbon footprints. The authors cite lack of food-related datasets and investment in adapting farming practices as primary b', 'arriers to taking full advantage of AI in the food industry.Transportation.\xa0AI systems can reduce greenhouse-gas emissions by improving traffic flow, ameliorating congestion, and optimizing public transportation. Moreover, reinforcement learning can reduce the impact of electric vehicles on the power grid by optimizing their charging. More data, uniform standards, and AI talent are needed to realize this potential.Materials.\xa0Materials scientists use AI models to study traits of existing materials and design']
prompt = f"""
Context information is below.
---------------------
{retrieved_chunk}
---------------------
Given the context information and not prior knowledge, answer the query.
Query: {question}
Answer:
"""
from mistralai.models.chat_completion import ChatMessage

def mistral(user_message, model="mistral-small-latest", is_json=False):
    client = MistralClient(api_key=api_key, endpoint=dlai_endpoint)
    messages = [ChatMessage(role="user", content=user_message)]
    if is_json:
        chat_response = client.chat(
            model=model, messages=messages, response_format={"type": "json_object"}
        )
    else:
        chat_response = client.chat(model=model, messages=messages)
    return chat_response.choices[0].message.content
response = mistral(prompt)
print(response)

Output

In Agriculture, AI can reduce emissions by helping farmers use AI-equipped sensors to simulate different crop rotations and weather events to forecast crop yield or loss. With this data, food producers can cut waste and reduce their carbon footprints. However, the authors cite lack of food-related datasets and investment in adapting farming practices as primary barriers to taking full advantage of AI in the food industry.

RAG + Function calling

def qa_with_context(text, question, chunk_size=512):
    # split document into chunks
    chunks = [text[i : i + chunk_size] for i in range(0, len(text), chunk_size)]

    # load into a vector database
    text_embeddings = np.array([get_text_embedding(chunk) for chunk in chunks])
    d = text_embeddings.shape[1]
    index = faiss.IndexFlatL2(d)
    index.add(text_embeddings)

    # create embeddings for a question
    question_embeddings = np.array([get_text_embedding(question)])

    # retrieve similar chunks from the vector database
    D, I = index.search(question_embeddings, k=2)
    retrieved_chunk = [chunks[i] for i in I.tolist()[0]]

    # generate a response based on the retrieved relevant text chunks
    prompt = f"""
    Context information is below.
    ---------------------
    {retrieved_chunk}
    ---------------------
    Given the context information and not prior knowledge, answer the query.
    Query: {question}
    Answer:
    """
    response = mistral(prompt)
    return response

import functools

names_to_functions = {"qa_with_context": functools.partial(qa_with_context, text=text)}

tools = [
    {
        "type": "function",
        "function": {
            "name": "qa_with_context",
            "description": "Answer user question by retrieving relevant context",
            "parameters": {
                "type": "object",
                "properties": {
                    "question": {
                        "type": "string",
                        "description": "user question",
                    }
                },
                "required": ["question"],
            },
        },
    },
]

question = """
What are the ways AI can mitigate climate change in transportation?
"""

client = MistralClient(api_key=api_key, endpoint=dlai_endpoint)

response = client.chat(
    model="mistral-large-latest",
    messages=[ChatMessage(role="user", content=question)],
    tools=tools,
    tool_choice="any",
)
response

Output

ChatCompletionResponse(id='5f17e258ea714a919066b8aca82cc750', object='chat.completion', created=1717309362, model='mistral-large-latest', choices=[ChatCompletionResponseChoice(index=0, message=ChatMessage(role='assistant', content='', name=None, tool_calls=[ToolCall(id='uPQRDTjUQ', type=<ToolType.function: 'function'>, function=FunctionCall(name='qa_with_context', arguments='{"question": "What are the ways AI can mitigate climate change in transportation?"}'))]), finish_reason=<FinishReason.tool_calls: 'tool_calls'>)], usage=UsageInfo(prompt_tokens=92, total_tokens=126, completion_tokens=34))
tool_function = response.choices[0].message.tool_calls[0].function
tool_function

Output

FunctionCall(name='qa_with_context', arguments='{"question": "What are the ways AI can mitigate climate change in transportation?"}')
import json

args = json.loads(tool_function.arguments)
args

Output

{'question': 'What are the ways AI can mitigate climate change in transportation?'}
function_result = names_to_functions[tool_function.name](**args)
function_result

Output

'Based on the provided context, the specific ways AI can mitigate climate change in transportation are not explicitly mentioned. However, it is mentioned that AI has "high-potential opportunities" in various sectors including manufacturing, food production, and transportation. These opportunities are not detailed, but it\'s reasonable to infer that they could involve optimizing transportation routes for efficiency, developing more energy-efficient vehicles through AI-driven simulation, or utilizing AI for predictive maintenance to reduce waste and energy usage. Without more specific information, these are the most likely areas where AI could contribute to mitigating climate change in transportation.'
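The lesson stops at the tool result. To mirror Step 4 of the function-calling lesson, you could feed the result back to the model for a final, polished answer (a sketch reusing the objects defined above):

messages = [
    ChatMessage(role="user", content=question),
    response.choices[0].message,  # the assistant message that requested the tool call
    ChatMessage(role="tool", name=tool_function.name, content=function_result),
]
final_response = client.chat(model="mistral-large-latest", messages=messages)
print(final_response.choices[0].message.content)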

More about RAG

To learn about more advanced chunking and retrieval methods, you can check out:

  • Advanced Retrieval for AI with Chroma
    • Sentence window retrieval
    • Auto-merge retrieval
  • Building and Evaluating Advanced RAG Applications
    • Query Expansion
    • Cross-encoder reranking
    • Training and utilizing Embedding Adapters

Chatbot

import os
from mistralai.models.chat_completion import ChatMessage
from mistralai.client import MistralClient

Panel

Panel is an open-source Python library that you can use to create dashboards and apps.

import panel as pn
pn.extension()

Basic Chat UI

def run_mistral(contents, user, chat_interface):
    client = MistralClient(api_key=api_key, endpoint=dlai_endpoint)
    messages = [ChatMessage(role="user", content=contents)]
    chat_response = client.chat(model="mistral-large-latest", messages=messages)
    return chat_response.choices[0].message.content

chat_interface = pn.chat.ChatInterface(callback=run_mistral, callback_user="Mistral")
chat_interface

Output

(Screenshot of the basic chat UI omitted.)

RAG UI

Below is the RAG code that you used in the previous lesson.

import requests
from bs4 import BeautifulSoup
import re

response = requests.get(
    "https://www.deeplearning.ai/the-batch/a-roadmap-explores-how-ai-can-detect-and-mitigate-greenhouse-gases/"
)
html_doc = response.text
soup = BeautifulSoup(html_doc, "html.parser")
tag = soup.find("div", re.compile("^prose--styled"))
text = tag.text
print(text)

# Optionally save this text into a file.
file_name = "AI_greenhouse_gas.txt"
with open(file_name, 'w') as file:
    file.write(text)

import numpy as np
import faiss

client = MistralClient(
    api_key=os.getenv("MISTRAL_API_KEY"),
    endpoint=os.getenv("DLAI_MISTRAL_API_ENDPOINT")
)

prompt = """
Context information is below.
---------------------
{retrieved_chunk}
---------------------
Given the context information and not prior knowledge, answer the query.
Query: {question}
Answer:
"""

def get_text_embedding(input):
    embeddings_batch_response = client.embeddings(model="mistral-embed", input=input)
    return embeddings_batch_response.data[0].embedding

def run_mistral(user_message, model="mistral-large-latest"):
    messages = [ChatMessage(role="user", content=user_message)]
    chat_response = client.chat(model=model, messages=messages)
    return chat_response.choices[0].message.content

def answer_question(question, user, instance):
    text = file_input.value.decode("utf-8")

    # split document into chunks
    chunk_size = 2048
    chunks = [text[i : i + chunk_size] for i in range(0, len(text), chunk_size)]

    # load into a vector database
    text_embeddings = np.array([get_text_embedding(chunk) for chunk in chunks])
    d = text_embeddings.shape[1]
    index = faiss.IndexFlatL2(d)
    index.add(text_embeddings)

    # create embeddings for a question
    question_embeddings = np.array([get_text_embedding(question)])

    # retrieve similar chunks from the vector database
    D, I = index.search(question_embeddings, k=2)
    retrieved_chunk = [chunks[i] for i in I.tolist()[0]]

    # generate a response based on the retrieved relevant text chunks
    response = run_mistral(
        prompt.format(retrieved_chunk=retrieved_chunk, question=question)
    )
    return response

Connect the Chat interface with your user-defined function

  • Note, you can find some sample text files to upload to this RAG UI by clicking on the “Jupyter” logo to view the file directory of the lesson.
  • Or you can create any text file and copy-paste some text from a web article.
file_input = pn.widgets.FileInput(accept=".txt", value="", height=50)

chat_interface = pn.chat.ChatInterface(
    callback=answer_question,
    callback_user="Mistral",
    header=pn.Row(file_input, "### Upload a text file to chat with it!"),
)
chat_interface.send(
    "Send a message to get a reply from Mistral!", user="System", respond=False
)
chat_interface

Output

(Screenshot of the RAG chat UI omitted.)

Postscript

Finished this short course at 14:38 on June 2, 2024.
