Calling Common Large-Model APIs from Python: A Walkthrough
Calling the Tongyi Qianwen (Qwen) API
First install the dashscope library by running pip install dashscope from the command line. The example below streams a reply from the qwen-max model.
import dashscope

# The SDK reads the API key from the DASHSCOPE_API_KEY environment
# variable, or set it explicitly: dashscope.api_key = "your_api_key"

def call_with_stream():
    messages = [{'role': 'user', 'content': 'Introduce yourself'}]
    responses = dashscope.Generation.call(
        "qwen-max",
        messages=messages,
        result_format='message',
        stream=True,
        incremental_output=True,  # each response carries only the new fragment
    )
    for response in responses:
        if response.status_code == 200:
            print(response.output.choices[0]['message']['content'], end='')
        else:
            print('Request id: %s, Status code: %s, error code: %s, error message: %s' % (
                response.request_id, response.status_code,
                response.code, response.message))

if __name__ == '__main__':
    call_with_stream()
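With incremental output enabled, each streamed response carries only the newly generated fragment, so the full reply is the concatenation of all fragments. A minimal, SDK-independent sketch of that accumulation (mock strings stand in for the streamed responses):

```python
# Accumulate incremental stream fragments into the complete reply.
# In the real loop each piece would come from
# response.output.choices[0]['message']['content'].
def accumulate_stream(chunks):
    full = ''
    for piece in chunks:
        print(piece, end='', flush=True)  # echo each fragment as it arrives
        full += piece
    print()
    return full

reply = accumulate_stream(['Hello', ', I am ', 'Qwen.'])
```

The same pattern applies to every streaming API in this article: print as you go for responsiveness, join at the end for the full text.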
Calling the Zhipu AI API
Install the Zhipu AI Python SDK with pip install zhipuai. The example below streams a reply from the glm-4 model.
from zhipuai import ZhipuAI

client = ZhipuAI(api_key="your_api_key")
response = client.chat.completions.create(
    model="glm-4",
    messages=[
        {"role": "user", "content": "Hello! What is your name?"},
    ],
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta)
Calling the iFlytek Spark API
The Spark API is called over WebSocket; install the client library with pip install websocket-client (note: the unrelated websocket package on PyPI is not the one needed here).
# sparkapi is the demo module shipped with iFlytek's official Python
# example; place it alongside this script.
import sparkapi

appid = "your_app_id"
api_secret = "your_api_secret"
api_key = "your_api_key"
domain = "generalv3"                                # model version for v3.1
spark_url = "wss://spark-api.xf-yun.com/v3.1/chat"  # v3.1 WebSocket endpoint

def gettext(role, content):
    jsoncon = {}
    jsoncon["role"] = role
    jsoncon["content"] = content
    return jsoncon

if __name__ == '__main__':
    question = gettext("user", "Hello, what is the weather like today?")
    sparkapi.main(appid, api_key, api_secret, spark_url, domain, [question])
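The last argument to sparkapi.main is the full message history, so a multi-turn conversation just appends gettext entries in order. A small sketch (the assistant turn here is illustrative, not real model output):

```python
# Build a multi-turn history; the whole list would be passed to sparkapi.main.
def gettext(role, content):
    return {"role": role, "content": content}

history = [
    gettext("user", "Hello, what is the weather like today?"),
    gettext("assistant", "I cannot access live weather data."),
    gettext("user", "Then tell me a joke instead."),
]
print(len(history))
```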
Calling the Baichuan API
Get an API key: visit the Baichuan website, register and log in, then apply for an API key in the personal center or API management page.
The example below uses the Baichuan2-Turbo model to generate text.
import requests
import json

url = "https://api.baichuan-ai.com/v1/chat/completions"
api_key = "your_api_key"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}"
}
data = {
    "model": "Baichuan2-Turbo",
    "messages": [
        {
            "role": "user",
            "content": "Hello, please introduce the history of artificial intelligence."
        }
    ],
    "stream": False,
    "temperature": 0.3,
    "top_p": 0.85,
    "top_k": 5,
    "with_search_enhance": False
}
response = requests.post(url, headers=headers, json=data)
if response.status_code == 200:
    result = json.loads(response.text)
    print(result["choices"][0]["message"]["content"])
else:
    print(f"Request failed, status code: {response.status_code}")
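The response body follows the OpenAI-style chat-completions shape that the code above indexes into. A small sketch that parses a canned payload of that shape (the sample content is made up for illustration, not real model output):

```python
import json

# A canned response body with the same structure the code above extracts from.
sample_body = json.dumps({
    "choices": [
        {"message": {"role": "assistant", "content": "AI has developed in several waves..."}}
    ]
})

result = json.loads(sample_body)
answer = result["choices"][0]["message"]["content"]
print(answer)
```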
Calling a Locally Deployed Model (Ollama)
Start a model with ollama run <model_name>, for example ollama run qwen2.5 to launch Qwen 2.5, and confirm the local API is reachable by opening http://127.0.0.1:11434/ in a browser.
import requests
import json

def translate_text(content):
    url = "http://localhost:11434/api/chat"
    data = {
        "model": "qwen2.5",
        "messages": [
            {
                "role": "user",
                "content": f"Please translate the following Chinese into English: {content}"
            },
        ],
        "stream": True
    }
    response = requests.post(url, json=data, stream=True)
    print(response.status_code)
    if response.status_code == 200:
        full_response = ''
        # Ollama streams one JSON object per line (NDJSON)
        for line in response.iter_lines():
            if line:
                try:
                    json_object = json.loads(line)
                    if 'message' in json_object and 'content' in json_object['message']:
                        chunk = json_object['message']['content']
                        full_response += chunk
                        print(chunk, end='', flush=True)
                except json.decoder.JSONDecodeError:
                    pass
        print()
        return full_response
    else:
        print(f"error: {response.status_code}")
        return response.text

if __name__ == "__main__":
    user_input = 'Hello, translate this for me to test the translation feature'
    result = translate_text(user_input)
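The streaming branch above works because Ollama emits one JSON object per line (NDJSON). Pulling that parsing logic into a standalone helper makes it easy to exercise against canned lines (the lines below imitate Ollama's shape; they are not real server output):

```python
import json

def parse_ollama_stream(lines):
    # Concatenate the message.content field of every well-formed JSON line,
    # skipping blanks and lines (like the final "done" object) without content.
    full = ''
    for line in lines:
        if not line:
            continue
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue
        message = obj.get('message', {})
        if 'content' in message:
            full += message['content']
    return full

canned = [
    b'{"message": {"role": "assistant", "content": "Hello"}}',
    b'',  # blank keep-alive lines are skipped
    b'{"message": {"role": "assistant", "content": ", world"}}',
    b'{"done": true}',
]
print(parse_ollama_stream(canned))
```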
Author: ♢.*