Two Ways to Call Google's Gemini Large-Model API from Python (OpenAI-Compatible)
Official documentation:
(Method 1) OpenAI compatibility: https://ai.google.dev/gemini-api/docs/openai?hl=zh-cn
(Method 2) Gemini API: https://ai.google.dev/gemini-api/docs/get-started/tutorial?hl=zh-cn&lang=python
Table of Contents
I. Get an API Key
II. Write the Calling Code
Method 1: OpenAI compatibility
Method 2: Gemini API
I. Get an API Key
https://aistudio.google.com/app/apikey
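Generate a key on that page. To keep the key out of your source code, one common pattern is to export it as an environment variable and read it at runtime. Below is a minimal sketch; the variable name GEMINI_API_KEY is only a convention used here, not something the page mandates.

import os

# Assumes the key was exported beforehand, e.g. in your shell:
#   export GEMINI_API_KEY="your-key-here"
api_key = os.environ.get("GEMINI_API_KEY")
if api_key is None:
    raise RuntimeError("GEMINI_API_KEY is not set")
print("API key loaded.")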
II. Write the Calling Code
Method 1: OpenAI compatibility
from openai import OpenAI

# Replace "GEMINI_API_KEY" with the key you generated in step I.
client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

response = client.chat.completions.create(
    model="gemini-2.0-flash",
    n=1,  # return a single candidate response
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain to me how AI works"}
    ]
)

print(response.choices[0].message)
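The OpenAI SDK's streaming mode should also work against this compatibility endpoint; the sketch below is a hedged example of that pattern, using the same client setup and model as above (stream=True is the standard OpenAI SDK flag, assumed here to behave the same way against the Gemini endpoint).

from openai import OpenAI

# Same client setup as above; replace "GEMINI_API_KEY" with your real key.
client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

# stream=True makes the OpenAI SDK yield partial chunks instead of waiting
# for the full response.
stream = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[{"role": "user", "content": "Explain to me how AI works"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()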
Method 2: Gemini API
1. Environment setup
pip install -q -U google-generativeai
As always, if you are working on a remote server that cannot reach the public internet, the download may fail; using a domestic (mainland China) mirror solves this:
pip install -U google-generativeai -i https://pypi.tuna.tsinghua.edu.cn/simple
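To confirm the install succeeded before writing any calling code, a quick import check like the following can be run (the __version__ attribute is assumed to be present in recent releases of the package):

import google.generativeai as genai

# If this prints a version string, the package is installed and importable.
print(genai.__version__)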
2. Call the API
import google.generativeai as genai

# `userdata` is specific to Google Colab notebooks; outside Colab, read the
# key from an environment variable instead, e.g. os.environ["GOOGLE_API_KEY"].
from google.colab import userdata

GOOGLE_API_KEY = userdata.get('GOOGLE_API_KEY')
genai.configure(api_key=GOOGLE_API_KEY)

model = genai.GenerativeModel('gemini-2.0-flash')
response = model.generate_content("What is the meaning of life?")
print(response.text)
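For multi-turn conversations, the same library provides a chat session helper on the model object; the sketch below is a minimal example of that pattern (the prompts are only illustrative, and the key handling mirrors the snippet above).

import google.generativeai as genai

genai.configure(api_key="GOOGLE_API_KEY")  # replace with your real key
model = genai.GenerativeModel('gemini-2.0-flash')

# start_chat keeps the conversation history between send_message calls.
chat = model.start_chat(history=[])
reply = chat.send_message("Give me a one-sentence summary of what an LLM is.")
print(reply.text)
follow_up = chat.send_message("Now explain it to a five-year-old.")
print(follow_up.text)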
For the list of available models, see: https://ai.google.dev/gemini-api/docs/models?hl=zh-cn#gemini-2.0-flash-lite
Note: if calls keep failing from a server in mainland China (I don't know whether anyone else has run into the same situation), switching to Google Colab to run the code works.
Author: kkkkkkkkkasey