OpenAI
dadosfera.services.openai.generate_response
generate_response(prompt, model='gpt-3.5-turbo', temperature=0)
Generates a response using OpenAI's ChatGPT models based on a given prompt.
PARAMETER | DESCRIPTION |
---|---|
`prompt` | The input text prompt to generate a response from. TYPE: `str` |
`model` | The OpenAI model to use. Defaults to `"gpt-3.5-turbo"`. TYPE: `str` |
`temperature` | Controls randomness in the response: 0 is most deterministic, 1 is most creative. Defaults to `0`. TYPE: `float` |
RETURNS | DESCRIPTION |
---|---|
`str` | The generated response text from the model |
Example
```python
response = generate_response("Summarize this article:", model="gpt-4", temperature=0.7)
print(response)
```
Note
- Requires the 'openai' library and valid API credentials
- Uses ChatCompletion API which is optimized for dialogue
- Lower temperature (0-0.3) is better for factual/analytical tasks
- Higher temperature (0.7-1.0) is better for creative tasks
- Does not handle API errors; the caller should implement error handling
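Since `generate_response` does not catch API errors itself, one caller-side option is a small retry wrapper with exponential backoff. A minimal sketch, assuming transient failures surface as an exception (the `TransientAPIError` class and `fake_generate` stub below are hypothetical stand-ins for the real OpenAI exception types and API call):

```python
import time


class TransientAPIError(Exception):
    """Stand-in for a transient OpenAI error (e.g. a rate limit)."""


def generate_with_retry(generate, prompt, retries=3, backoff=1.0, **kwargs):
    """Call a generate_response-style function, retrying transient errors."""
    for attempt in range(retries):
        try:
            return generate(prompt, **kwargs)
        except TransientAPIError:
            if attempt == retries - 1:
                raise  # out of retries; let the caller see the error
            time.sleep(backoff * 2 ** attempt)  # exponential backoff


# Stub that fails once, then succeeds -- replaces the real API call for demo.
calls = {"n": 0}


def fake_generate(prompt, **kwargs):
    calls["n"] += 1
    if calls["n"] == 1:
        raise TransientAPIError("rate limited")
    return f"response to: {prompt}"


print(generate_with_retry(fake_generate, "Summarize this article:", backoff=0))
```

In production the `except` clause would name the concrete exceptions raised by the installed `openai` version, which differ across major releases.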
Source code in dadosfera/services/openai.py