Openai

dadosfera.services.openai.generate_response

generate_response(prompt, model='gpt-3.5-turbo', temperature=0)

Generates a response using OpenAI's ChatGPT models based on a given prompt.

PARAMETER DESCRIPTION
prompt

The input text prompt to generate a response from

TYPE: str

model

The OpenAI model to use. Defaults to "gpt-3.5-turbo"

TYPE: str DEFAULT: 'gpt-3.5-turbo'

temperature

Controls randomness in the response. 0 is most deterministic, 1 is most creative. Defaults to 0

TYPE: float DEFAULT: 0

RETURNS DESCRIPTION
str

The generated response text from the model

TYPE: str

Example

response = generate_response("Summarize this article:", model="gpt-4", temperature=0.7)
print(response)

Note
  • Requires the 'openai' library and valid API credentials
  • Uses ChatCompletion API which is optimized for dialogue
  • Lower temperature (0-0.3) is better for factual/analytical tasks
  • Higher temperature (0.7-1.0) is better for creative tasks
  • Does not handle API errors, caller should implement error handling
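Since the function deliberately leaves error handling to the caller, a minimal retry wrapper can be sketched as below. This is illustrative only: `safe_generate`, its retry policy, and the injectable `generate` callable are assumptions, not part of `dadosfera.services.openai`; in real use you would catch `openai.error.OpenAIError` (legacy `openai<1.0` SDK, matching the `ChatCompletion` call in the source) rather than bare `Exception`.

```python
import time


def safe_generate(prompt, generate, retries=3, backoff=1.0):
    """Call a generate_response-style function, retrying on failure.

    `generate` is any callable taking a prompt and returning a string,
    e.g. a partial of generate_response with model/temperature fixed.
    Retries with exponential backoff; re-raises the last error if all
    attempts fail.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return generate(prompt)
        except Exception as exc:  # narrow to openai.error.OpenAIError in real use
            last_error = exc
            time.sleep(backoff * (2 ** attempt))
    raise last_error
```

Injecting the callable keeps the wrapper testable without network access; production code would pass `lambda p: generate_response(p, model="gpt-4")` or similar.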
Source code in dadosfera/services/openai.py
def generate_response(prompt: str, model: str = "gpt-3.5-turbo", temperature: float = 0) -> str:
    """
    Generates a response using OpenAI's ChatGPT models based on a given prompt.

    Args:
        prompt (str): The input text prompt to generate a response from
        model (str, optional): The OpenAI model to use. Defaults to "gpt-3.5-turbo"
        temperature (float, optional): Controls randomness in the response. 
            0 is most deterministic, 1 is most creative. Defaults to 0

    Returns:
        str: The generated response text from the model

    Example:
        response = generate_response("Summarize this article:", model="gpt-4", temperature=0.7)
        print(response)

    Note:
        - Requires the 'openai' library and valid API credentials
        - Uses ChatCompletion API which is optimized for dialogue
        - Lower temperature (0-0.3) is better for factual/analytical tasks
        - Higher temperature (0.7-1.0) is better for creative tasks
        - Does not handle API errors, caller should implement error handling
    """
    response = openai.ChatCompletion.create(
        model=model,
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
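The function sends a single-turn `messages` list with one user entry. How that payload could be assembled, optionally with a system message, is sketched below; `build_messages` is a hypothetical helper for illustration, not part of this module.

```python
def build_messages(prompt, system=None):
    """Build a ChatCompletion-style messages list.

    Mirrors the payload generate_response sends: a single user message,
    optionally preceded by a system message that sets model behavior.
    """
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return messages
```

With `system=None` this produces exactly the list the source code above passes to `openai.ChatCompletion.create`.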