GenAI

Preview

This feature is currently available as part of a public preview program. Please note that the API may change during this phase.

class GenAIMixin

This class defines all functions related to GenAI.

get_chat_completion(messages, instructions=None, model_id=None, config=None, tools=None, tool_choice=None, use_instructions_cache=False)

Gets the chat completion for the given messages.

Parameters:
  • messages (Sequence[Message]) – The chat messages.

  • instructions (str, optional) – The system prompt to provide instructions or context to the model.

  • model_id (ChatModel, optional) – The chat model. If omitted, the default model (ChatModel.ANTHROPIC_CLAUDE_SONNET) is used.

  • config (ChatInferenceConfig, optional) – The config with the inference parameters.

  • tools (Sequence[Tool], optional) – The tools that the model can use.

  • tool_choice (ToolChoice, optional) – Specifies how the model should choose a tool.

  • use_instructions_cache (bool, optional) – Whether to use caching for the instruction prompt.

Returns:

The chat completion.

Return type:

ChatCompletion

Examples:
Basic usage:
>>> import blueconic
>>> from blueconic.domain.genai import Message
>>> bc = blueconic.Client()
>>> response = bc.get_chat_completion([
...   Message(Message.Role.USER, "What is Lorem Ipsum?")
... ])
>>> print(response.content[0].text)
Lorem Ipsum is essentially dummy text used as a placeholder in design and publishing. It allows designers to focus on the layout without being distracted by the content.
With a document:
>>> import requests
>>> import blueconic
>>> from blueconic.domain.genai import Message, TextBlock, DocumentBlock
>>> bc = blueconic.Client()
>>> doc = requests.get("https://example.com/lorem_ipsum.pdf")
>>> response = bc.get_chat_completion([
...   Message(Message.Role.USER, [
...     TextBlock("Summarize this document"),
...     DocumentBlock(doc.content, DocumentBlock.Format.PDF, "Lorem Ipsum")
...   ])
... ])
>>> print(response.content[0].text)
The document contains Lorem Ipsum dummy text that is commonly used in the publishing industry as a placeholder text.
With an image:
>>> import requests
>>> import blueconic
>>> from blueconic.domain.genai import Message, TextBlock, ImageBlock
>>> bc = blueconic.Client()
>>> img = requests.get("https://example.com/earth.png")
>>> response = bc.get_chat_completion([
...   Message(Message.Role.USER, [
...     TextBlock("Describe this image"),
...     ImageBlock(img.content, ImageBlock.Format.PNG)
...   ])
... ])
>>> print(response.content[0].text)
The image shows a view of the Earth from space.
With a tool:
>>> import blueconic
>>> from blueconic.domain.genai import Message, Tool, ChatCompletion
>>> bc = blueconic.Client()
>>> # Implement a tool
>>> def calculator_add(a, b):
...   return int(a) + int(b)
>>> # Define the tool schema
>>> calculator_add_json_schema = {
...   "type": "object",
...   "properties": {
...     "a": {"type": "number"},
...     "b": {"type": "number"}
...   },
...   "required": ["a", "b"]
... }
>>> # Create the tool specification
>>> calculator_add_tool = Tool(
...   name="calculator_add",
...   description="Calculates the addition of two numbers",
...   parameters=calculator_add_json_schema
... )
>>> # Send the chat completion request with the tool
>>> response = bc.get_chat_completion([
...   Message(Message.Role.USER, "What is 2 + 2?")
... ], tools=[calculator_add_tool])
>>> # Extract the tool use request and execute the tool
>>> if response.stop_reason == ChatCompletion.StopReason.TOOL_USE:
...   tool_use_request = response.content[-1]
...   print(tool_use_request.name)
...   print(tool_use_request.tool_input)
...   if tool_use_request.name == "calculator_add":
...     result = calculator_add(**tool_use_request.tool_input)
...     print(result)
calculator_add
{'a': 2, 'b': 2}
4
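Continuing the tool loop:
The tool example stops after executing the tool locally. To let the model turn the tool result into a final answer, the result can be sent back in a follow-up request. The sketch below is illustrative, not a verbatim API recipe: it assumes the `bc`, `calculator_add`, `calculator_add_tool`, `response`, and `tool_use_request` variables from the tool example above, and uses only classes documented in this section (ToolResultBlock, Message, ChatCompletion.get_message()).

```python
# Illustrative sketch: return the tool result to the model so it can
# compose a final answer. Assumes bc, calculator_add, calculator_add_tool,
# response, and tool_use_request from the tool example above.
from blueconic.domain.genai import Message, ToolResultBlock

result = calculator_add(**tool_use_request.tool_input)
follow_up = bc.get_chat_completion([
    # The original user question
    Message(Message.Role.USER, "What is 2 + 2?"),
    # The assistant turn that contains the tool use request
    response.get_message(),
    # The tool result, linked to the request by its tool_use_id
    Message(Message.Role.USER, [
        ToolResultBlock(tool_use_request.tool_use_id, str(result))
    ]),
], tools=[calculator_add_tool])
print(follow_up.content[0].text)
```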
get_text_embeddings(text, model_id=None, config=None)

Gets the embeddings for the given texts. The embeddings are returned in the same order as the original texts.

Parameters:
  • text (Sequence[str]) – The texts to embed.

  • model_id (EmbeddingModel, optional) – The embedding model. If omitted, the default model (EmbeddingModel.AMAZON_TITAN_TEXT) is used.

  • config (dict[str, str], optional) – The config with the inference parameters.

Returns:

The text embeddings.

Return type:

Sequence[Embedding]
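Example:
A common use of text embeddings is comparing two texts with cosine similarity. The `cosine_similarity` helper below is plain Python; the BlueConic call mirrors the signature documented above and is shown commented out because it needs a configured client (the input texts are illustrative).

```python
from math import sqrt

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors:
    # dot(a, b) / (|a| * |b|), in the range [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Usage sketch (requires a configured BlueConic client):
# import blueconic
# bc = blueconic.Client()
# embeddings = bc.get_text_embeddings(["What is Lorem Ipsum?",
#                                      "Lorem Ipsum is placeholder text."])
# # Embeddings come back in the same order as the input texts
# print(cosine_similarity(embeddings[0].embedding, embeddings[1].embedding))
```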

Chat

class ChatCompletion(content, stop_reason)

Bases: object

class StopReason(value)

Bases: str, Enum

The reasons why the chat completion stopped.

  • END_TURN: The chat completion ended because it reached the end of a turn and successfully generated a response.

  • MAX_TOKENS: The chat completion ended because it reached the maximum number of tokens it may generate.

  • STOP_SEQUENCE: The chat completion ended because it encountered a stop sequence specified in the request.

  • TOOL_USE: The chat completion ended because the model requested the use of a tool.

END_TURN = 'END_TURN'
MAX_TOKENS = 'MAX_TOKENS'
STOP_SEQUENCE = 'STOP_SEQUENCE'
TOOL_USE = 'TOOL_USE'
get_message()

Gets a Message from the chat completion. The role is always Message.Role.ASSISTANT.

Returns:

The chat message.

Return type:

Message

to_dict()

Converts the instance to a dictionary representation.

Returns:

A dictionary with the keys ‘content’ and ‘stopReason’.

Return type:

dict

property content
Returns:

A sequence of content blocks.

Return type:

Sequence[ContentBlock]

property stop_reason
Returns:

The reason for stopping the chat completion.

Return type:

ChatCompletion.StopReason

class ChatInferenceConfig(max_token_count=None, stop_sequences=None, temperature=None, top_p=None, extra_inference_params=None)

Bases: object

to_dict()

Converts the instance to a dictionary representation.

Returns:

A dictionary with the inference configuration parameters.

Return type:

dict

property extra_inference_params
Returns:

The additional model-specific inference parameters.

Return type:

dict[str, str]

property max_token_count
Returns:

The maximum number of tokens to be generated.

Return type:

int

property stop_sequences
Returns:

The stop sequences that determine when the output generation should terminate.

Return type:

Sequence[str]

property temperature
Returns:

The temperature of the model to control the randomness (‘creativity’) of the output.

Return type:

float

property top_p
Returns:

The top cumulative probability limit of token candidates to consider.

Return type:

float
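As a sketch (the parameter values below are illustrative, not recommendations), a ChatInferenceConfig can be constructed once and passed to get_chat_completion to bound output length and randomness:

```python
# Illustrative sketch: pass inference parameters to a chat completion.
# The parameter values are examples only.
import blueconic
from blueconic.domain.genai import ChatInferenceConfig, Message

bc = blueconic.Client()
config = ChatInferenceConfig(
    max_token_count=256,      # cap the length of the generated response
    stop_sequences=["\n\n"],  # stop generating at the first blank line
    temperature=0.2,          # low temperature for more deterministic output
    top_p=0.9,                # only consider tokens within 90% cumulative probability
)
response = bc.get_chat_completion(
    [Message(Message.Role.USER, "What is Lorem Ipsum?")],
    config=config,
)
```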

class ContentBlock(blockType)

Bases: object

class Type(value)

Bases: str, Enum

The content block types.

DOCUMENT = 'DOCUMENT'
IMAGE = 'IMAGE'
JSON = 'JSON'
TEXT = 'TEXT'
TOOL_RESULT = 'TOOL_RESULT'
TOOL_USE = 'TOOL_USE'
to_dict()

Converts the instance to a dictionary representation.

Returns:

A dictionary with the key ‘type’.

Return type:

dict

property type
Returns:

The content block type.

Return type:

ContentBlock.Type

class DocumentBlock(source, docFormat, name)

Bases: ContentBlock

class Format(value)

Bases: str, Enum

The supported document formats.

CSV = 'CSV'
DOC = 'DOC'
DOCX = 'DOCX'
HTML = 'HTML'
MD = 'MD'
PDF = 'PDF'
TXT = 'TXT'
XLS = 'XLS'
XLSX = 'XLSX'
to_dict()

Converts the instance to a dictionary representation.

Returns:

A dictionary with the keys ‘type’, ‘format’, ‘name’, and ‘source’.

Return type:

dict

property format
Returns:

The document format.

Return type:

DocumentBlock.Format

property name
Returns:

The document name.

Return type:

str

property source
Returns:

The document as a base64-encoded string.

Return type:

str

class ImageBlock(source, imgFormat)

Bases: ContentBlock

class Format(value)

Bases: str, Enum

The supported image formats.

GIF = 'GIF'
JPEG = 'JPEG'
PNG = 'PNG'
WEBP = 'WEBP'
to_dict()

Converts the instance to a dictionary representation.

Returns:

A dictionary with the keys ‘type’, ‘format’, and ‘source’.

Return type:

dict

property format
Returns:

The image format.

Return type:

ImageBlock.Format

property source
Returns:

The image as a base64-encoded string.

Return type:

str

class JsonBlock(json)

Bases: ContentBlock

to_dict()

Converts the instance to a dictionary representation.

Returns:

A dictionary that represents the JSON content.

Return type:

dict

property json
Returns:

The JSON content.

Return type:

Mapping

class Message(role, content, use_cache=False)

Bases: object

class Role(value)

Bases: str, Enum

The message roles in a chat.

ASSISTANT = 'ASSISTANT'
USER = 'USER'
to_dict()

Converts the instance to a dictionary representation.

Returns:

A dictionary with the keys ‘role’, ‘content’, and ‘use_cache’.

Return type:

dict

property content
Returns:

The message content.

Return type:

str | Sequence[ContentBlock]

property role
Returns:

The message role.

Return type:

Message.Role
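A multi-turn conversation is built by replaying earlier turns. The sketch below is illustrative: it reuses get_chat_completion and ChatCompletion.get_message() as documented in this section to append the assistant reply to the history before asking a follow-up question.

```python
# Illustrative sketch: a multi-turn conversation. Earlier turns are
# replayed so the model sees the full conversation history.
import blueconic
from blueconic.domain.genai import Message

bc = blueconic.Client()
history = [Message(Message.Role.USER, "What is Lorem Ipsum?")]
first = bc.get_chat_completion(history)
# Append the assistant reply (role is always ASSISTANT) and a follow-up
history.append(first.get_message())
history.append(Message(Message.Role.USER, "Where does it come from?"))
second = bc.get_chat_completion(history)
print(second.content[0].text)
```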

class TextBlock(text)

Bases: ContentBlock

to_dict()

Converts the instance to a dictionary representation.

Returns:

A dictionary with the keys ‘type’ and ‘text’.

Return type:

dict

property text
Returns:

The text content.

Return type:

str

class Tool(name, description, parameters)

Bases: object

to_dict()

Converts the instance to a dictionary representation.

Returns:

A dictionary with the keys ‘name’, ‘description’, and ‘parameters’.

Return type:

dict

property description
Returns:

The description of the tool.

Return type:

str

property name
Returns:

The name of the tool.

Return type:

str

property parameters
Returns:

The tool input described as a JSON schema object.

Return type:

dict

class ToolChoice(tool_choice_type, name=None)

Bases: object

class Type(value)

Bases: str, Enum

The tool choice type determines how the model should choose a tool.

  • AUTO: Lets the model decide whether to use a tool or not.

  • ANY: Makes the model choose at least one tool to use.

  • SPECIFIC: Makes the model use a specific tool.

ANY = 'ANY'
AUTO = 'AUTO'
SPECIFIC = 'SPECIFIC'
classmethod any()

Factory method to create a tool choice with type ToolChoice.Type.ANY.

Returns:

Tool choice with type ToolChoice.Type.ANY.

Return type:

ToolChoice

classmethod auto()

Factory method to create a tool choice with type ToolChoice.Type.AUTO.

Returns:

Tool choice with type ToolChoice.Type.AUTO.

Return type:

ToolChoice

classmethod specific(name)

Factory method to create a tool choice with type ToolChoice.Type.SPECIFIC.

Parameters:

name (str) – The name of the tool to use.

Returns:

Tool choice with type ToolChoice.Type.SPECIFIC.

Return type:

ToolChoice

to_dict()

Converts the instance to a dictionary representation.

Returns:

A dictionary with the keys ‘type’ and optionally ‘name’ if the type is ToolChoice.Type.SPECIFIC.

Return type:

dict

property name
Returns:

The name of the tool to use.

Return type:

str

property type
Returns:

The tool choice type.

Return type:

ToolChoice.Type
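As a sketch, the factory methods combine with the tool_choice parameter of get_chat_completion to control tool selection. This example assumes a client `bc` and a Tool definition like `calculator_add_tool` from the get_chat_completion tool example above.

```python
# Illustrative sketch: force the model to use one specific tool.
# Assumes bc and calculator_add_tool from the tool example above.
from blueconic.domain.genai import Message, ToolChoice

response = bc.get_chat_completion(
    [Message(Message.Role.USER, "What is 2 + 2?")],
    tools=[calculator_add_tool],
    tool_choice=ToolChoice.specific("calculator_add"),
)
# With a SPECIFIC tool choice, the completion should stop with a
# tool use request for the named tool
print(response.stop_reason)
```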

class ToolResultBlock(tool_use_id, result)

Bases: ContentBlock

to_dict()

Converts the instance to a dictionary representation.

Returns:

A dictionary with the keys ‘type’, ‘tool_use_id’, and ‘result’.

Return type:

dict

property result
Returns:

The results of the tool use.

Return type:

str | Sequence[TextBlock | DocumentBlock | ImageBlock | JsonBlock]

property tool_use_id
Returns:

The tool use request ID related to the tool result.

Return type:

str

class ToolUseBlock(tool_use_id, name, tool_input)

Bases: ContentBlock

to_dict()

Converts the instance to a dictionary representation.

Returns:

A dictionary with the keys ‘type’, ‘toolUseId’, ‘name’, and ‘input’.

Return type:

dict

property name
Returns:

The tool name.

Return type:

str

property tool_input
Returns:

The tool input.

Return type:

dict

property tool_use_id
Returns:

The tool use request ID.

Return type:

str

Models

class ChatModel(value)

The supported models for chat requests.

static get_default()

Returns the default chat model.

Returns:

The default chat model.

Return type:

ChatModel

AMAZON_NOVA_PRO = 'AMAZON_NOVA_PRO'
ANTHROPIC_CLAUDE_SONNET = 'ANTHROPIC_CLAUDE_SONNET'
class EmbeddingModel(value)

The supported models for embedding requests.

AMAZON_TITAN_TEXT = 'AMAZON_TITAN_TEXT'

Embeddings

class Embedding(embedding)
to_dict()

Converts the instance to a dictionary representation.

Returns:

A dictionary with the key ‘embedding’.

Return type:

dict

property embedding
Returns:

The embedding.

Return type:

Sequence[float]