Language models generate text responses, decide when to call tools, and process conversation context. They handle all text generation and reasoning for your agent. Models vary in speed (response latency), capability (reasoning quality and accuracy), context length (how much conversation history they can process), and features (vision support, function calling, structured output). Hypertic supports 10+ language model providers, allowing you to switch between OpenAI, Anthropic, Google, xAI, and others seamlessly.

Supported Providers

Hypertic supports providers including OpenAI, Anthropic, Google, and xAI; see each provider's page for setup instructions and usage examples.

Basic Usage

Here is a basic example using the OpenAI provider:
from hypertic.agents import Agent
from hypertic.models import OpenAI


def get_weather(city: str) -> str:
    """Return a short weather summary for the given city."""
    return f"It is sunny in {city}."


model = OpenAI(model="gpt-5.2")
agent = Agent(
    model=model,
    tools=[get_weather],
)

response = agent.run("What's the weather in SF?")
print(response)
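
Because the agent depends only on the model object, switching providers is a one-line change. A minimal sketch of the idea, assuming Hypertic exposes an Anthropic model class alongside OpenAI (the class name and import path below mirror the OpenAI example and are assumptions, not confirmed by this page):

```python
from hypertic.agents import Agent
from hypertic.models import Anthropic  # assumed import path, mirroring OpenAI above


def get_weather(city: str) -> str:
    """Return a short weather summary for the given city."""
    return f"It is sunny in {city}."


# Same agent definition as before, pointed at a different provider.
model = Anthropic(model="claude-sonnet-4-5-20250929")
agent = Agent(
    model=model,
    tools=[get_weather],
)

response = agent.run("What's the weather in SF?")
```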

Parameters

Configure your model with these common parameters:

model (string, required)
    The model ID used to generate responses, like "gpt-4" or "claude-sonnet-4-5-20250929".

temperature (number)
    Sampling temperature. Higher values make output more random; lower values make it more deterministic.

max_tokens (number)
    Upper bound on the number of tokens that can be generated for a response, including visible output tokens.

api_key (string)
    Your API key for authenticating with the provider. Usually set via environment variables.
Example configuration:
from hypertic.models import OpenAI

model = OpenAI(
    model="gpt-4",
    temperature=0.7,
    max_tokens=1000,
)
Each provider may have additional parameters. Check your provider’s documentation for details.
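
As the parameter list notes, API keys are usually supplied via environment variables rather than hard-coded. A minimal sketch, assuming the OpenAI class accepts an explicit api_key argument as described above (the OPENAI_API_KEY variable name is a common convention, not something this page specifies):

```python
import os

from hypertic.models import OpenAI

# Read the key from the environment rather than hard-coding it in source.
model = OpenAI(
    model="gpt-4",
    api_key=os.environ["OPENAI_API_KEY"],  # assumes the variable is set
)
```

Keeping keys out of source code also makes it easier to rotate credentials and to use different keys per environment.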