Applicable to the Express and Enterprise Editions only.
Prophecy’s AI capabilities, including Copilot and Agents, are powered by external LLMs.
  • The SaaS deployment uses a Prophecy-managed OpenAI subscription with GPT-4.1 and GPT-4.1 mini.
  • Dedicated SaaS deployments connect to customer-managed endpoints.
In customer-managed deployments, you configure providers and models in Copilot Settings. Prophecy uses two AI configuration patterns:
  • Agents use only the transform_agent configuration in AI Model Providers Creds, which determines how Agents access Claude.
  • Copilot uses provider credentials and model selection to power features like expression generation.
These configurations serve different features and are configured independently, but they live in the same Copilot Settings page.
  1. Log in to Prophecy as an Administrator.
  2. Go to Settings > Admin > Copilot Settings.
Copilot Settings includes three subtabs: AI Model Providers Creds, Available AI Models, and Available AI Speech Models. All settings must be provided in YAML format and override values in your Kubernetes deployment.

How Copilot and Agents use these settings

  • Agents use the transform_agent configuration in AI Model Providers Creds.
  • Copilot uses the provider credentials in AI Model Providers Creds and the models defined in Available AI Models.
These configurations are independent and can be used together.

Configure AI Model Providers Creds (Agent and Copilot)

In the AI Model Providers Creds subtab, you provide the credentials to connect to your LLM provider. You can configure both Copilot providers and an Agent provider in the same credentials block. (It is common to use both.) The following YAML example shows the required fields for different LLM providers.
{
  "transform_agent": { "anthropic_key": "********" },
  "azure_openai": { "api_key": "********", "api_endpoint": "********" },
  "openai": { "api_key": "********" },
  "gemini": { "api_key": "********" },
  "vertex_ai": {}
}
The transform_agent block is used only by Agents to configure access to Claude; Copilot does not use this configuration. For details on configuring endpoints for Agents, see Configure Claude providers. The other providers in this section are used only by Copilot when selecting models. You can add multiple credentials here, but Copilot will only connect to the models defined in the Available AI Models subtab. Agents use the transform_agent configuration instead.
When editing credentials in Prophecy, you cannot see previously entered values. Likewise, once you save your credentials, the values are masked with asterisks (****) as shown in the example above.
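Before pasting credentials into the subtab, it can help to sanity-check the block locally. The following Python sketch is a hypothetical helper (not part of Prophecy) that checks a credentials mapping for the required fields shown in the example above:

```python
# Required fields per provider, matching the example credentials block.
REQUIRED = {
    "transform_agent": {"anthropic_key"},
    "azure_openai": {"api_key", "api_endpoint"},
    "openai": {"api_key"},
    "gemini": {"api_key"},
    "vertex_ai": set(),  # configured by the Support team, not via an API key
}

def missing_fields(creds: dict) -> dict:
    """Return the missing required fields for each configured provider."""
    problems = {}
    for provider, block in creds.items():
        required = REQUIRED.get(provider)
        if required is None:
            problems[provider] = {"unknown provider"}
            continue
        gaps = required - block.keys()
        if gaps:
            problems[provider] = gaps
    return problems
```

For example, a block that defines azure_openai with only an api_key would be flagged as missing api_endpoint.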

Set up Vertex AI

Vertex AI does not require an API key. Instead, you provide a Google Cloud service account file. Reach out to the Support team with the following information to add Vertex AI credentials to your deployment:
  • The service account file that can authenticate your connection to Vertex AI.
  • (Optional) The Vertex AI region, if it differs from the service account region.
  • (Optional) The Vertex AI project, if it differs from the service account project.

Available AI Models for Copilot

This configuration applies only to Copilot features. Agents do not use these models. In the Available AI Models subtab, you define two models:
  • Smart model: Prophecy uses this model for complex tasks. Recommended model: gpt-4.1
  • Fast model: Prophecy uses this model for quick, simple tasks. Recommended model: gpt-4.1-mini
You can select models from any of the providers you configured in the AI Model Providers Creds subtab. The following YAML example shows how to format your smart and fast model configuration.
{
  "smart_model": { "provider": "openai", "model_name": "gpt-4.1" },
  "fast_model": { "provider": "openai", "model_name": "gpt-4.1-mini" }
}
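Because Copilot only connects to models whose provider is configured in the AI Model Providers Creds subtab, it is worth cross-checking the two blocks before saving. A minimal sketch of that check, using a hypothetical helper and the field names from the example above:

```python
def check_model_selection(models: dict, configured_providers: set) -> list:
    """Verify that smart_model and fast_model reference configured providers."""
    errors = []
    for role in ("smart_model", "fast_model"):
        entry = models.get(role)
        if entry is None:
            errors.append(f"{role} is not defined")
        elif entry.get("provider") not in configured_providers:
            errors.append(f"{role} uses unconfigured provider: {entry.get('provider')!r}")
    return errors
```

For instance, selecting a gemini model while only openai credentials exist would be reported as an unconfigured provider.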

Available AI Speech Models

This configuration applies only to Copilot speech features. In the Available AI Speech Models subtab, you define different models for speech-to-text and text-to-speech operations. You can select models from any of the providers you configured in the AI Model Providers Creds subtab. The following YAML example shows how to format your speech-to-text (stt) and text-to-speech (tts) model configuration.
{
  "stt": { "provider": "openai", "model_name": "whisper-1" },
  "tts": { "provider": "openai", "model_name": "tts-1" }
}

Prerequisites for configuring Copilot providers

These prerequisites apply only to Copilot features. To add your LLM provider and model details to Copilot settings:
  • You must be logged in as a Prophecy cluster admin.
  • Copilot must already be enabled in your Prophecy deployment. Reach out to the Support team to enable Copilot.