
Create a grounded response

client.responses.create(body: ResponseCreateParams, options?: RequestOptions): ResponseCreateResponse
POST /v1/responses

Runs the grounded search pipeline (Plan → Collect → Process → Analyze) and returns a cited answer in the OpenAI Responses API shape.

Non-streaming (default). Omit stream (or pass stream: false) to receive a single JSON body equal to the response field of the terminal response.completed event. This matches OpenAI's spec.

Streaming. Pass stream: true to receive a Server-Sent Events stream of OpenAI Responses events: response.created → response.in_progress → response.output_item.added → response.output_text.delta* → response.output_item.done → response.completed. The official OpenAI SDKs consume this stream natively.
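As a sketch, the answer text can be accumulated from the delta events like this. The event shape below is assumed from the event names above; the real SDK emits typed stream events, so adapt field names as needed:

```typescript
// Minimal text accumulator for a Responses SSE stream.
// Assumed shape: each response.output_text.delta event carries a
// `delta` string chunk of the answer text.
type StreamEvent = { type: string; delta?: string };

function collectText(events: Iterable<StreamEvent>): string {
  let text = '';
  for (const ev of events) {
    if (ev.type === 'response.output_text.delta' && ev.delta) {
      text += ev.delta;
    }
  }
  return text;
}
```

With the SDK this becomes a `for await (const event of stream)` loop over the result of `client.responses.create({ ..., stream: true })`, applying the same `type` check to each event.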

Drop-in mode. Pass vendor_events: false (recommended for SDK consumers) to suppress non-OpenAI events: response.pipeline_stage, response.source_texts, and the web_search_call output item. Leave it true if you want to surface pipeline progress (planning, collection, processing) in your UI.

Citations. Every output_text content part carries a url_citation annotation for each inline [N] marker — see the UrlCitationAnnotation schema and the API overview above for code that expands them into footnotes or tooltips.
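A minimal footnote expander might look like the following. The annotation fields (url, title) follow OpenAI's url_citation annotation shape and are an assumption here; check them against the UrlCitationAnnotation schema:

```typescript
// Expand url_citation annotations on an output_text part into a
// footnote list appended to the answer text. Assumes the Nth
// annotation corresponds to the inline [N] marker (1-based).
interface UrlCitation {
  type: 'url_citation';
  url: string;
  title?: string;
}

function withFootnotes(text: string, annotations: UrlCitation[]): string {
  const notes = annotations.map(
    (a, i) => `[${i + 1}] ${a.title ?? a.url}: ${a.url}`,
  );
  return notes.length ? `${text}\n\n${notes.join('\n')}` : text;
}
```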

Follow-up turns. Pass request_id from a prior response to continue the same conversation; input is appended to the existing history.
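A follow-up turn could be built like this (a sketch: where the first turn's request ID comes from depends on your integration, so `priorRequestId` below is an assumption):

```typescript
// Build the body for a follow-up turn. Passing request_id reuses the
// existing request, and `input` is appended to its history.
function buildFollowUp(priorRequestId: string, input: string) {
  return {
    input,
    model: 'grounded-sonnet',
    request_id: priorRequestId, // continue the prior conversation
  };
}

// const second = await client.responses.create(
//   buildFollowUp(firstRequestId, 'How does that compare to last quarter?'),
// );
```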

Parameters
body: ResponseCreateParams { input, effort, model, 7 more }
input: string | Array<UnionMember1>

User message string or array of input items.

One of the following:
string
Array<UnionMember1>
content?: string | unknown | null
One of the following:
string
unknown
role?: string
snippets?: Array<Snippet>
One of the following:
UnionMember0 { source_index, text, type, 3 more }
source_index: number
text: string

The relevant passage extracted from the source.

type: "text"
verified: boolean

Whether the snippet text was verified as a substring of the source document.

num?: number

Sequential number assigned to verified snippets (1-based). Unverified snippets have no num.

source_offset?: number | null

Character offset of the snippet within the source document. Used for ordering snippets by position.

UnionMember1 { end_index, num, source_index, 2 more }
end_index: number

End character offset into source.full_text (exclusive).

num: number

Sequential number assigned to this snippet (1-based). Always present for index snippets.

source_index: number
start_index: number

Start character offset into source.full_text.

type: "index"
UnionMember2 { caption, context, image_url, 9 more }
caption: string

The image caption or alt text copied from the source.

context: string

Nearby explanatory context copied verbatim from the source.

image_url: string

The image URL extracted from the source.

source_index: number
type: "image"
verified: boolean

Whether both the caption and context were verified against the source document.

caption_end_index?: number | null

End character offset of the verified caption in source.full_text (exclusive).

caption_start_index?: number | null

Start character offset of the verified caption in source.full_text.

content_length?: number

Image file size in bytes, if the reachability probe returned a Content-Length (or Content-Range total).

minimum: 0
context_end_index?: number | null

End character offset of the verified context in source.full_text (exclusive).

context_start_index?: number | null

Start character offset of the verified context in source.full_text.

num?: number

Sequential number assigned to verified snippets (1-based). Unverified snippets have no num.

sources?: Array<CollectedSourceSummary>
One of the following:
UnionMember0 { author, favicon, image, 14 more }
author: string | null
favicon: string | null
image: string | null
published_date: string | null
query_index: number
query_indices: Array<number>
result_index: number
score: number | null
source_type: "web"

Source came from web search.

title: string
url: string
domain_credibility?: DomainCredibility | null

Domain credibility assessment (1-10 score with label). Separate from LLM-assigned quality score.

label: string
score: number
minimum: 1
maximum: 10
explanation?: string | null

What this source is, why it is relevant, and why it got its quality score.

meta?: Array<UnionMember0 { kind, model, api_ms, 13 more } | UnionMember1 { kind, model, api_ms, 12 more } >

Per-source extraction attempts for admin debug view. One entry per LLM call.

One of the following:
UnionMember0 { kind, model, api_ms, 13 more }
kind: "text"
model: string
api_ms?: number
cache_read_input_tokens?: number
error?: string
explanation?: string
failed_verification_count?: number
failed_verification_samples?: Array<FailedVerificationSample>
reason: string
text: string
score?: number
input_tokens?: number
n_snippets?: number
nonempty_snippet_count?: number
output_tokens?: number
quality?: number
raw_snippet_count?: number
skipped_for_limit_count?: number
verified_snippet_count?: number
UnionMember1 { kind, model, api_ms, 12 more }
kind: "image_context"
model: string
api_ms?: number
cache_read_input_tokens?: number
error?: string
failed_verification_count?: number
failed_verification_samples?: Array<FailedVerificationSample>
reason: string
text: string
score?: number
fetch_ms?: number
fetched_content_type?: string
input_tokens?: number
markitdown_ms?: number
n_snippets?: number
output_tokens?: number
raw_image_context_count?: number
verified_image_context_count?: number
model?: string | null

Model that produced the final snippets for this source.

providers?: Array<string>

Provider names that returned this URL (admin-only).

quality?: number | null

Source quality score 0-10 (10 = best). 10 = current primary source (e.g. SEC filing, enacted statute, binding court opinion). 5-6 = secondary/pending (e.g. pending bill, law firm alert). 0 = irrelevant or unreliable (no snippets extracted).

minimum: 0
maximum: 10
UnionMember1 { artifact_id, author, content_type, 17 more }
artifact_id: string

Artifact ID for uploaded file sources.

author: string | null
content_type: string

Content type for uploaded file sources.

favicon: string | null
filename: string

Original filename for uploaded file sources.

image: string | null
published_date: string | null
query_index: number
query_indices: Array<number>
result_index: number
score: number | null
source_type: "file"

Source came from an uploaded file.

title: string
url: string
domain_credibility?: DomainCredibility | null

Domain credibility assessment (1-10 score with label). Separate from LLM-assigned quality score.

label: string
score: number
minimum: 1
maximum: 10
explanation?: string | null

What this source is, why it is relevant, and why it got its quality score.

meta?: Array<UnionMember0 { kind, model, api_ms, 13 more } | UnionMember1 { kind, model, api_ms, 12 more } >

Per-source extraction attempts for admin debug view. One entry per LLM call.

One of the following:
UnionMember0 { kind, model, api_ms, 13 more }
kind: "text"
model: string
api_ms?: number
cache_read_input_tokens?: number
error?: string
explanation?: string
failed_verification_count?: number
failed_verification_samples?: Array<FailedVerificationSample>
reason: string
text: string
score?: number
input_tokens?: number
n_snippets?: number
nonempty_snippet_count?: number
output_tokens?: number
quality?: number
raw_snippet_count?: number
skipped_for_limit_count?: number
verified_snippet_count?: number
UnionMember1 { kind, model, api_ms, 12 more }
kind: "image_context"
model: string
api_ms?: number
cache_read_input_tokens?: number
error?: string
failed_verification_count?: number
failed_verification_samples?: Array<FailedVerificationSample>
reason: string
text: string
score?: number
fetch_ms?: number
fetched_content_type?: string
input_tokens?: number
markitdown_ms?: number
n_snippets?: number
output_tokens?: number
raw_image_context_count?: number
verified_image_context_count?: number
model?: string | null

Model that produced the final snippets for this source.

providers?: Array<string>

Provider names that returned this URL (admin-only).

quality?: number | null

Source quality score 0-10 (10 = best). 10 = current primary source (e.g. SEC filing, enacted statute, binding court opinion). 5-6 = secondary/pending (e.g. pending bill, law firm alert). 0 = irrelevant or unreliable (no snippets extracted).

minimum: 0
maximum: 10
effort?: "low" | "medium" | "high" | "max"

Anthropic API effort level. Defaults to high.

One of the following:
"low"
"medium"
"high"
"max"
model?: string

Model ID, e.g. "grounded-sonnet" or "grounded-opus".

params?: Params

Pipeline parameters. Merged with the selected preset defaults, or deep defaults when preset is custom or omitted.

analysisMaxWords?: number
analysisTimeoutMs?: number
category?: string
effort?: "low" | "medium" | "high" | "max"
One of the following:
"low"
"medium"
"high"
"max"
excludeDomains?: Array<string>
excludeText?: Array<string>
imageContextMaxSources?: number
includeDomains?: Array<string>
includeText?: Array<string>
maxSnippetsPerFile?: number
maxSnippetsPerWebSource?: number
numResults?: number
perSourceExtractTimeoutMs?: number
planMaxQueries?: number
planTimeoutMs?: number
startPublishedDate?: string

ISO date (YYYY-MM-DD)

preset?: "xfast" | "fast" | "deep" | "max" | "auto" | "custom"

Preset name. Use "custom" when request params do not exactly match a preset.

One of the following:
"xfast"
"fast"
"deep"
"max"
"auto"
"custom"
project_id?: string

Project ID for a new request. Ignored for follow-up turns.

format: uuid
request_id?: string

Request ID for follow-up turns. Reuses the same request instead of creating a new one.

format: uuid
sources?: Sources

Uploaded-file source options.

file_ids?: Array<string>

Uploaded grounded file IDs to use as sources.

web_policy?: "off" | "web_plus_files"

When files are attached, either use only files or augment them with web search.

One of the following:
"off"
"web_plus_files"
stream?: boolean

When true, the response is delivered as a Server-Sent Events stream. When false (default, matching OpenAI's spec), the bridge buffers events server-side and returns the terminal response.completed payload as JSON.

vendor_events?: boolean

When false, omit non-OpenAI vendor events (response.pipeline_stage, response.source_texts) and the web_search_call output item so the SSE stream parses cleanly with the official OpenAI SDK.

Returns
ResponseCreateResponse = unknown

Terminal response.completed payload (returned only when stream: false).

Create a grounded response

import CementedAI from 'cemented.ai';

const client = new CementedAI({
  apiKey: process.env['CEMENTED_AI_API_KEY'], // This is the default and can be omitted
});

const response = await client.responses.create({
  input: 'What did NVIDIA report in its latest 10-Q?',
  model: 'grounded-sonnet',
});

console.log(response);