Using a Skills-compatible agent like Claude Code? The Greenflash agent skill handles this automatically. Run /greenflash-onboard-prompts instead of copying this prompt.

Upgrade an Existing TypeScript SDK Integration to Log System Prompts

You are a coding agent tasked with updating our existing TypeScript SDK integration with the Greenflash Messages API so that it correctly logs system prompts for prompt optimization & management. We already have a working integration but are not currently submitting any system prompts alongside messages. Your goal is to enhance our integration so that Greenflash can automatically version, analyze, and optimize system prompts based on real conversation data.

High-Level Objective

Refactor or extend the existing implementation to:
  1. Include system prompts when logging conversations
  2. Support both simple string prompts and structured promptComponents
  3. Ensure dynamic and variable content in complex prompts is logged properly
  4. Use the correct TypeScript SDK syntax
  5. Determine the right logging pattern (batch vs individual calls) based on how the existing code is written
This is not a correction of our current setup; it is an evolution toward richer analytics and optimization tooling.

Logging System Prompts: Two Valid Formats

Greenflash supports two ways to include a system prompt when logging a conversation. You must choose between them based on the complexity of the prompt being submitted.

Option A: Simple String Prompt

Use this when the system prompt is a single static string.
// Fire-and-forget (recommended for logging)
client.messages.create({
  externalUserId: userId,
  externalConversationId: convoId,
  productId: productId,
  systemPrompt: "You are a helpful assistant. Be concise and friendly.",
  messages: [ ... ],
}).catch(err => console.error('Greenflash error:', err));

// Or with await if you need to wait for the response
await client.messages.create({
  externalUserId: userId,
  externalConversationId: convoId,
  productId: productId,
  systemPrompt: "You are a helpful assistant. Be concise and friendly.",
  messages: [ ... ],
});
Pros:
  • Easy to implement
  • Automatically gets tracked & versioned
  • Good for most use cases
Cons:
  • Cannot express structured or dynamic prompts
  • Less granular optimization
Use this when prompts are simple, static strings.

Option B: Structured Prompt with promptComponents

Use this when the prompt includes multiple pieces, variables, dynamic slots, templates, or RAG context elements. A structured prompt lets you segment instructions into named components, version them independently, and optimize at the component level.
const systemPrompt = {
  externalTemplateId: "my-assistant-prompt",
  components: [
    {
      type: "system",
      name: "base_instructions",
      content: "You are a customer support assistant for {{company}}."
    },
    {
      type: "system",
      name: "tone_guidelines",
      content: "Always be professional and empathetic."
    },
    {
      type: "rag",
      name: "context",
      content: dynamicRagText,
      isDynamic: true
    }
  ],
  variables: {
    customerName: customerName,
    company: companyName
  }
};

// Fire-and-forget (recommended for logging)
client.messages.create({
  externalUserId: userId,
  externalConversationId: convoId,
  productId: productId,
  systemPrompt: systemPrompt,
  messages: [ ... ],
}).catch(err => console.error('Greenflash error:', err));
Key Notes for Structured Prompts:
  • externalTemplateId groups all versions of this prompt under a single lineage. Use a stable, human-readable identifier (e.g., "customer-support-agent"). Without this, each unique prompt creates a separate lineage.
  • Valid component type values: system, user, tool, guardrail, rag, agent, other. Each component can also have a source: customer (default), participant, greenflash, or agent.
  • Use isDynamic: true for parts of the prompt that vary per conversation (e.g., retrieved context, dynamic instructions)
  • Use variables on the systemPrompt object to pass template variable values (e.g., {{companyName}})
Advantages:
  • Modular prompt structure
  • Component-level optimization
  • Variable interpolation
  • Dynamic content slots for RAG or agent outputs
Use this when prompts are complex, have templates, or contain dynamic data.

Fire-and-Forget Logging

For logging, you typically don’t want to block your LLM response while waiting for the Greenflash API call to complete. Use fire-and-forget by not awaiting the promise:
// Fire-and-forget: don't await, just add .catch() to handle errors
client.messages.create({
  externalUserId: userId,
  externalConversationId: convoId,
  productId: productId,
  systemPrompt: systemPrompt,
  messages: messageList,
}).catch(err => console.error('Greenflash logging error:', err));
Note: The .catch() is recommended to prevent unhandled promise rejections from crashing your app; the logging call itself runs in the background without blocking your response.

Detecting the Right Pattern

Inspect the existing codebase and choose the pattern that minimally disrupts current logic:
  • If prompts are static strings and there is no variable logic or dynamic content, use Simple String Format
  • If prompts are assembled from pieces, templates, or dynamic slots, use Structured Prompt with Components
  • If code emits prompt text dynamically as it runs, wrap that logic into structured components with isDynamic: true where needed
Once the pattern is chosen, update the logging logic accordingly.
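The decision above can be sketched as a small helper. This is a minimal sketch, not part of the Greenflash SDK: the PromptComponent and StructuredPrompt shapes mirror the examples in this document, and hasTemplateVariables and buildSystemPrompt are illustrative names.

```typescript
// Shapes mirroring the structured-prompt examples in this doc (illustrative, not SDK types).
interface PromptComponent {
  type: "system" | "user" | "tool" | "guardrail" | "rag" | "agent" | "other";
  name: string;
  content: string;
  isDynamic?: boolean;
}

interface StructuredPrompt {
  externalTemplateId: string;
  components: PromptComponent[];
  variables?: Record<string, string>;
}

// A prompt containing {{variable}} slots should be structured, not a plain string.
function hasTemplateVariables(prompt: string): boolean {
  return /\{\{\s*\w+\s*\}\}/.test(prompt);
}

// Returns either a Simple String Prompt or a Structured Prompt,
// based on whether the text is static or templated.
function buildSystemPrompt(
  prompt: string,
  templateId: string,
  variables?: Record<string, string>
): string | StructuredPrompt {
  if (!hasTemplateVariables(prompt) && !variables) {
    return prompt; // Simple String Format: static, no variable logic
  }
  return {
    externalTemplateId: templateId,
    components: [{ type: "system", name: "base_instructions", content: prompt }],
    variables,
  };
}
```

Either return value can be passed directly as the systemPrompt field on messages.create(...).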

Logging Messages with Prompts

After adding the system prompt, the rest of the messages follow the existing logging pattern:
  • You may send all messages in a batch if they are known at the end of execution
  • Or you may log messages incrementally as they happen (e.g., streaming)
  • Use client.messages.create(...) for each call when logging incrementally
  • For batch mode, send a single call with systemPrompt and message array
Either approach is valid — choose what aligns best with the existing code flow.
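The two approaches can be sketched side by side. This assumes a Greenflash-style client whose messages.create accepts the fields shown elsewhere in this document; the GreenflashLike interface is a stand-in so the shapes are concrete, and you would pass the real SDK client instead.

```typescript
interface Message {
  role: "user" | "assistant";
  content: string;
}

// Stand-in for the real SDK client (assumption: create takes one params object).
interface GreenflashLike {
  messages: { create(params: Record<string, unknown>): Promise<unknown> };
}

// Batch mode: one call at the end of execution with the full message array.
function logBatch(
  client: GreenflashLike,
  ids: { userId: string; convoId: string; productId: string },
  systemPrompt: unknown,
  messages: Message[]
): void {
  client.messages
    .create({
      externalUserId: ids.userId,
      externalConversationId: ids.convoId,
      productId: ids.productId,
      systemPrompt,
      messages,
    })
    .catch((err) => console.error("Greenflash error:", err));
}

// Incremental mode: one call per message as it happens (e.g., streaming).
// systemPrompt is included on every call so the conversation stays linked to it.
function logIncremental(
  client: GreenflashLike,
  ids: { userId: string; convoId: string; productId: string },
  systemPrompt: unknown,
  message: Message
): void {
  logBatch(client, ids, systemPrompt, [message]);
}
```

Both helpers are fire-and-forget: they never await the call, and errors are caught rather than thrown.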

TypeScript SDK Integration Checklist

Your code should:
  1. Initialize the TypeScript SDK with our API key
  2. Construct systemPrompt:
    • As a simple string, or
    • As a structured object with promptComponents
  3. Include systemPrompt on every messages.create(...) call
  4. Respect dynamic variables using variables on the systemPrompt object if structured prompts have templates
  5. Choose batching vs streaming appropriately
  6. Preserve existing logging semantics for message bodies
  7. Use fire-and-forget pattern (no await) with .catch() to avoid blocking LLM responses
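For checklist item 1, initialization might look like the sketch below. The import path and constructor name are assumptions — use whatever the Greenflash TypeScript SDK actually exports; the only firm recommendation here is reading the API key from an environment variable rather than hardcoding it.

```typescript
// import { Greenflash } from "@greenflash/sdk"; // hypothetical import path — check the SDK docs

// Reads the API key from the environment so it stays out of source control.
// The variable name GREENFLASH_API_KEY is a convention assumed here.
function getApiKey(): string {
  const key = process.env.GREENFLASH_API_KEY;
  if (!key) {
    throw new Error("GREENFLASH_API_KEY is not set");
  }
  return key;
}

// const client = new Greenflash({ apiKey: getApiKey() }); // hypothetical constructor
```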
Example with variables (fire-and-forget):
const systemPromptWithVars = {
  externalTemplateId: "my-assistant-prompt",
  components: [/* your prompt components */],
  variables: {
    userName: userName,
    companyName: companyName
  }
};

client.messages.create({
  externalUserId: userId,
  externalConversationId: conversationId,
  productId: productId,
  systemPrompt: systemPromptWithVars,
  messages: messageList,
}).catch(err => console.error('Greenflash error:', err));
Example with variables (awaited):
const systemPromptWithVars = {
  externalTemplateId: "my-assistant-prompt",
  components: [/* your prompt components */],
  variables: {
    userName: userName,
    companyName: companyName
  }
};

await client.messages.create({
  externalUserId: userId,
  externalConversationId: conversationId,
  productId: productId,
  systemPrompt: systemPromptWithVars,
  messages: messageList,
});

Tips

  • String prompts are great to start with and will get metrics and analytics immediately
  • Structured prompts unlock component-level optimization and versioning
  • Dynamic components (isDynamic: true) allow RAG and other runtime context to be logged correctly
  • Interpolated variables let you keep templates generic while capturing personalized conversation data

Summary of What to Deliver

  • Updated TypeScript SDK integration that logs system prompts
  • Support for both simple string prompts and structured prompt components
  • Logic to choose the right pattern
  • Correct logging of dynamic data and variables
  • No disruption to existing message logging behavior
This will unlock Greenflash’s prompt optimization capabilities across our AI products.