Brun-E Guaranteed Session Report Implementation Plan
For agentic workers: REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
Goal: Guarantee that every Brun-E session ends with a non-null final_report, regardless of how the session closes (model-initiated or user-initiated), by enriching the tool schema, enabling Whisper transcription, and adding a Chat Completions fallback report generator.
Architecture:
- The end_session tool schema is enriched with a complete, required report property so the Realtime model knows exactly what to generate and the sideband schema validator enforces it.
- Whisper transcription is enabled on the Realtime session; the frontend accumulates transcript entries from DataChannel events and forwards them to the backend /complete call.
- A new ISessionReportGenerator port + ChatCompletionsReportGeneratorAdapter generates a guaranteed-shape report via OpenAI Chat Completions (strict JSON schema) whenever the session completes without a report — covering user-initiated close and model failures.
Tech Stack: NestJS (CQRS), OpenAI Realtime API (WebRTC + DataChannel), OpenAI Chat Completions API (gpt-4o-2024-08-06, strict JSON schema), TypeScript, class-validator, AJV (sideband schema validator), Jest.
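The whole guarantee reduces to one decision in the complete-session handler: keep the model-supplied report when present, otherwise call the fallback generator. A minimal sketch of that decision; the names `resolveFinalReport` and `ReportGenerator` are illustrative, not the actual handler API:

```typescript
// Illustrative sketch only: the real handler injects ISessionReportGenerator
// (Task 7) and runs this decision before opening the DB transaction.
type TranscriptEntry = { role: 'user' | 'assistant'; text: string };
type Report = Record<string, unknown>;

interface ReportGenerator {
  generateReport(context: { transcript: TranscriptEntry[] }): Promise<Report>;
}

async function resolveFinalReport(
  modelReport: Report | null,
  transcript: TranscriptEntry[],
  generator: ReportGenerator,
): Promise<Report> {
  // A model-supplied report always wins; the fallback covers user-initiated
  // close and model failures, so final_report is never null.
  if (modelReport !== null) return modelReport;
  return generator.generateReport({ transcript });
}
```

Per the file map, the handler performs this call before the transaction, which keeps the slow OpenAI request outside the database transaction.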
File Map
Backend (e-training-back/src/modules/brun-e/)
| File | Action | Responsibility |
|---|---|---|
| `infrastructure/sideband/schemas/brun_e_sideband_tool.schemas.ts` | Modify | Add full report schema to end_session, make report required |
| `infrastructure/adapters/brun_e_openai_tools.ts` | Modify | Update end_session tool description |
| `infrastructure/sideband/schemas/brun_e_sideband_schema.contract.spec.ts` | Modify | Add test: end_session without report must fail |
| `infrastructure/adapters/openai_realtime.adapter.ts` | Modify | Add input_audio_transcription: { model: 'whisper-1' } to session payload |
| `infrastructure/adapters/openai_realtime.adapter.spec.ts` | Modify | Update snapshot test to include transcription config |
| `application/dtos/complete_brun_e_session_request.dto.ts` | Modify | Add transcript array field |
| `application/commands/complete_session/complete_brun_e_session.command.ts` | Modify | Add transcript parameter |
| `infrastructure/http/brun_e_http.controller.ts` | Modify | Pass body.transcript to command |
| `application/ports/session_report_generator.port.ts` | Create | Port interface + types for report generation |
| `application/ports/index.ts` | Modify | Export new port |
| `infrastructure/adapters/chat_completions_report_generator.adapter.ts` | Create | Chat Completions adapter with strict JSON schema |
| `infrastructure/adapters/chat_completions_report_generator.adapter.spec.ts` | Create | Unit tests for the adapter |
| `application/commands/complete_session/complete_brun_e_session.handler.ts` | Modify | Inject report generator, call before transaction when report is null |
| `application/commands/complete_session/complete_brun_e_session.handler.spec.ts` | Modify | Add tests for report generation path |
| `brun_e.module.ts` | Modify | Register SESSION_REPORT_GENERATOR → ChatCompletionsReportGeneratorAdapter |
Frontend (e-training-front/)
| File | Action | Responsibility |
|---|---|---|
| `lib/brun-e/session-runtime.ts` | Modify | Accumulate transcript from DataChannel; send with complete(); enhance system instructions |
| `lib/types/brun-e-session.ts` | Modify | Add TranscriptEntry type + transcript to BrunECompleteRequestBody |
| `lib/brun-e/json-api-parse.ts` | Modify | Add console.warn when parseFinalReport returns null |
| `lib/brun-e/__tests__/json-api-parse.test.ts` | Modify | Add test: partial report without emap_projection logs warning and returns null |
Task 1: Enrich end_session tool schema + description
Files:
- Modify: src/modules/brun-e/infrastructure/sideband/schemas/brun_e_sideband_tool.schemas.ts
- Modify: src/modules/brun-e/infrastructure/adapters/brun_e_openai_tools.ts
- Test: src/modules/brun-e/infrastructure/sideband/schemas/brun_e_sideband_schema.contract.spec.ts
- [ ] Step 1.1: Write the failing test — end_session without report must fail
Add one entry to the existing it.each invalid-args array in brun_e_sideband_schema.contract.spec.ts:
it.each([
['lookup_methodology', { query: 'ab' }],
['get_user_context', { unexpected: true }],
['end_session', { reason: 'invalid-reason' }],
['end_session', { reason: 'user' }], // ← ADD: missing required report
] as Array<[BrunEToolName, unknown]>)(
'should reject invalid %s arguments',
...
)
- [ ] Step 1.2: Run test to verify it currently PASSES (meaning report is not enforced yet)
Expected: The new end_session / { reason: 'user' } test PASSES (bug confirmed — it should have failed).
- [ ] Step 1.3: Replace the entire `end_session` entry in `brun_e_sideband_tool.schemas.ts`
end_session: {
arguments: {
type: 'object',
additionalProperties: false,
required: ['reason', 'report'], // ← report is now required
properties: {
reason: {
type: 'string',
enum: ['user', 'timeout', 'disconnect', 'system'],
},
report: {
type: 'object',
additionalProperties: false,
required: [
'title',
'summary',
'emap_projection',
'analysis_raw',
'insights',
'recommendations',
'alerts',
'generalNote',
'confidence',
'meta',
],
properties: {
title: {
type: 'string',
description: 'Short session title (e.g. "Coaching Session – April 2026")',
},
summary: {
type: 'string',
description: 'Concise recap of the session (2-4 sentences)',
},
emap_projection: {
type: 'object',
additionalProperties: false,
required: ['affective', 'effective', 'perspective'],
properties: {
affective: {
type: 'object',
additionalProperties: false,
required: ['score', 'label', 'insight'],
properties: {
score: {
type: 'number',
description: 'Score 0.0 (very low) to 1.0 (very high)',
},
label: {
type: 'string',
description: '"High" (>=0.7), "Medium" (0.4–0.69), or "Low" (<0.4)',
},
insight: {
type: 'string',
description: 'One-sentence explanation for this score',
},
},
},
effective: {
type: 'object',
additionalProperties: false,
required: ['score', 'label', 'insight'],
properties: {
score: { type: 'number', description: 'Score 0.0 to 1.0' },
label: { type: 'string', description: '"High", "Medium", or "Low"' },
insight: { type: 'string', description: 'Explanation for this score' },
},
},
perspective: {
type: 'object',
additionalProperties: false,
required: ['score', 'label', 'insight'],
properties: {
score: { type: 'number', description: 'Score 0.0 to 1.0' },
label: { type: 'string', description: '"High", "Medium", or "Low"' },
insight: { type: 'string', description: 'Explanation for this score' },
},
},
},
},
analysis_raw: {
type: 'string',
description: 'Extended qualitative analysis (3-6 sentences)',
},
insights: {
type: 'array',
items: { type: 'string' },
description: 'Key takeaways from the session (2-5 items)',
},
recommendations: {
type: 'array',
items: { type: 'string' },
description: 'Actionable next steps for the user (2-5 items)',
},
alerts: {
type: 'array',
items: { type: 'string' },
description: 'Concerning patterns to flag (empty array if none)',
},
generalNote: {
type: 'string',
description: 'Overall coaching observation',
},
confidence: {
type: 'number',
description: 'Confidence in the assessment from 0.0 to 1.0',
},
meta: {
type: 'object',
additionalProperties: false,
required: ['schema_version', 'generated_at'],
properties: {
schema_version: { type: 'string', description: 'Always "2"' },
generated_at: {
type: 'string',
description: 'ISO 8601 timestamp of report generation',
},
},
},
},
},
},
},
result: {
type: 'object',
additionalProperties: false,
required: ['completed', 'already_completed'],
properties: {
completed: { type: 'boolean' },
already_completed: { type: 'boolean' },
},
},
},
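The label convention the descriptions above document (score >= 0.7 is "High", 0.4–0.69 is "Medium", below 0.4 is "Low") is worth pinning down, since a JSON Schema description can state it but not enforce it. A sketch of the mapping; the helper name is hypothetical and nothing in the plan requires such a function to exist:

```typescript
// Hypothetical helper showing the documented threshold convention.
// The '>= 0.4' branch covers the whole [0.4, 0.7) range, so 0.69 and
// 0.699 both map to 'Medium'; enforcement stays with the model prompt.
function scoreToLabel(score: number): 'High' | 'Medium' | 'Low' {
  if (score >= 0.7) return 'High';
  if (score >= 0.4) return 'Medium';
  return 'Low';
}
```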
- [ ] Step 1.4: Update `end_session` description in `brun_e_openai_tools.ts`
end_session:
'Closes the coaching session and generates the final E-MAP assessment report. ' +
'MUST be called when the conversation reaches a natural conclusion or the user asks to stop. ' +
'REQUIRED: include a complete report object with emap_projection scores (0.0–1.0 scale) ' +
'for three dimensions — affective (emotional engagement), effective (operational focus), ' +
'perspective (adaptability) — each with score, label ("High">=0.7/"Medium">=0.4/"Low"<0.4), ' +
'and insight. Also required: title, summary, analysis_raw, insights (array), ' +
'recommendations (array), alerts (array, empty if none), generalNote, ' +
'confidence (0.0–1.0), and meta with schema_version "2" and generated_at ISO timestamp. ' +
'Derive all scores from what was actually discussed in the session.',
- [ ] Step 1.5: Run contract spec to verify new test now FAILS correctly
Expected: The end_session / { reason: 'user' } test now FAILS (schema validation rejects it).
- [ ] Step 1.6: Run full contract test to verify no regressions
Expected: All tests pass — valid end_session with full report still passes; end_session with missing report or invalid reason fails.
- [ ] Step 1.7: Run tools mapping test
Expected: PASS — tools mapping is structural and doesn't depend on specific property counts.
- [ ] Step 1.8: Commit
cd e-training-back
git add src/modules/brun-e/infrastructure/sideband/schemas/brun_e_sideband_tool.schemas.ts
git add src/modules/brun-e/infrastructure/adapters/brun_e_openai_tools.ts
git add src/modules/brun-e/infrastructure/sideband/schemas/brun_e_sideband_schema.contract.spec.ts
git commit -m "feat(brun-e): enrich end_session schema with required report and detailed properties"
Task 2: Enhance injected session instructions (Frontend)
Files:
- Modify: lib/brun-e/session-runtime.ts (lines 436–468)
No unit test for this: it's a string injected into the OpenAI DataChannel — tested via integration/E2E.
- [ ] Step 2.1: Replace the `injectSessionInstructions` text in `session-runtime.ts`
Locate the text: field inside injectSessionInstructions (currently around line 451). Replace only the text string:
text: `MANDATORY SESSION PROTOCOL:
1. Call get_user_context IMMEDIATELY as your very first action before speaking.
2. When concluding the session, you MUST call end_session with a fully populated report object. Never say goodbye without calling end_session first.
REQUIRED report structure (every field is mandatory):
{
"title": "Brief session title (e.g. 'April Coaching Session')",
"summary": "2-4 sentence recap of what was discussed",
"emap_projection": {
"affective": { "score": 0.0-1.0, "label": "High|Medium|Low", "insight": "one sentence explanation" },
"effective": { "score": 0.0-1.0, "label": "High|Medium|Low", "insight": "one sentence explanation" },
"perspective": { "score": 0.0-1.0, "label": "High|Medium|Low", "insight": "one sentence explanation" }
},
"analysis_raw": "3-6 sentence qualitative analysis of the full session",
"insights": ["key takeaway 1", "key takeaway 2"],
"recommendations": ["concrete action 1", "concrete action 2"],
"alerts": [],
"generalNote": "overall coaching observation",
"confidence": 0.0-1.0,
"meta": { "schema_version": "2", "generated_at": "<current ISO 8601 timestamp>" }
}
Label thresholds: score >= 0.7 → "High", 0.4–0.69 → "Medium", < 0.4 → "Low".
Affective = emotional engagement and trust. Effective = operational focus and goal achievement. Perspective = ability to see multiple viewpoints and adapt.
Base all scores on the actual conversation content.`,
- [ ] Step 2.2: Commit
cd e-training-front
git add lib/brun-e/session-runtime.ts
git commit -m "feat(brun-e): enhance session instructions with full report schema template"
Task 3: Add parser logging + test (Frontend)
Files:
- Modify: lib/brun-e/json-api-parse.ts
- Modify: lib/brun-e/__tests__/json-api-parse.test.ts
- [ ] Step 3.1: Write the failing test
Add to lib/brun-e/__tests__/json-api-parse.test.ts:
describe("parseBrunESessionCompleteDocument — report parsing", () => {
it("returns null finalReport and warns when report is present but lacks emap_projection", () => {
const warnSpy = jest.spyOn(console, "warn").mockImplementation(() => {});
const raw = {
data: {
type: "brun-e-session-complete",
id: "sid-2",
attributes: {
completed: true,
already_completed: false,
final_report: {
title: "Test Session",
summary: "Some summary",
// emap_projection intentionally missing
insights: ["insight 1"],
recommendations: [],
alerts: [],
generalNote: "note",
confidence: 0.8,
},
},
},
};
const result = parseBrunESessionCompleteDocument(raw);
expect(result.finalReport).toBeNull();
expect(warnSpy).toHaveBeenCalledWith(
expect.stringContaining("parseFinalReport: discarding report"),
expect.anything(),
);
warnSpy.mockRestore();
});
it("parses a fully valid report", () => {
const raw = {
data: {
type: "brun-e-session-complete",
id: "sid-3",
attributes: {
completed: true,
already_completed: false,
final_report: {
title: "April Session",
summary: "User explored trust barriers.",
emap_projection: {
affective: { score: 0.8, label: "High", insight: "Strong engagement." },
effective: { score: 0.6, label: "Medium", insight: "Moderate focus." },
perspective: { score: 0.7, label: "High", insight: "Good adaptability." },
},
insights: ["Insight 1"],
recommendations: ["Action 1"],
alerts: [],
generalNote: "Good session.",
confidence: 0.85,
},
},
},
};
const result = parseBrunESessionCompleteDocument(raw);
expect(result.finalReport).toMatchObject({
title: "April Session",
emapProjection: {
affective: { score: 0.8, label: "High", insight: "Strong engagement." },
},
});
});
});
- [ ] Step 3.2: Run test to verify it fails
Expected: FAIL — warnSpy not called yet.
- [ ] Step 3.3: Update `parseFinalReport` in `lib/brun-e/json-api-parse.ts`
function parseFinalReport(raw: Record<string, unknown>): FinalReport | null {
const projection = raw.emap_projection as Record<string, unknown> | undefined;
if (!isRecord(projection)) {
console.warn(
"parseFinalReport: discarding report — missing or invalid emap_projection",
{ receivedKeys: Object.keys(raw) },
);
return null;
}
const affective = projection.affective as Record<string, unknown> | undefined;
const effective = projection.effective as Record<string, unknown> | undefined;
const perspective = projection.perspective as Record<string, unknown> | undefined;
if (!isRecord(affective) || !isRecord(effective) || !isRecord(perspective)) {
console.warn(
"parseFinalReport: discarding report — emap_projection missing required dimension",
{ affective: !!affective, effective: !!effective, perspective: !!perspective },
);
return null;
}
// rest of the function unchanged
function parseDim(d: Record<string, unknown>): EmapDimension {
return {
score: typeof d.score === "number" ? d.score : 0,
label: typeof d.label === "string" ? d.label : "",
insight: typeof d.insight === "string" ? d.insight : "",
};
}
return {
title: typeof raw.title === "string" ? raw.title : "",
summary: typeof raw.summary === "string" ? raw.summary : "",
emapProjection: {
affective: parseDim(affective),
effective: parseDim(effective),
perspective: parseDim(perspective),
},
insights: Array.isArray(raw.insights)
? (raw.insights as unknown[]).filter((x): x is string => typeof x === "string")
: [],
recommendations: Array.isArray(raw.recommendations)
? (raw.recommendations as unknown[]).filter((x): x is string => typeof x === "string")
: [],
alerts: Array.isArray(raw.alerts)
? (raw.alerts as unknown[]).filter((x): x is string => typeof x === "string")
: [],
generalNote: typeof raw.generalNote === "string" ? raw.generalNote : "",
confidence: typeof raw.confidence === "number" ? raw.confidence : 0,
};
}
- [ ] Step 3.4: Run test to verify it passes
Expected: All tests pass.
- [ ] Step 3.5: Commit
cd e-training-front
git add lib/brun-e/json-api-parse.ts
git add "lib/brun-e/__tests__/json-api-parse.test.ts"
git commit -m "feat(brun-e): add diagnostic logging when report parser discards report"
Task 4: Enable Whisper transcription on Realtime session (Backend)
Files:
- Modify: src/modules/brun-e/infrastructure/adapters/openai_realtime.adapter.ts
- Modify: src/modules/brun-e/infrastructure/adapters/openai_realtime.adapter.spec.ts
- [ ] Step 4.1: Write the failing test
Add to openai_realtime.adapter.spec.ts (after the existing "should send prompt/model from runtime config snapshot" test):
it('should include input_audio_transcription with whisper-1 in session payload', async () => {
const environment = {
get: jest.fn((key: string) => {
if (key === 'OPENAI_API_KEY') return 'sk-test';
if (key === 'BRUNE_OPENAI_BASE_URL') return 'https://api.openai.com';
return undefined;
}),
} as unknown as EnvironmentService;
fetchMock.mockResolvedValue({
ok: true,
json: () =>
Promise.resolve({
client_secret: { value: 'ek_test', expires_at: 1_711_000_000 },
}),
});
const adapter = new OpenAIRealtimeAdapter(environment);
await adapter.createEphemeralKey({
promptId: 'pmpt_abc',
promptVersion: '17',
model: 'gpt-4o-realtime-preview',
});
const callBody = JSON.parse(
(fetchMock.mock.calls[0][1] as RequestInit).body as string,
) as { session: Record<string, unknown> };
expect(callBody.session.input_audio_transcription).toEqual({
model: 'whisper-1',
});
});
- [ ] Step 4.2: Run test to verify it fails
Expected: FAIL — input_audio_transcription is not in the payload yet.
- [ ] Step 4.3: Add `input_audio_transcription` to the session payload in `openai_realtime.adapter.ts`
After the line `sessionPayload.prompt = { id: config.promptId, version: config.promptVersion };`, add `sessionPayload.input_audio_transcription = { model: 'whisper-1' };`. The full sessionPayload block should now be:
const sessionPayload: Record<string, unknown> = {
type: 'realtime',
model: config.model,
tools: BRUN_E_OPENAI_REALTIME_TOOLS,
tool_choice: 'auto',
};
sessionPayload.prompt = {
id: config.promptId,
version: config.promptVersion,
};
sessionPayload.input_audio_transcription = { model: 'whisper-1' };
- [ ] Step 4.4: Update the existing snapshot test to include `input_audio_transcription`
The existing test "should send prompt/model from runtime config snapshot" checks body: JSON.stringify({session: {...}}) exactly. Update its expected body:
body: JSON.stringify({
session: {
type: 'realtime',
model: 'gpt-4o-realtime-preview',
tools: BRUN_E_OPENAI_REALTIME_TOOLS,
tool_choice: 'auto',
prompt: {
id: 'pmpt_abc',
version: '17',
},
input_audio_transcription: { model: 'whisper-1' },
},
}),
- [ ] Step 4.5: Run all adapter tests to verify all pass
Expected: All tests pass.
- [ ] Step 4.6: Commit
cd e-training-back
git add src/modules/brun-e/infrastructure/adapters/openai_realtime.adapter.ts
git add src/modules/brun-e/infrastructure/adapters/openai_realtime.adapter.spec.ts
git commit -m "feat(brun-e): enable whisper-1 input audio transcription on realtime sessions"
Task 5: Accumulate + send transcript from frontend
Files:
- Modify: lib/types/brun-e-session.ts
- Modify: lib/brun-e/session-runtime.ts
No additional test needed: session-runtime.ts is a browser runtime class (WebRTC, DataChannel) — integration-tested only.
- [ ] Step 5.1: Add `TranscriptEntry` type and `transcript` to request body in `lib/types/brun-e-session.ts`
Add after the BrunECompleteReason type:
export interface TranscriptEntry {
  role: "user" | "assistant";
  text: string;
}
Update BrunECompleteRequestBody:
export interface BrunECompleteRequestBody {
reason?: BrunECompleteReason;
transcript?: TranscriptEntry[];
}
- [ ] Step 5.2: Add `transcript` private field to `BrunESessionRuntime` class
In lib/brun-e/session-runtime.ts, add the import at the top:
import type { TranscriptEntry } from "../types/brun-e-session";
Then inside the class body (after private remoteAudio), add:
private transcript: TranscriptEntry[] = [];
- [ ] Step 5.3: Accumulate transcript in `dc.onmessage`
In connectMediaSidebandAndRtc, inside dc.onmessage, after the existing function_call routing block (after the closing } of the if (parsed.type === "response.output_item.done" ...) block), add:
// Accumulate transcript for report generation fallback
if (
parsed.type === "response.audio_transcript.done" &&
typeof parsed.transcript === "string" &&
parsed.transcript.trim().length > 0
) {
this.transcript.push({ role: "assistant", text: parsed.transcript.trim() });
}
if (
parsed.type === "conversation.item.input_audio_transcription.completed" &&
typeof parsed.transcript === "string" &&
parsed.transcript.trim().length > 0
) {
this.transcript.push({ role: "user", text: parsed.transcript.trim() });
}
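The two routing branches above can be exercised outside the browser runtime. This standalone sketch mirrors them one-for-one; the free function is for illustration only, since in the plan the logic lives inside `dc.onmessage`:

```typescript
type TranscriptEntry = { role: "user" | "assistant"; text: string };

// Mirrors the dc.onmessage branches: assistant turns arrive as
// response.audio_transcript.done events, user turns as
// conversation.item.input_audio_transcription.completed events.
function routeTranscriptEvent(
  transcript: TranscriptEntry[],
  parsed: { type?: string; transcript?: unknown },
): void {
  const text = typeof parsed.transcript === "string" ? parsed.transcript.trim() : "";
  if (text.length === 0) return; // drop empty or whitespace-only transcripts
  if (parsed.type === "response.audio_transcript.done") {
    transcript.push({ role: "assistant", text });
  } else if (parsed.type === "conversation.item.input_audio_transcription.completed") {
    transcript.push({ role: "user", text });
  }
}
```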
- [ ] Step 5.4: Send transcript in `finishAfterRemoteEnd()`
Replace:
const result = await realBrunESession.complete(sid, { reason: "system" });
With:
const result = await realBrunESession.complete(sid, {
reason: "system",
transcript: this.transcript,
});
- [ ] Step 5.5: Send transcript in `end()`
Replace:
With:
- [ ] Step 5.6: Clear transcript in `cleanupConnections()`
At the end of cleanupConnections(), before the closing }, add:
this.transcript = [];
- [ ] Step 5.7: Commit
cd e-training-front
git add lib/types/brun-e-session.ts
git add lib/brun-e/session-runtime.ts
git commit -m "feat(brun-e): accumulate whisper transcript and forward to complete endpoint"
Task 6: Accept transcript in backend complete endpoint
Files:
- Modify: src/modules/brun-e/application/dtos/complete_brun_e_session_request.dto.ts
- Modify: src/modules/brun-e/application/commands/complete_session/complete_brun_e_session.command.ts
- Modify: src/modules/brun-e/infrastructure/http/brun_e_http.controller.ts
No new tests needed: existing command handler tests cover the path; HTTP controller is thin and tested via integration.
- [ ] Step 6.1: Add `TranscriptEntryDTO` and `transcript` field to `complete_brun_e_session_request.dto.ts`
Replace the entire file with:
import { ApiPropertyOptional } from '@nestjs/swagger';
import {
IsArray,
IsEnum,
IsIn,
IsOptional,
IsString,
ValidateNested,
} from 'class-validator';
import { Type } from 'class-transformer';
import { BrunEClosedReason } from '@BrunE/domain/enums/brun_e_closed_reason.enum';
class TranscriptEntryDTO {
@IsIn(['user', 'assistant'])
role: 'user' | 'assistant';
@IsString()
text: string;
}
export class CompleteBrunESessionRequestDTO {
@ApiPropertyOptional({
description: 'Reason for closing the session',
enum: BrunEClosedReason,
enumName: 'BrunEClosedReason',
example: BrunEClosedReason.USER,
})
@IsOptional()
@IsEnum(BrunEClosedReason)
reason?: BrunEClosedReason;
@ApiPropertyOptional({
description: 'Session transcript entries for fallback report generation',
type: 'array',
items: {
type: 'object',
properties: {
role: { type: 'string', enum: ['user', 'assistant'] },
text: { type: 'string' },
},
},
})
@IsOptional()
@IsArray()
@ValidateNested({ each: true })
@Type(() => TranscriptEntryDTO)
transcript?: TranscriptEntryDTO[];
}
- [ ] Step 6.2: Add `transcript` parameter to `CompleteBrunESessionCommand`
Replace the constructor in complete_brun_e_session.command.ts:
import { Command } from '@nestjs/cqrs';
import { CompleteBrunESessionDTO } from '@BrunE/application/dtos/complete_brun_e_session.dto';
import { BrunEClosedReason } from '@BrunE/domain/enums/brun_e_closed_reason.enum';
export type TranscriptEntry = { role: 'user' | 'assistant'; text: string };
export class CompleteBrunESessionCommand extends Command<CompleteBrunESessionDTO> {
constructor(
public readonly sessionId: string,
public readonly userId: string,
public readonly organizationId: string,
public readonly reason?: BrunEClosedReason,
public readonly report?: Record<string, unknown>,
public readonly transcript?: TranscriptEntry[],
) {
super();
}
}
- [ ] Step 6.3: Pass `transcript` from controller to command in `brun_e_http.controller.ts`
In completeSession, replace:
new CompleteBrunESessionCommand(
sessionId,
context.userId,
context.organizationId,
body?.reason,
),
With:
new CompleteBrunESessionCommand(
sessionId,
context.userId,
context.organizationId,
body?.reason,
undefined,
body?.transcript,
),
- [ ] Step 6.4: Run existing command handler tests to verify no regressions
Expected: All existing tests pass (the new parameter is optional with a default of undefined).
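Why no regressions are expected: a trailing optional constructor parameter defaults to undefined, so existing instantiations compile and behave unchanged. A reduced illustration; the class here is a stand-in, not the real command:

```typescript
// Stand-in for CompleteBrunESessionCommand: adding a trailing optional
// parameter does not affect call sites that omit it.
class Cmd {
  constructor(
    public readonly sessionId: string,
    public readonly reason?: string,
    public readonly report?: Record<string, unknown>,
    public readonly transcript?: { role: 'user' | 'assistant'; text: string }[],
  ) {}
}

// Existing call site, written before transcript existed: still valid.
const legacy = new Cmd('sid-1', 'user');
```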
- [ ] Step 6.5: Commit
cd e-training-back
git add src/modules/brun-e/application/dtos/complete_brun_e_session_request.dto.ts
git add src/modules/brun-e/application/commands/complete_session/complete_brun_e_session.command.ts
git add src/modules/brun-e/infrastructure/http/brun_e_http.controller.ts
git commit -m "feat(brun-e): accept session transcript in complete endpoint for report generation"
Task 7: Create ISessionReportGenerator port + Chat Completions adapter
Files:
- Create: src/modules/brun-e/application/ports/session_report_generator.port.ts
- Modify: src/modules/brun-e/application/ports/index.ts
- Create: src/modules/brun-e/infrastructure/adapters/chat_completions_report_generator.adapter.ts
- Create: src/modules/brun-e/infrastructure/adapters/chat_completions_report_generator.adapter.spec.ts
- [ ] Step 7.1: Write the failing adapter tests
Create src/modules/brun-e/infrastructure/adapters/chat_completions_report_generator.adapter.spec.ts:
import { ChatCompletionsReportGeneratorAdapter } from './chat_completions_report_generator.adapter';
import { EnvironmentService } from '@config/environment/environment.service';
describe('ChatCompletionsReportGeneratorAdapter', () => {
const fetchMock = jest.fn();
beforeEach(() => {
fetchMock.mockReset();
globalThis.fetch = fetchMock as unknown as typeof fetch;
});
const makeEnv = (apiKey?: string, baseUrl?: string) =>
({
get: jest.fn((key: string) => {
if (key === 'OPENAI_API_KEY') return apiKey;
if (key === 'BRUNE_OPENAI_BASE_URL') return baseUrl;
return undefined;
}),
}) as unknown as EnvironmentService;
const validReport = {
title: 'April Session',
summary: 'User explored trust barriers.',
emap_projection: {
affective: { score: 0.8, label: 'High', insight: 'Strong engagement.' },
effective: { score: 0.6, label: 'Medium', insight: 'Moderate focus.' },
perspective: { score: 0.7, label: 'High', insight: 'Good adaptability.' },
},
analysis_raw: 'Detailed analysis.',
insights: ['Insight 1'],
recommendations: ['Action 1'],
alerts: [],
generalNote: 'Good session.',
confidence: 0.85,
meta: { schema_version: '2', generated_at: '2026-04-07T10:00:00.000Z' },
};
it('should call chat completions with correct payload and return parsed report', async () => {
fetchMock.mockResolvedValue({
ok: true,
json: () =>
Promise.resolve({
choices: [{ message: { content: JSON.stringify(validReport) } }],
}),
});
const adapter = new ChatCompletionsReportGeneratorAdapter(makeEnv('sk-test'));
const result = await adapter.generateReport({
transcript: [
{ role: 'assistant', text: 'Hello, how are you?' },
{ role: 'user', text: 'I feel stuck at work.' },
],
});
expect(fetchMock).toHaveBeenCalledWith(
'https://api.openai.com/v1/chat/completions',
expect.objectContaining({
method: 'POST',
headers: expect.objectContaining({ Authorization: 'Bearer sk-test' }),
}),
);
const body = JSON.parse(
(fetchMock.mock.calls[0][1] as RequestInit).body as string,
) as Record<string, unknown>;
expect(body.model).toBe('gpt-4o-2024-08-06');
expect(body.response_format).toMatchObject({ type: 'json_schema' });
expect(result).toEqual(validReport);
});
it('should use custom base URL when provided', async () => {
fetchMock.mockResolvedValue({
ok: true,
json: () =>
Promise.resolve({
choices: [{ message: { content: JSON.stringify(validReport) } }],
}),
});
const adapter = new ChatCompletionsReportGeneratorAdapter(
makeEnv('sk-test', 'https://my-proxy.example.com'),
);
await adapter.generateReport({ transcript: [] });
expect(fetchMock).toHaveBeenCalledWith(
'https://my-proxy.example.com/v1/chat/completions',
expect.any(Object),
);
});
it('should throw when OPENAI_API_KEY is missing', async () => {
const adapter = new ChatCompletionsReportGeneratorAdapter(makeEnv(undefined));
await expect(
adapter.generateReport({ transcript: [] }),
).rejects.toThrow('OPENAI_API_KEY is not configured');
});
it('should throw when OpenAI returns non-OK response', async () => {
fetchMock.mockResolvedValue({ ok: false, status: 500 });
const adapter = new ChatCompletionsReportGeneratorAdapter(makeEnv('sk-test'));
await expect(
adapter.generateReport({ transcript: [] }),
).rejects.toThrow('Chat Completions returned 500');
});
it('should throw when response has no choices', async () => {
fetchMock.mockResolvedValue({
ok: true,
json: () => Promise.resolve({ choices: [] }),
});
const adapter = new ChatCompletionsReportGeneratorAdapter(makeEnv('sk-test'));
await expect(
adapter.generateReport({ transcript: [] }),
).rejects.toThrow('Chat Completions returned empty choices');
});
it('should format transcript entries into readable text in the request', async () => {
fetchMock.mockResolvedValue({
ok: true,
json: () =>
Promise.resolve({
choices: [{ message: { content: JSON.stringify(validReport) } }],
}),
});
const adapter = new ChatCompletionsReportGeneratorAdapter(makeEnv('sk-test'));
await adapter.generateReport({
transcript: [
{ role: 'assistant', text: 'Tell me about your week.' },
{ role: 'user', text: 'It was challenging.' },
],
});
const body = JSON.parse(
(fetchMock.mock.calls[0][1] as RequestInit).body as string,
) as { messages: Array<{ role: string; content: string }> };
const userMessage = body.messages.find((m) => m.role === 'user');
expect(userMessage?.content).toContain('Coach (Brun-E): Tell me about your week.');
expect(userMessage?.content).toContain('User: It was challenging.');
});
});
- [ ] Step 7.2: Run tests to verify they fail
Expected: FAIL — module doesn't exist yet.
- [ ] Step 7.3: Create the port interface
Create src/modules/brun-e/application/ports/session_report_generator.port.ts:
export const SESSION_REPORT_GENERATOR = Symbol('SESSION_REPORT_GENERATOR');
export interface TranscriptEntry {
role: 'user' | 'assistant';
text: string;
}
export interface ReportGenerationContext {
transcript: TranscriptEntry[];
}
export interface ISessionReportGenerator {
generateReport(
context: ReportGenerationContext,
): Promise<Record<string, unknown>>;
}
- [ ] Step 7.4: Export the new port from `application/ports/index.ts`
Add to src/modules/brun-e/application/ports/index.ts:
export * from './session_report_generator.port';
- [ ] Step 7.5: Create the Chat Completions adapter
Create src/modules/brun-e/infrastructure/adapters/chat_completions_report_generator.adapter.ts:
import { Injectable } from '@nestjs/common';
import { EnvironmentService } from '@config/environment/environment.service';
import type {
ISessionReportGenerator,
ReportGenerationContext,
} from '@BrunE/application/ports/session_report_generator.port';
const EMAP_DIM_SCHEMA = {
type: 'object',
additionalProperties: false,
required: ['score', 'label', 'insight'],
properties: {
score: { type: 'number' },
label: { type: 'string' },
insight: { type: 'string' },
},
} as const;
const FINAL_REPORT_JSON_SCHEMA = {
type: 'object',
additionalProperties: false,
required: [
'title',
'summary',
'emap_projection',
'analysis_raw',
'insights',
'recommendations',
'alerts',
'generalNote',
'confidence',
'meta',
],
properties: {
title: { type: 'string' },
summary: { type: 'string' },
emap_projection: {
type: 'object',
additionalProperties: false,
required: ['affective', 'effective', 'perspective'],
properties: {
affective: EMAP_DIM_SCHEMA,
effective: EMAP_DIM_SCHEMA,
perspective: EMAP_DIM_SCHEMA,
},
},
analysis_raw: { type: 'string' },
insights: { type: 'array', items: { type: 'string' } },
recommendations: { type: 'array', items: { type: 'string' } },
alerts: { type: 'array', items: { type: 'string' } },
generalNote: { type: 'string' },
confidence: { type: 'number' },
meta: {
type: 'object',
additionalProperties: false,
required: ['schema_version', 'generated_at'],
properties: {
schema_version: { type: 'string' },
generated_at: { type: 'string' },
},
},
},
};
const SYSTEM_PROMPT = `You are analyzing a completed Brun-E voice coaching session.
Generate a final E-MAP assessment report based strictly on the conversation transcript provided.
E-MAP framework dimensions:
- affective: emotional engagement, trust, and authenticity (0.0=absent, 1.0=very strong)
- effective: operational focus, goal-setting capacity, and concrete planning (0.0=absent, 1.0=very strong)
- perspective: ability to reframe, see multiple viewpoints, and adapt (0.0=absent, 1.0=very strong)
Labels: score >= 0.7 → "High", 0.4–0.69 → "Medium", < 0.4 → "Low"
For meta.generated_at use the current UTC ISO 8601 timestamp.
For meta.schema_version use "2".
Base all scores and insights on what was actually discussed in the conversation.
If the transcript is short or empty, still produce a complete report with lower confidence.`;
@Injectable()
export class ChatCompletionsReportGeneratorAdapter
implements ISessionReportGenerator
{
constructor(private readonly environment: EnvironmentService) {}
async generateReport(
context: ReportGenerationContext,
): Promise<Record<string, unknown>> {
const apiKey = this.environment.get<string | undefined>('OPENAI_API_KEY');
if (!apiKey) {
throw new Error('OPENAI_API_KEY is not configured');
}
const baseUrl =
this.environment.get<string | undefined>('BRUNE_OPENAI_BASE_URL') ??
'https://api.openai.com';
const userContent = this.buildTranscriptContent(context);
const response = await fetch(`${baseUrl}/v1/chat/completions`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${apiKey}`,
},
body: JSON.stringify({
model: 'gpt-4o-2024-08-06',
response_format: {
type: 'json_schema',
json_schema: {
name: 'final_report',
strict: true,
schema: FINAL_REPORT_JSON_SCHEMA,
},
},
messages: [
{ role: 'system', content: SYSTEM_PROMPT },
{ role: 'user', content: userContent },
],
}),
});
if (!response.ok) {
throw new Error(`Chat Completions returned ${response.status}`);
}
type CompletionsResponse = {
choices: Array<{ message: { content: string } }>;
};
const data = (await response.json()) as CompletionsResponse;
if (!data.choices || data.choices.length === 0) {
throw new Error('Chat Completions returned empty choices');
}
return JSON.parse(data.choices[0].message.content) as Record<
string,
unknown
>;
}
private buildTranscriptContent(context: ReportGenerationContext): string {
if (context.transcript.length === 0) {
return 'No transcript available. Generate a report with low confidence based on the coaching context.';
}
const lines: string[] = ['CONVERSATION TRANSCRIPT:', ''];
for (const entry of context.transcript) {
const speaker =
entry.role === 'assistant' ? 'Coach (Brun-E)' : 'User';
lines.push(`${speaker}: ${entry.text}`);
}
return lines.join('\n');
}
}
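The label thresholds stated in SYSTEM_PROMPT can be sketched as a small helper. This is a hypothetical illustration, not part of the plan's code; `labelForScore` and `EmapLabel` are names introduced here for clarity only.

```typescript
// Hypothetical helper mirroring the SYSTEM_PROMPT thresholds so the
// boundary behavior is explicit (0.7 inclusive for High, 0.4 inclusive
// for Medium, everything below 0.4 is Low).
type EmapLabel = 'High' | 'Medium' | 'Low';

function labelForScore(score: number): EmapLabel {
  if (score >= 0.7) return 'High';
  if (score >= 0.4) return 'Medium';
  return 'Low';
}
```

Because the model applies these thresholds itself (Chat Completions strict mode constrains shape, not arithmetic), a helper like this would only be needed if the backend ever re-derived labels from scores.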
- [ ] Step 7.6: Run adapter tests to verify all pass
Expected: All 6 tests pass.
- [ ] Step 7.7: Commit
cd e-training-back
git add src/modules/brun-e/application/ports/session_report_generator.port.ts
git add src/modules/brun-e/application/ports/index.ts
git add src/modules/brun-e/infrastructure/adapters/chat_completions_report_generator.adapter.ts
git add src/modules/brun-e/infrastructure/adapters/chat_completions_report_generator.adapter.spec.ts
git commit -m "feat(brun-e): add ISessionReportGenerator port and ChatCompletions adapter with strict JSON schema"
Task 8: Wire report generator into CompleteBrunESessionHandler
Files:
- Modify: src/modules/brun-e/application/commands/complete_session/complete_brun_e_session.handler.ts
- Modify: src/modules/brun-e/application/commands/complete_session/complete_brun_e_session.handler.spec.ts
- Modify: src/modules/brun-e/brun_e.module.ts
- [ ] Step 8.1: Write the failing handler tests
Add the following tests to complete_brun_e_session.handler.spec.ts. First, add imports:
import {
SESSION_REPORT_GENERATOR,
type ISessionReportGenerator,
} from '@BrunE/application/ports/session_report_generator.port';
Then add a reportGenerator mock variable to the describe scope:
let reportGenerator: { generateReport: jest.Mock };
Update beforeEach to create the mock and register it as a provider:
reportGenerator = {
generateReport: jest.fn(),
};
const module: TestingModule = await Test.createTestingModule({
providers: [
CompleteBrunESessionHandler,
{ provide: I18nService, useValue: { t: jest.fn().mockReturnValue('msg') } },
{ provide: UnitOfWorkService, useValue: unitOfWork },
{ provide: BRUN_E_SESSION_REPOSITORY, useValue: sessionRepository },
{ provide: SESSION_COMPLETION_PUBLISHER, useValue: completionPublisher },
{ provide: SESSION_REPORT_GENERATOR, useValue: reportGenerator },
],
}).compile();
Then add new test cases:
it('should generate report via Chat Completions when transcript is provided and no sideband report', async () => {
const generatedReport = {
title: 'Generated Session',
summary: 'AI-generated summary.',
emap_projection: {
affective: { score: 0.7, label: 'High', insight: 'Engaged.' },
effective: { score: 0.6, label: 'Medium', insight: 'Focused.' },
perspective: { score: 0.8, label: 'High', insight: 'Open.' },
},
analysis_raw: 'Extended analysis.',
insights: ['Insight'],
recommendations: ['Action'],
alerts: [],
generalNote: 'Good.',
confidence: 0.75,
meta: { schema_version: '2', generated_at: '2026-04-07T10:00:00.000Z' },
};
reportGenerator.generateReport.mockResolvedValue(generatedReport);
const result = await handler.execute(
new CompleteBrunESessionCommand(
sessionId,
userId,
organizationId,
BrunEClosedReason.USER,
undefined,
[{ role: 'user', text: 'Hello' }],
),
);
expect(reportGenerator.generateReport).toHaveBeenCalledWith({
transcript: [{ role: 'user', text: 'Hello' }],
});
expect(result).toMatchObject({
completed: true,
already_completed: false,
final_report: generatedReport,
});
});
it('should complete session without report when Chat Completions throws', async () => {
reportGenerator.generateReport.mockRejectedValue(new Error('API error'));
const result = await handler.execute(
new CompleteBrunESessionCommand(
sessionId,
userId,
organizationId,
BrunEClosedReason.USER,
undefined,
[{ role: 'user', text: 'Hello' }],
),
);
expect(result).toMatchObject({
completed: true,
already_completed: false,
final_report: null,
});
expect(sessionRepository.save).toHaveBeenCalledTimes(1);
expect(completionPublisher.publishCompletion).toHaveBeenCalledTimes(1);
});
it('should not call report generator when sideband already provided a report', async () => {
const sidebandReport = { title: 'From sideband', emap_projection: {} };
await handler.execute(
new CompleteBrunESessionCommand(
sessionId,
userId,
organizationId,
BrunEClosedReason.USER,
sidebandReport,
[{ role: 'user', text: 'Hello' }],
),
);
expect(reportGenerator.generateReport).not.toHaveBeenCalled();
});
it('should not call report generator when no transcript is provided', async () => {
await handler.execute(
new CompleteBrunESessionCommand(sessionId, userId, organizationId),
);
expect(reportGenerator.generateReport).not.toHaveBeenCalled();
});
- [ ] Step 8.2: Run handler tests to verify new tests fail
Expected: The 4 new tests FAIL (handler doesn't inject generator yet).
- [ ] Step 8.3: Update complete_brun_e_session.handler.ts to inject and use the report generator
Replace the entire file:
import { CommandHandler, ICommandHandler } from '@nestjs/cqrs';
import { Inject } from '@nestjs/common';
import { I18nService } from 'nestjs-i18n';
import { UnitOfWorkService } from '@config/db/unit_of_work.service';
import { CompleteBrunESessionCommand } from './complete_brun_e_session.command';
import { CompleteBrunESessionDTO } from '@BrunE/application/dtos/complete_brun_e_session.dto';
import { BRUN_E_SESSION_REPOSITORY } from '@BrunE/infrastructure/db/brun_e_session.repository';
import type { IBrunESessionRepository } from '@BrunE/infrastructure/db/brun_e_session.repository';
import { SESSION_COMPLETION_PUBLISHER } from '@BrunE/application/ports/session_completion_publisher.port';
import type { ISessionCompletionPublisher } from '@BrunE/application/ports/session_completion_publisher.port';
import { SESSION_REPORT_GENERATOR } from '@BrunE/application/ports/session_report_generator.port';
import type { ISessionReportGenerator } from '@BrunE/application/ports/session_report_generator.port';
import { BrunESessionNotFoundException } from '@BrunE/infrastructure/exceptions/brun_e_session_not_found.exception';
import { BrunEForbiddenSessionOwnerException } from '@BrunE/infrastructure/exceptions/brun_e_forbidden_session_owner.exception';
import { BrunEClosedReason } from '@BrunE/domain/enums/brun_e_closed_reason.enum';
import { BrunESessionMapper } from '@BrunE/application/mappers/brun_e_session.mapper';
import { brunEMetrics } from '@BrunE/infrastructure/services/brun_e_metrics.registry';
@CommandHandler(CompleteBrunESessionCommand)
export class CompleteBrunESessionHandler
implements ICommandHandler<CompleteBrunESessionCommand, CompleteBrunESessionDTO>
{
constructor(
private readonly i18n: I18nService,
private readonly unitOfWork: UnitOfWorkService,
@Inject(BRUN_E_SESSION_REPOSITORY)
private readonly sessionRepository: IBrunESessionRepository,
@Inject(SESSION_COMPLETION_PUBLISHER)
private readonly completionPublisher: ISessionCompletionPublisher,
@Inject(SESSION_REPORT_GENERATOR)
private readonly reportGenerator: ISessionReportGenerator,
) {}
async execute(
command: CompleteBrunESessionCommand,
): Promise<CompleteBrunESessionDTO> {
const { sessionId, userId, organizationId, reason, report, transcript } =
command;
// Generate report via Chat Completions before the DB transaction.
// Only attempted when no sideband report was provided and a transcript exists.
let effectiveReport = report;
if (!effectiveReport && transcript && transcript.length > 0) {
try {
effectiveReport = await this.reportGenerator.generateReport({
transcript,
});
} catch {
// best-effort — session still completes, report stays null
}
}
return this.unitOfWork.execute(async () => {
const session = await this.sessionRepository.findByIdForUpdate(sessionId);
if (!session) {
brunEMetrics.increment('brune_complete_session_total', {
status: 'not_found',
});
throw new BrunESessionNotFoundException(this.i18n);
}
if (
session.getUserId().toString() !== userId ||
session.getOrganizationId().toString() !== organizationId
) {
brunEMetrics.increment('brune_complete_session_total', {
status: 'forbidden',
});
throw new BrunEForbiddenSessionOwnerException(this.i18n);
}
if (session.isTerminal()) {
brunEMetrics.increment('brune_complete_session_total', {
status: 'already_completed',
});
return BrunESessionMapper.toCompleteDTO(session, true);
}
session.complete(reason ?? BrunEClosedReason.USER, effectiveReport);
await this.sessionRepository.save(session);
await this.completionPublisher.publishCompletion(session);
brunEMetrics.increment('brune_complete_session_total', {
status: 'completed',
});
return BrunESessionMapper.toCompleteDTO(session, false);
});
}
}
- [ ] Step 8.4: Run handler tests to verify all pass
Expected: All tests pass (4 existing + 4 new).
- [ ] Step 8.5: Register the new adapter in brun_e.module.ts
Add the import at the top:
import { SESSION_REPORT_GENERATOR } from './application/ports/session_report_generator.port';
import { ChatCompletionsReportGeneratorAdapter } from './infrastructure/adapters/chat_completions_report_generator.adapter';
Add to the providers array (alongside the other provide/useClass registrations):
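A sketch of the registration, assuming the module uses the same provide/useClass pattern as the other port registrations (the surrounding providers array is elided):

```typescript
// In brun_e.module.ts — assumed shape, matching the existing
// provide/useClass registrations for the other ports.
{
  provide: SESSION_REPORT_GENERATOR,
  useClass: ChatCompletionsReportGeneratorAdapter,
},
```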
- [ ] Step 8.6: Run the full Brun-E test suite to verify no regressions
Expected: All tests pass.
- [ ] Step 8.7: Commit
cd e-training-back
git add src/modules/brun-e/application/commands/complete_session/complete_brun_e_session.handler.ts
git add src/modules/brun-e/application/commands/complete_session/complete_brun_e_session.handler.spec.ts
git add src/modules/brun-e/brun_e.module.ts
git commit -m "feat(brun-e): wire Chat Completions report generator as fallback in complete session handler"
Final Verification
- [ ] Run full backend test suite
Expected: All tests pass, no regressions.
- [ ] Run frontend tests
Expected: All tests pass.
Self-Review
Spec Coverage
| Root Cause | Task |
|---|---|
| RC-1: end_session schema empty + report optional | Task 1 |
| RC-2: Realtime API no strict mode | Mitigated by Task 1 (detailed schema guides the model) + Task 7 (Chat Completions fallback with strict mode) |
| RC-3: System instructions vague | Task 2 |
| RC-4: Parser silently discards partial reports | Task 3 |
| RC-5: User-initiated close has no report path | Tasks 4–8 (transcript + Chat Completions fallback) |
Type Consistency
- TranscriptEntry is defined in session_report_generator.port.ts (backend) and brun-e-session.ts (frontend). Both use { role: 'user' | 'assistant'; text: string }: structurally identical, intentionally kept separate to avoid cross-repo coupling.
- The 6th parameter of CompleteBrunESessionCommand, transcript, is TranscriptEntry[], imported from complete_brun_e_session.command.ts and used consistently by the controller (which passes body?.transcript) and the tests.
- ChatCompletionsReportGeneratorAdapter implements ISessionReportGenerator; the port symbol SESSION_REPORT_GENERATOR is registered in the module and injected into the handler via the same symbol.
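The shared contract can be sketched as follows. This is an assumed shape inferred from how the plan uses the port (the actual session_report_generator.port.ts is authored in an earlier task); the Symbol-based injection token is an assumption consistent with the handler's @Inject usage.

```typescript
// Assumed shape of the port contract described above; a sketch,
// not a copy of session_report_generator.port.ts.
export type TranscriptEntry = { role: 'user' | 'assistant'; text: string };

export type ReportGenerationContext = { transcript: TranscriptEntry[] };

export interface ISessionReportGenerator {
  generateReport(context: ReportGenerationContext): Promise<Record<string, unknown>>;
}

// Injection token used by @Inject(SESSION_REPORT_GENERATOR) in the handler.
export const SESSION_REPORT_GENERATOR = Symbol('SESSION_REPORT_GENERATOR');
```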
No Placeholders
All steps contain complete code. No TBDs.