Frontend Streaming Pipeline Guardrails
Quality enforcement in the frontend streaming pipeline prevents race conditions, duplicate requests, duplicate products across turns, and inaccurate timing measurements.
Purpose
Frontend guardrails ensure:
- Clean state before each request
- No duplicate products across requests
- Accurate timing measurements
- Proper request sequencing
- Optimistic safety against concurrent submissions
State Reset & Timers
Location: app/chat/hooks/useMessageStreaming/pipeline.ts:55-140
prepareRequestState
Actions:
- Clear all errors and metrics
- Reset loading/waiting flags
- Timestamp the user message (prevents message-ordering bugs)
- Timestamp assistant placeholder
- Start performance timers
- Clear product cards
- Reset thinking indicators
Why: Prevents stale state from previous requests bleeding into new ones
Implementation:
function prepareRequestState(): void {
  // Clear errors
  setError(null);

  // Reset metrics
  setContextExtractionTimeMs(null);
  setSearchDurationMs(null);
  setRerankTimeMs(null);

  // Set flags
  setIsLoading(true);
  setIsWaitingForResponse(true);

  // Clear UI
  setProductCards([]);
  setShowThinking(false);

  // Start timers
  const requestStartTime = Date.now();
  setRequestStartTime(requestStartTime);
}
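For orientation, here is a minimal sketch of how a streaming entry point might invoke this reset before sending the request. buildPayload and handleError are covered later in this doc; sendStreamingRequest is a placeholder name for the actual transport call, not the real function:

async function streamMessage(input: string): Promise<void> {
  // Reset first so no stale errors, metrics, or product cards survive
  prepareRequestState();
  try {
    const payload = buildPayload(input);
    await sendStreamingRequest(payload); // placeholder for the streaming transport
  } catch (error) {
    handleError(error as Error);
  } finally {
    setIsLoading(false);
    setIsWaitingForResponse(false);
  }
}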
Input Deduplication
Location: pipeline.ts:142-200
buildPayload
Deduplication Strategy:
// 1. Track all shown products
const shownProductIds = new Set<string>();

// 2. Add to payload
payload.excludeProductIds = Array.from(shownProductIds);

// 3. Backend filters (conceptually):
//    WHERE product.id NOT IN (:excludeProductIds)

// 4. Update on response
newProducts.forEach(p => shownProductIds.add(p.id));
Benefit: Zero duplicate products even across multiple "show more" turns
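A condensed sketch of the payload construction, assuming a simple request shape (the ChatRequestPayload interface and message field are illustrative, not the exact pipeline types):

// Assumed request shape for illustration
interface ChatRequestPayload {
  message: string;
  excludeProductIds: string[];
  metadata?: Record<string, unknown>;
}

function buildPayload(userMessage: string): ChatRequestPayload {
  // Snapshot the exclude list at build time so products arriving later
  // cannot mutate a request that is already in flight (see Case 2 below)
  return {
    message: userMessage,
    excludeProductIds: Array.from(shownProductIds)
  };
}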
"Show More" Annotation
if (isShowMoreMessage(userMessage)) {
  payload.metadata = {
    ...payload.metadata,
    isShowMore: true, // Debug flag for backend
    previousProductCount: shownProductIds.size
  };
}
Why: Helps backend debugging and analytics
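isShowMoreMessage is referenced above but not shown here; a plausible sketch is a simple phrase match. The pattern list (including the Estonian "näita rohkem" used in the tests below) is an assumption:

// Hypothetical detector for "show more" style follow-ups
const SHOW_MORE_PATTERNS = [/show more/i, /more options/i, /näita rohkem/i];

function isShowMoreMessage(userMessage: string): boolean {
  const text = userMessage.trim();
  return SHOW_MORE_PATTERNS.some(pattern => pattern.test(text));
}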
Optimistic Safety
Concurrent Request Prevention
const [isWaitingForResponse, setIsWaitingForResponse] = useState(false);

async function handleSubmit() {
  if (isWaitingForResponse) {
    console.warn('Request already in progress, blocking duplicate');
    return; // Block submission
  }
  setIsWaitingForResponse(true);
  try {
    await streamMessage(input);
  } finally {
    setIsWaitingForResponse(false);
  }
}
UI Indication:
<input
  disabled={isWaitingForResponse}
  placeholder={isWaitingForResponse ? 'Waiting...' : 'Type message'}
/>
Double Protection: Even if a duplicate submission slips through (race condition), the backend exclude filter still prevents duplicate products.
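Because React state updates are asynchronous, the isWaitingForResponse check can briefly lag a submission made in the same tick. One optional hardening, sketched here as an assumption rather than the pipeline's actual code, is to pair the state flag with a ref that updates synchronously:

import { useRef } from 'react';

const inFlightRef = useRef(false); // synchronous guard, immune to render timing

async function handleSubmit(): Promise<void> {
  if (inFlightRef.current) {
    console.warn('Request already in progress, blocking duplicate');
    return;
  }
  inFlightRef.current = true;
  setIsWaitingForResponse(true); // still drives the disabled input state
  try {
    await streamMessage(input);
  } finally {
    inFlightRef.current = false;
    setIsWaitingForResponse(false);
  }
}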
Latency Instrumentation
Timing Capture Points
Tracked Timings:
{
  requestStartTime: number,     // T0
  firstEventTime: number,       // T1
  productsArrivedTime: number,  // T2
  streamCompleteTime: number,   // T3

  // Derived
  ttfcMs: T1 - T0,              // Time to First Chunk
  totalDurationMs: T3 - T0      // Total request time
}
Correlation with Backend:
// Frontend observed
frontendTTFC: 950ms

// Backend reported (from perf event)
backendContextMs: 900ms
backendSearchMs: 200ms

// Correlation: the first chunk is emitted after context extraction,
// so frontendTTFC ≈ backendContextMs + network latency (~50ms here)
Use Case: Identify whether slowness is in backend processing or network transfer
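A small sketch of how the capture points can be turned into the derived metrics; the TimingCapture shape mirrors the object above, and the optional fields simply mean that event has not arrived yet:

interface TimingCapture {
  requestStartTime: number;      // T0 - set in prepareRequestState
  firstEventTime?: number;       // T1 - first streamed event observed
  productsArrivedTime?: number;  // T2
  streamCompleteTime?: number;   // T3
}

function deriveTimings(t: TimingCapture) {
  const ttfcMs = t.firstEventTime !== undefined
    ? t.firstEventTime - t.requestStartTime
    : null;
  const totalDurationMs = t.streamCompleteTime !== undefined
    ? t.streamCompleteTime - t.requestStartTime
    : null;
  return { ttfcMs, totalDurationMs };
}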
Deduplication Edge Cases
Case 1: Rapid "Show More" Clicks
Scenario:
User clicks "Show More" button twice quickly
Request A: excludeIds = [1,2,3]
Request B: excludeIds = [1,2,3] (same - stale!)
Protection:
// UI blocks second click
if (isWaitingForResponse) {
  return; // First request must complete
}
Case 2: Products Arrive During New Request
Scenario:
Request A streaming → User submits Request B → A completes
Protection:
// Each request gets snapshot of excludeIds at build time
const excludeSnapshot = Array.from(shownProductIds);
// Later updates don't affect already-sent request
Case 3: Frontend State Out of Sync
Scenario:
Frontend: shownProductIds = [1,2,3]
Backend: storedContext.shownProductIds = [1,2,3,4,5]
Protection:
// Backend merges both sources
const merged = Array.from(new Set([
  ...clientExcludeIds, // [1,2,3]
  ...storedExcludeIds  // [1,2,3,4,5]
]));
// Result: [1,2,3,4,5] - no duplicates possible
Performance Monitoring
Frame Rate Tracking
let lastFrameTime = performance.now();
let frameCount = 0;

function trackFPS() {
  const now = performance.now();
  const delta = now - lastFrameTime;
  if (delta >= 1000) {
    const fps = Math.round((frameCount * 1000) / delta);
    if (fps < 60) {
      console.warn('Low FPS detected:', { fps, delta });
    }
    frameCount = 0;
    lastFrameTime = now;
  }
  frameCount++;
  requestAnimationFrame(trackFPS);
}

// The loop must be kicked off once with requestAnimationFrame(trackFPS);
// see the scoped start/stop sketch below.
Target: Maintain 60 FPS (no visible frame drops) during streaming
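How the tracker is started and stopped is not shown above. One way to scope the measurement to the streaming window, sketched here as an assumption, is a self-contained cancellable variant of the same loop:

let fpsHandle: number | null = null;

function startFPSTracking(): void {
  let last = performance.now();
  let frames = 0;

  const loop = (now: number) => {
    frames++;
    if (now - last >= 1000) {
      const fps = Math.round((frames * 1000) / (now - last));
      if (fps < 60) {
        console.warn('Low FPS during streaming:', { fps });
      }
      frames = 0;
      last = now;
    }
    fpsHandle = requestAnimationFrame(loop);
  };
  fpsHandle = requestAnimationFrame(loop);
}

function stopFPSTracking(): void {
  if (fpsHandle !== null) {
    cancelAnimationFrame(fpsHandle);
    fpsHandle = null;
  }
}

Call startFPSTracking() when streaming begins and stopFPSTracking() when the stream completes or errors out.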
Error States
User-Facing Error Messages
const ERROR_MESSAGES = {
  NETWORK: 'Connection lost. Please check your internet.',
  TIMEOUT: 'Request timed out. Server might be busy.',
  DUPLICATE: 'This request was already sent.',
  VALIDATION: 'Invalid input. Please check your message.'
};
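categorizeError and isRetryable, used by the recovery handler below, are not defined in this doc; a plausible sketch that maps onto the message keys above (the matching rules themselves are assumptions):

type ErrorType = keyof typeof ERROR_MESSAGES;

function categorizeError(error: Error): ErrorType {
  if (error.name === 'AbortError' || /timeout/i.test(error.message)) return 'TIMEOUT';
  if (/network|fetch|connection/i.test(error.message)) return 'NETWORK';
  if (/duplicate|in progress/i.test(error.message)) return 'DUPLICATE';
  return 'VALIDATION'; // fallback; real code may have a dedicated unknown type
}

function isRetryable(error: Error): boolean {
  const type = categorizeError(error);
  return type === 'NETWORK' || type === 'TIMEOUT';
}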
Error Recovery
function handleError(error: Error): void {
  // 1. Log for debugging
  console.error('Pipeline error:', error);

  // 2. Show user-friendly message
  const type = categorizeError(error);
  setError({
    type,
    message: ERROR_MESSAGES[type],
    canRetry: isRetryable(error)
  });

  // 3. Reset state for retry
  setIsLoading(false);
  setIsWaitingForResponse(false);

  // 4. Keep partial data if valuable
  if (hasPartialResponse) {
    // Don't clear accumulated text
  }
}
Testing
Critical Test Cases
describe('Frontend Guardrails', () => {
  it('blocks concurrent requests', async () => {
    const promise1 = submitMessage('test 1');
    const promise2 = submitMessage('test 2');

    // Second should be blocked
    await expect(promise2).rejects.toThrow('Request in progress');
  });

  it('deduplicates products', async () => {
    await submitMessage('raamatud');
    const products1 = getProductIds();

    await submitMessage('näita rohkem');
    const products2 = getProductIds();

    const all = [...products1, ...products2];
    const unique = new Set(all);
    expect(unique.size).toBe(all.length); // No duplicates
  });

  it('captures accurate TTFC', async () => {
    const start = Date.now();
    await submitMessage('test');
    const ttfc = getTTFC();

    expect(ttfc).toBeGreaterThan(0);
    expect(ttfc).toBeLessThan(2000);
  });
});
Monitoring Metrics
{
  // Request metrics
  requestsInFlight: number,     // Should be 0 or 1
  duplicatesPrevented: number,  // Concurrent blocks
  stateResetCount: number,      // Per request

  // Timing metrics
  avgTTFC: number,              // Frontend observed
  avgTotal: number,             // Request start to done

  // Error metrics
  networkErrors: number,
  timeouts: number,
  validationErrors: number
}
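A minimal sketch of how these counters could be accumulated client-side and flushed; the /api/metrics endpoint and the exact field set are assumptions, not the project's analytics contract:

const metrics = {
  requestsInFlight: 0,
  duplicatesPrevented: 0,
  stateResetCount: 0,
  ttfcSamplesMs: [] as number[],
  networkErrors: 0,
  timeouts: 0,
  validationErrors: 0
};

function flushMetrics(): void {
  const avgTTFC = metrics.ttfcSamplesMs.length > 0
    ? metrics.ttfcSamplesMs.reduce((sum, ms) => sum + ms, 0) / metrics.ttfcSamplesMs.length
    : null;
  // sendBeacon survives page unload; '/api/metrics' is a placeholder endpoint
  navigator.sendBeacon('/api/metrics', JSON.stringify({ ...metrics, avgTTFC }));
  metrics.ttfcSamplesMs = [];
}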
Related Documentation
- Context Guardrails - Next phase
- Pipeline Overview - Pipeline architecture
- Testing & Observability - How we test