Last June, I wrote “The Quiet Power of Server-Sent Events for Real-Time Apps” where I sang the praises of SSE as a lightweight alternative to WebSockets for many real-time scenarios.
While the simplicity of SSE remains one of its greatest strengths, that simplicity can be deceptive. A naive implementation might work perfectly in development but crumble under the harsh realities of spotty connections, aggressive proxies, and browser quirks. Let’s dive into the techniques that transform fragile SSE connections into resilient data pipelines.
Setting the Foundation with Proper Headers
Every robust SSE implementation begins with proper HTTP headers. They’re critical instructions to browsers and intermediaries about how to handle your connection.
// Express.js example
app.get('/events', (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  // Prevents Nginx from buffering the response
  res.setHeader('X-Accel-Buffering', 'no');

  // Additional cross-origin permissions if needed
  res.setHeader('Access-Control-Allow-Origin', '*');

  // ... rest of your SSE implementation
});
Or if you’re using Hono with Bun (which has gained significant traction over the past year):
import { Hono } from 'hono'
import { streamSSE } from 'hono/streaming'

const app = new Hono()

app.get('/events', (c) => {
  return streamSSE(c, async (stream) => {
    // streamSSE automatically sets the proper Content-Type header
    // The rest of your event stream logic goes here
  })
})
The Content-Type: text/event-stream header is absolutely mandatory: it's what tells the browser to treat this as an SSE connection. But equally important are Cache-Control: no-cache, which prevents any intermediary from caching your dynamic content, and Connection: keep-alive, which maintains the connection over time.
That X-Accel-Buffering: no header is a lifesaver when running behind Nginx, which is notorious for buffering responses unless explicitly told not to.
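If you control the Nginx configuration itself, you can also disable buffering there rather than relying solely on the header. A minimal sketch of a location block, assuming your SSE endpoint is proxied at /events and the upstream name is illustrative:

```nginx
location /events {
    proxy_pass http://app_upstream;   # illustrative upstream name
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_buffering off;              # same effect as X-Accel-Buffering: no
    proxy_cache off;
    chunked_transfer_encoding on;
}
```

The header approach has the advantage of traveling with the application, while the config approach works even for responses you don't control.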
Resilient Message Delivery with Event IDs
The real magic of robust SSE implementations comes from proper use of event IDs. Each event you send should include a unique, sequential identifier:
let eventId = 0;

function sendEvent(res, data, eventName = 'message') {
  res.write(`id: ${eventId}\n`);
  res.write(`event: ${eventName}\n`);
  res.write(`data: ${JSON.stringify(data)}\n\n`);
  eventId++;
}
When a client reconnects after a disconnection, browsers automatically send the ID of the last event they received in the Last-Event-ID header. This is your opportunity to resume the stream exactly where the client left off:
// Server implementation with reconnection support
app.get('/events', (req, res) => {
  // Set headers as shown earlier

  const lastEventId = parseInt(req.headers['last-event-id'] || '0', 10);

  // Retrieve missed events
  const missedEvents = eventStore.getEventsSince(lastEventId);

  // Send missed events first
  missedEvents.forEach(event => {
    sendEvent(res, event.data, event.type);
  });

  // Continue with regular event stream
  // ...
});
For this to work, you need some way to store recent events. A simple approach is using a fixed-size circular buffer:
class EventStore {
  constructor(capacity = 100) {
    this.events = [];
    this.capacity = capacity;
  }

  addEvent(event) {
    // Add event with auto-incrementing ID
    const id = this.events.length > 0
      ? this.events[this.events.length - 1].id + 1
      : 1;
    this.events.push({ ...event, id });

    // Trim if we exceed capacity
    if (this.events.length > this.capacity) {
      this.events.shift();
    }
    return id;
  }

  getEventsSince(id) {
    return this.events.filter(event => event.id > id);
  }
}

const eventStore = new EventStore();
This approach ensures that clients don’t miss messages during brief disconnections. The fixed-size buffer prevents memory leaks while still accommodating reasonable reconnection windows.
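To make the eviction behavior concrete, here's a standalone usage sketch with a deliberately tiny capacity (the class is repeated so the snippet runs on its own):

```javascript
// Standalone sketch of the circular-buffer behavior described above
class EventStore {
  constructor(capacity = 100) {
    this.events = [];
    this.capacity = capacity;
  }
  addEvent(event) {
    const id = this.events.length > 0
      ? this.events[this.events.length - 1].id + 1
      : 1;
    this.events.push({ ...event, id });
    if (this.events.length > this.capacity) this.events.shift();
    return id;
  }
  getEventsSince(id) {
    return this.events.filter(e => e.id > id);
  }
}

const store = new EventStore(3); // tiny capacity to show eviction
['a', 'b', 'c', 'd'].forEach(text => store.addEvent({ data: { text } })); // ids 1-4; id 1 evicted

// A client that last saw id 2 gets events 3 and 4 on reconnect
console.log(store.getEventsSince(2).map(e => e.id)); // [3, 4]
```

A client whose Last-Event-ID predates the oldest buffered event will silently miss those older events, so size the capacity to cover your longest expected reconnection window.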
Keeping Connections Alive
One of SSE’s most common failure points is intermediaries like proxies and load balancers silently closing seemingly idle connections. The solution? Regular heartbeats:
function startEventStream(req, res) {
  // Set headers...

  // Set up heartbeat
  const heartbeatInterval = setInterval(() => {
    res.write(': keepalive\n\n');
  }, 15000); // 15 seconds is a good balance

  // Clean up on client disconnect
  req.on('close', () => {
    clearInterval(heartbeatInterval);
  });

  // Rest of your SSE logic...
}
These comment lines (starting with :) are ignored by SSE clients but keep the connection active. Fifteen seconds is usually sufficient, but you may need to adjust based on your infrastructure. AWS Application Load Balancers, for instance, have a default idle timeout of 60 seconds, so a 30-second heartbeat would be appropriate there.
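To see why those keepalive lines are harmless, here's a rough sketch of how an SSE consumer processes a chunk of the stream. This is a simplification of the spec's parsing rules, not a full implementation:

```javascript
// Simplified SSE parsing: lines starting with ':' are comments and are
// dropped; 'data:' lines accumulate; a blank line dispatches the event.
function parseSSEChunk(chunk) {
  const events = [];
  let data = [];
  for (const line of chunk.split('\n')) {
    if (line.startsWith(':')) continue;        // comment / keepalive: ignored
    if (line.startsWith('data:')) {
      data.push(line.slice(5).trimStart());    // accumulate data lines
    } else if (line === '' && data.length > 0) {
      events.push(data.join('\n'));            // blank line dispatches the event
      data = [];
    }
  }
  return events;
}

const stream = ': keepalive\n\ndata: {"n":1}\n\n: keepalive\n\ndata: {"n":2}\n\n';
console.log(parseSSEChunk(stream)); // [ '{"n":1}', '{"n":2}' ]
```

The comment lines traverse the network, resetting idle timers at every hop, but never reach your application's event handlers.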
Graceful Error Handling
Both client and server need sophisticated error handling for truly robust SSE. On the server side, proper cleanup is critical:
app.get('/events', (req, res) => {
  const clientId = req.query.clientId;

  // Validate client authorization
  if (!isAuthorized(clientId)) {
    // Don't use SSE response format for errors
    return res.status(403).json({ error: 'Unauthorized' });
  }

  // Set up connection
  const clientConnection = registerClient(clientId, res);

  // Clean up on client disconnect
  req.on('close', () => {
    unregisterClient(clientId);
  });

  // Also clean up if the request stream ends normally
  req.on('end', () => {
    unregisterClient(clientId);
  });
});
On the client side, default browser reconnection behavior is often too simplistic. A more robust approach implements exponential backoff with jitter:
class RobustEventSource {
  constructor(url, options = {}) {
    this.url = url;
    this.options = options;
    this.eventSource = null;
    this.reconnectAttempt = 0;
    this.maxReconnectTime = options.maxReconnectTime || 30000;
    this.listeners = {};
    this.connect();
  }

  connect() {
    this.eventSource = new EventSource(this.url);

    this.eventSource.onopen = (event) => {
      console.log('Connection established');
      this.reconnectAttempt = 0;
    };

    this.eventSource.onerror = (event) => {
      this.handleError(event);
    };

    // Forward all events to listeners
    Object.entries(this.listeners).forEach(([type, handlers]) => {
      handlers.forEach(handler => {
        this.eventSource.addEventListener(type, handler);
      });
    });
  }

  handleError(event) {
    if (this.eventSource.readyState === EventSource.CLOSED) {
      // Connection closed, implement backoff with jitter
      const baseTime = Math.min(
        this.maxReconnectTime,
        1000 * Math.pow(2, this.reconnectAttempt)
      );

      // Add jitter (±30% of baseTime)
      const jitter = baseTime * 0.3 * (Math.random() * 2 - 1);
      const reconnectTime = baseTime + jitter;

      console.log(`Connection lost. Reconnecting in ${Math.round(reconnectTime / 1000)}s`);

      setTimeout(() => {
        this.reconnectAttempt++;
        this.connect();
      }, reconnectTime);
    }
  }

  addEventListener(type, handler) {
    if (!this.listeners[type]) {
      this.listeners[type] = [];
    }
    this.listeners[type].push(handler);
    if (this.eventSource) {
      this.eventSource.addEventListener(type, handler);
    }
  }

  close() {
    if (this.eventSource) {
      this.eventSource.close();
      this.eventSource = null;
    }
  }
}
// Usage
const events = new RobustEventSource('/events', {
  maxReconnectTime: 60000
});

events.addEventListener('message', (event) => {
  console.log('Received:', JSON.parse(event.data));
});

events.addEventListener('special', (event) => {
  console.log('Special event:', JSON.parse(event.data));
});
This implementation goes beyond the browser's native reconnection by adding exponential backoff with jitter, a strategy that prevents thundering herds when services recover from outages. The jitter ensures that multiple clients don't all attempt to reconnect simultaneously.
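The backoff math from handleError can be pulled out and sanity-checked on its own. This helper restates the same logic; for any attempt number, the delay always falls within ±30% of the capped exponential base:

```javascript
// Exponential backoff with ±30% jitter, capped at maxReconnectTime
function backoffWithJitter(attempt, maxReconnectTime = 30000) {
  const baseTime = Math.min(maxReconnectTime, 1000 * Math.pow(2, attempt));
  const jitter = baseTime * 0.3 * (Math.random() * 2 - 1); // uniform in ±30%
  return baseTime + jitter;
}

// Attempts 0, 1, 2, ... give bases of 1s, 2s, 4s, ... capped at 30s,
// each smeared across a ±30% window so clients don't retry in lockstep.
for (let attempt = 0; attempt < 6; attempt++) {
  console.log(`attempt ${attempt}: ~${Math.round(backoffWithJitter(attempt))}ms`);
}
```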
Scaling Considerations
Browser connection limits can become a bottleneck as your application scales. Most browsers cap simultaneous HTTP/1.1 connections at around six to eight per domain. With HTTP/2 this is less of an issue, but if you're supporting older clients or if HTTP/2 isn't an option, consider multiplexing multiple logical streams over a single SSE connection:
// Server: Multiplex multiple event types
function broadcastToUser(userId, eventType, data) {
  const userConnection = activeConnections[userId];
  if (userConnection) {
    sendEvent(userConnection, {
      type: eventType,
      payload: data
    }, 'multiplex');
  }
}

// Client: Demultiplex events
events.addEventListener('multiplex', (event) => {
  const { type, payload } = JSON.parse(event.data);

  // Route to appropriate handler
  switch (type) {
    case 'notification':
      handleNotification(payload);
      break;
    case 'chat':
      handleChatMessage(payload);
      break;
    case 'status':
      handleStatusUpdate(payload);
      break;
  }
});
This pattern lets you send notifications, chat messages, and status updates all over a single connection, dramatically reducing the connection overhead.
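The switch statement works, but as the number of logical streams grows, a small handler registry keeps the demultiplexing code flat. A possible sketch (the handler names and wiring are illustrative):

```javascript
// A tiny demultiplexer: register one handler per logical stream type,
// then feed it the parsed payload of each 'multiplex' event.
function createDemultiplexer() {
  const handlers = new Map();
  return {
    on(type, handler) {
      handlers.set(type, handler);
      return this; // chainable registration
    },
    dispatch({ type, payload }) {
      const handler = handlers.get(type);
      if (handler) handler(payload);
      else console.warn(`No handler for multiplexed type "${type}"`);
    },
  };
}

const demux = createDemultiplexer()
  .on('notification', (p) => console.log('notification:', p))
  .on('chat', (p) => console.log('chat:', p));

// Wired into the SSE listener it would look like:
// events.addEventListener('multiplex', (e) => demux.dispatch(JSON.parse(e.data)));
demux.dispatch({ type: 'chat', payload: { text: 'hi' } });
```

Unknown types get a warning rather than a silent drop, which makes schema drift between server and client easier to spot.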
A Real-World Integration Example
Let’s put it all together with a practical example. Here’s how you might implement a robust SSE system for streaming AI-generated responses (a common use case these days):
// Server (using Express and an AI client library)
import express from 'express';
import { AIClient } from 'some-ai-library';

const app = express();
const aiClient = new AIClient({ apiKey: process.env.AI_API_KEY });
const eventStore = new EventStore(50); // Store last 50 events

app.get('/ai-stream/:conversationId', async (req, res) => {
  const { conversationId } = req.params;
  const lastEventId = parseInt(req.headers['last-event-id'] || '0', 10);

  // Set SSE headers
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.setHeader('X-Accel-Buffering', 'no');

  // Heartbeat interval
  const heartbeat = setInterval(() => {
    res.write(': keepalive\n\n');
  }, 15000);

  // Clean up on close
  req.on('close', () => {
    clearInterval(heartbeat);
  });

  try {
    // Send any missed events
    const missedEvents = eventStore.getEventsSince(lastEventId);
    for (const event of missedEvents) {
      sendEvent(res, event.data, event.type, event.id);
    }

    // Get the conversation context
    const conversation = await getConversation(conversationId);

    // Stream the AI response
    const stream = await aiClient.generateStreamingResponse(conversation.messages);

    let currentId = lastEventId;
    for await (const chunk of stream) {
      currentId = eventStore.addEvent({
        type: 'chunk',
        data: { text: chunk.text }
      });
      sendEvent(res, { text: chunk.text }, 'chunk', currentId);
    }

    // Mark completion
    currentId = eventStore.addEvent({
      type: 'done',
      data: { conversationId }
    });
    sendEvent(res, { conversationId }, 'done', currentId);
  } catch (error) {
    console.error('Error streaming AI response:', error);

    // Send error event
    const errorId = eventStore.addEvent({
      type: 'error',
      data: { message: 'Failed to generate response' }
    });
    sendEvent(res, { message: 'Failed to generate response' }, 'error', errorId);

    // End the stream
    clearInterval(heartbeat);
    res.end();
  }
});

function sendEvent(res, data, event, id) {
  res.write(`id: ${id}\n`);
  res.write(`event: ${event}\n`);
  res.write(`data: ${JSON.stringify(data)}\n\n`);
}

app.listen(3000, () => {
  console.log('Server running on port 3000');
});
And the corresponding client implementation:
class AIStreamClient {
  constructor(conversationId) {
    this.conversationId = conversationId;
    this.responseText = '';
    this.eventSource = null;
    this.onChunkCallbacks = [];
    this.onCompleteCallbacks = [];
    this.onErrorCallbacks = [];
  }

  startStream() {
    this.responseText = '';
    this.eventSource = new EventSource(`/ai-stream/${this.conversationId}`);

    this.eventSource.addEventListener('chunk', (event) => {
      const { text } = JSON.parse(event.data);
      this.responseText += text;
      this.onChunkCallbacks.forEach(callback =>
        callback(text, this.responseText)
      );
    });

    this.eventSource.addEventListener('done', () => {
      this.eventSource.close();
      this.onCompleteCallbacks.forEach(callback =>
        callback(this.responseText)
      );
    });

    this.eventSource.addEventListener('error', (event) => {
      const data = event.data ? JSON.parse(event.data) : { message: 'Unknown error' };
      this.onErrorCallbacks.forEach(callback =>
        callback(data.message)
      );
      this.eventSource.close();
    });

    // Handle connection errors
    this.eventSource.onerror = () => {
      // If we're not already closed, try to reconnect
      // (browser will handle this automatically with Last-Event-ID)
      if (this.eventSource.readyState === EventSource.CLOSED) {
        console.log('Connection closed, browser will attempt to reconnect...');
      }
    };
  }

  onChunk(callback) {
    this.onChunkCallbacks.push(callback);
    return this;
  }

  onComplete(callback) {
    this.onCompleteCallbacks.push(callback);
    return this;
  }

  onError(callback) {
    this.onErrorCallbacks.push(callback);
    return this;
  }

  stop() {
    if (this.eventSource) {
      this.eventSource.close();
      this.eventSource = null;
    }
  }
}
// Usage
const streamClient = new AIStreamClient('conversation-123')
.onChunk((chunk, fullText) => {
document.getElementById('response').textContent = fullText;
})
.onComplete((fullResponse) => {
console.log('AI response complete:', fullResponse);
})
.onError((message) => {
console.error('Stream error:', message);
});
streamClient.startStream();
This implementation handles all the patterns we’ve discussed: proper headers, event IDs for resuming interrupted streams, heartbeats to keep connections alive, and comprehensive error handling. It’s particularly well-suited for streaming AI-generated content, which has become one of the most popular use cases for SSE in recent months.
Final Thoughts
Server-Sent Events remain one of the most underappreciated tools for real-time web functionality. Their simplicity compared to WebSockets is a genuine strength, but that simplicity must be paired with careful implementation to create truly robust systems.
The patterns covered here, from proper headers and event IDs to heartbeats, error handling, and scaling considerations, represent hard-earned lessons from real-world implementations. Apply them consistently, and you'll build SSE-based systems that can withstand the unpredictable nature of production environments.
SSE isn’t always the right choice. If you need bidirectional communication, WebSockets may still be your best bet. But for server-to-client streaming, a well-implemented SSE solution offers an elegant, standards-based approach with excellent browser support and minimal overhead.