
Server-Sent Events (SSE) Are Underrated
December 25, 2024
Most developers know about WebSocket, but Server-Sent Events (SSE) offer a simpler, often overlooked alternative that deserves more attention. Let’s explore why this technology is underrated and how it can benefit your applications.
What are server-sent events?
SSE establishes a one-way communication channel from the server to the client through HTTP. Unlike WebSocket’s two-way connection, SSE maintains an open HTTP connection for server-to-client updates. Think of it like a radio broadcast: the server (station) transmits, and the client (receiver) listens.
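On the wire, an SSE response is simply a long-lived HTTP response whose body is a stream of plain-text events: one or more data: lines followed by a blank line. For example, a stream of two events looks like this (consecutive data: lines belong to the same event):
data: first update

data: second update
data: a continuation of the second update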
Why are they underestimated?
Two main factors lead to SSE being undervalued:
- Popularity of WebSocket: WebSocket’s full-duplex communication often overshadows SSE’s simpler approach
- Perceived limitations: the one-way nature may seem restrictive, even though it is sufficient for many use cases
Main advantages of SSE
1. Simple to implement
SSE utilizes the standard HTTP protocol, eliminating the complexity of WebSocket connection management.
2. Infrastructure compatibility
SSE works seamlessly with existing HTTP infrastructure:
- Load balancers
- Proxies
- Firewalls
- Standard HTTP servers
3. Resource efficiency
Resource consumption is lower than with WebSocket because of:
- Unidirectional communication
- Use of standard HTTP connections
- No dedicated socket protocol to maintain
4. Automatic reconnection
Browsers provide built-in support for:
- Handling connection interruptions
- Automatic reconnection attempts
- A resilient, real-time experience with no extra client code (the server can tune this too, as shown below)
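A retry: field in the stream sets the browser’s reconnection delay in milliseconds, and an id: field gives each event an ID that the browser sends back in a Last-Event-ID request header when it reconnects, so the server can resume where the client left off. For example:
retry: 5000
id: 42
data: latest update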
5. Clear semantics
The enforced one-way communication model brings:
- A clear separation of concerns
- A simple data flow
- Simplified application logic
Practical application
SSE excels in the following scenarios:
- Live news feeds and social updates
- Stock quotes and financial data
- Progress bars and task monitoring
- Server log streaming
- Collaborative editing (update broadcasts)
- Game leaderboards
- Location tracking systems
Implementation example
Server side (Flask)
from flask import Flask, Response, stream_with_context
import time
import random

app = Flask(__name__)

def generate_random_data():
    while True:
        data = f"data: Random value: {random.randint(1, 100)}\n\n"
        yield data
        time.sleep(1)

@app.route('/stream')
def stream():
    return Response(
        stream_with_context(generate_random_data()),
        mimetype='text/event-stream'
    )

if __name__ == '__main__':
    app.run(debug=True)
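With the development server running (Flask’s default port is 5000), you can watch the stream from a terminal; curl’s -N flag disables output buffering so events appear as they arrive:
curl -N http://localhost:5000/stream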
Client (JavaScript)
const eventSource = new EventSource("/stream");

eventSource.onmessage = function(event) {
  const dataDiv = document.getElementById("data");
  dataDiv.innerHTML += `${event.data}`;
};

eventSource.onerror = function(error) {
  console.error("SSE error:", error);
};
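Note that this snippet assumes the page contains an element with id="data" into which each incoming message is appended.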
Code description
Server-side components:
- The /stream route handles SSE connections
- generate_random_data() continuously yields formatted events
- The text/event-stream mimetype signals the SSE protocol
- stream_with_context maintains the Flask application context
Client components:
- The EventSource object manages the SSE connection
- The onmessage handler processes incoming events
- The onerror handler deals with connection issues
- Automatic reconnection is handled by the browser
Limitations and Notes
When implementing SSE, be aware of the following limitations:
1. One-way communication
- Server to client only
- Requires a separate HTTP request for client-to-server communication
2. Browser support
- Good support in modern browsers
- Older browsers may require a polyfill
3. Data format
- Mainly supports text-based data
- Binary data needs to be encoded (e.g., Base64)
Best practices
- Error handling
eventSource.onerror = function(error) {
  if (eventSource.readyState === EventSource.CLOSED) {
    console.log("Connection was closed");
  }
};
- Connection management
// Clean up when done
function closeConnection() {
  eventSource.close();
}
- Reconnection strategy
let retryAttempts = 0;
const maxRetries = 5;

// EventSource has no "close" event; a dead connection surfaces in onerror
eventSource.onerror = function() {
  if (eventSource.readyState === EventSource.CLOSED && retryAttempts < maxRetries) {
    setTimeout(() => {
      // Reconnect logic, e.g. create a new EventSource
      retryAttempts++;
    }, 1000 * retryAttempts);
  }
};
Real-world example: ChatGPT’s implementation
Modern Large Language Model (LLM) APIs use Server-Sent Events (SSE) to stream responses. Let’s explore how these implementations work and what makes them unique.
Common pattern
All major LLM providers implement streaming in the same way:
- Return a content-type: text/event-stream header
- Separate chunks of the stream with \r\n\r\n
- Put each chunk’s payload on a data: line containing JSON
Important caveat
Although SSE is typically consumed through the browser’s EventSource API, LLM clients cannot use it directly because:
- EventSource only supports GET requests
- LLM APIs require POST requests (see the fetch-based sketch below)
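Instead, these clients read the stream by hand. Here is a minimal sketch using fetch and a ReadableStream reader; the endpoint URL and request body are placeholders, not a specific provider’s API:
async function streamChatCompletion() {
  // Illustrative endpoint and payload -- adjust for the API you are calling.
  const response = await fetch("https://api.example.com/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: "Hello, world?" }],
      stream: true,
    }),
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });

    // Events are separated by a blank line; keep any incomplete event in the buffer.
    const events = buffer.split(/\r?\n\r?\n/);
    buffer = events.pop();

    for (const event of events) {
      for (const line of event.split(/\r?\n/)) {
        if (!line.startsWith("data:")) continue;
        const payload = line.slice(5).trim();
        if (payload === "[DONE]") return;   // OpenAI-style end-of-stream marker
        console.log(JSON.parse(payload));   // one streamed chunk
      }
    }
  }
}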
OpenAI implementation
Basic request structure
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello, world?"}],
    "stream": true,
    "stream_options": {
      "include_usage": true
    }
  }'
Response format
Each chunk follows this structure (shown pretty-printed; on the wire the JSON sits on a single data: line):
data: {
  "id":"chatcmpl-AiT7GQk8zzYSC0Q8UT1pzyRzwxBCN",
  "object":"chat.completion.chunk",
  "created":1735161718,
  "model":"gpt-4o-mini-2024-07-18",
  "system_fingerprint":"fp_0aa8d3e20b",
  "choices":[
    {
      "index":0,
      "delta":{
        "content":"!"
      },
      "logprobs":null,
      "finish_reason":null
    }
  ],
  "usage":null
}

data: {
  "id":"chatcmpl-AiT7GQk8zzYSC0Q8UT1pzyRzwxBCN",
  "object":"chat.completion.chunk",
  "created":1735161718,
  "model":"gpt-4o-mini-2024-07-18",
  "system_fingerprint":"fp_0aa8d3e20b",
  "choices":[
    {
      "index":0,
      "delta":{},
      "logprobs":null,
      "finish_reason":"stop"
    }
  ],
  "usage":null
}
Key headers returned by OpenAI:
HTTP/2 200
date: Wed, 25 Dec 2024 21:21:59 GMT
content-type: text/event-stream; charset=utf-8
access-control-expose-headers: X-Request-ID
openai-organization: user-esvzealexvl5nbzmxrismbwf
openai-processing-ms: 100
openai-version: 2020-10-01
x-ratelimit-limit-requests: 10000
x-ratelimit-limit-tokens: 200000
x-ratelimit-remaining-requests: 9999
x-ratelimit-remaining-tokens: 199978
x-ratelimit-reset-requests: 8.64s
x-ratelimit-reset-tokens: 6ms
Implementation details
Stream termination
The stream ends with a final data: [DONE] message.
Usage information
The final message includes token usage:
"data":{
"id":"chatcmpl-AiT7GQk8zzYSC0Q8UT1pzyRzwxBCN",
"object":"chat.completion.chunk",
"created":1735161718,
"model":"gpt-4o-mini-2024-07-18",
"system_fingerprint":"fp_0aa8d3e20b",
"choices":[
],
"usage":{
"prompt_tokens":11,
"completion_tokens":18,
"total_tokens":29,
"prompt_tokens_details":{
"cached_tokens":0,
"audio_tokens":0
},
"completion_tokens_details":{
"reasoning_tokens":0,
"audio_tokens":0,
"accepted_prediction_tokens":0,
"rejected_prediction_tokens":0
}
}
}
Conclusion
SSE provides an elegant solution for real-time server-to-client communication. Its simplicity, efficiency, and integration with existing infrastructure make it an excellent choice for many applications. While WebSocket is still valuable for two-way communication, SSE provides a more targeted and often more appropriate solution for one-way data flow scenarios.
Want more content like this? Subscribe to my blog!