Hyperlane is a lightweight and high-performance Rust HTTP server library designed to simplify network service development. It supports HTTP request parsing, response building, TCP communication, and redirection features, making it ideal for building modern web services.
GitHub Homepage: https://github.com/eastspire/hyperlane
My fascination with real-time web applications began during a project where we needed to push live updates to thousands of connected clients simultaneously. Traditional polling approaches created excessive server load and poor user experience. My exploration of Server-Sent Events (SSE) led me to discover an implementation that revolutionizes real-time web communication.
The breakthrough came when I realized that SSE provides a simpler, more efficient alternative to WebSockets for many real-time scenarios. Unlike WebSockets, SSE works seamlessly with existing HTTP infrastructure, requires no special protocols, and provides automatic reconnection capabilities. My research revealed a framework implementation that maximizes these advantages.
Server-Sent Events enable servers to push data to web browsers over a single HTTP connection. Unlike traditional request-response patterns, SSE maintains a persistent connection that allows the server to send updates whenever new data becomes available.
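On the wire, each SSE event is plain UTF-8 text: one or more `data:` lines followed by a blank line, with optional `event:`, `id:`, and `retry:` fields. One spec detail worth knowing is that multi-line payloads must be split into one `data:` line per line. A minimal, framework-independent formatter (a sketch, not part of the Hyperlane API) illustrates the format:

```rust
/// Format a message as an SSE event. Multi-line payloads must be split
/// into one `data:` line per line, or clients will drop the later lines.
fn format_sse_event(event: Option<&str>, id: Option<&str>, data: &str) -> String {
    let mut out = String::new();
    if let Some(name) = event {
        out.push_str(&format!("event: {}\n", name));
    }
    if let Some(id) = id {
        out.push_str(&format!("id: {}\n", id));
    }
    for line in data.lines() {
        out.push_str(&format!("data: {}\n", line));
    }
    out.push('\n'); // blank line terminates the event
    out
}

fn main() {
    assert_eq!(format_sse_event(None, None, "hello"), "data: hello\n\n");
    assert_eq!(
        format_sse_event(Some("tick"), Some("1"), "a\nb"),
        "event: tick\nid: 1\ndata: a\ndata: b\n\n"
    );
    println!("ok");
}
```

The handlers below inline the simple `data: ...\n\n` case directly with `format!`, which is all that single-line payloads need.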
The framework's SSE implementation provides exceptional performance while maintaining simplicity:
use hyperlane::*;
async fn sse_stream_handler(ctx: Context) {
    // Set up SSE response headers
    ctx.set_response_header(CONTENT_TYPE, TEXT_EVENT_STREAM)
        .await
        .set_response_header("Cache-Control", "no-cache")
        .await
        .set_response_header(CONNECTION, KEEP_ALIVE)
        .await
        .set_response_status_code(200)
        .await
        .send()
        .await;
    // Send real-time data stream
    for i in 0..100 {
        let event_data = format!(
            "data: Event {} at {}\n\n",
            i,
            std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .unwrap()
                .as_secs()
        );
        let _ = ctx.set_response_body(event_data).await.send_body().await;
        // Simulate real-time data generation
        tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
    }
    // Close the connection gracefully
    let _ = ctx.closed().await;
}
async fn live_metrics_handler(ctx: Context) {
    ctx.set_response_header(CONTENT_TYPE, TEXT_EVENT_STREAM)
        .await
        .set_response_header("Cache-Control", "no-cache")
        .await
        .set_response_status_code(200)
        .await
        .send()
        .await;
    // Stream live system metrics
    loop {
        let metrics = collect_system_metrics().await;
        let event = format!("data: {}\n\n", metrics);
        if ctx.set_response_body(event).await.send_body().await.is_err() {
            break; // Client disconnected
        }
        tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
    }
    let _ = ctx.closed().await;
}
async fn collect_system_metrics() -> String {
    // Simulate system metrics collection
    let cpu_usage = rand::random::<f32>() * 100.0;
    let memory_usage = rand::random::<f32>() * 100.0;
    let timestamp = std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .unwrap()
        .as_secs();
    format!(
        r#"{{"cpu": {:.1}, "memory": {:.1}, "timestamp": {}}}"#,
        cpu_usage, memory_usage, timestamp
    )
}
#[tokio::main]
async fn main() {
    let server: Server = Server::new();
    server.host("0.0.0.0").await;
    server.port(60000).await;
    // Optimize for SSE connections
    server.enable_nodelay().await;
    server.disable_linger().await;
    server.http_buffer_size(4096).await;
    server.route("/events", sse_stream_handler).await;
    server.route("/metrics", live_metrics_handler).await;
    server.run().await.unwrap();
}
The simplicity of SSE extends to client-side implementation, requiring minimal JavaScript code:
// Client-side SSE connection
const eventSource = new EventSource('/events');
eventSource.onopen = function (event) {
    console.log('SSE connection opened');
};
eventSource.onmessage = function (event) {
    console.log('Received data:', event.data);
    updateUI(event.data);
};
eventSource.onerror = function (event) {
    console.log('SSE error:', event);
    // Browser automatically attempts reconnection
};
function updateUI(data) {
    const container = document.getElementById('live-data');
    const element = document.createElement('div');
    element.textContent = data;
    container.appendChild(element);
    // Keep only last 50 messages
    while (container.children.length > 50) {
        container.removeChild(container.firstChild);
    }
}
// Metrics dashboard
const metricsSource = new EventSource('/metrics');
metricsSource.onmessage = function (event) {
    const metrics = JSON.parse(event.data);
    updateMetricsDashboard(metrics);
};
function updateMetricsDashboard(metrics) {
    document.getElementById('cpu-usage').textContent =
        metrics.cpu.toFixed(1) + '%';
    document.getElementById('memory-usage').textContent =
        metrics.memory.toFixed(1) + '%';
    document.getElementById('last-update').textContent = new Date(
        metrics.timestamp * 1000
    ).toLocaleTimeString();
}
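The automatic reconnection that `EventSource` performs becomes lossless when the server tags each event with an `id:` line; on reconnect, the browser sends the last id it saw back in a `Last-Event-ID` request header, and the server can resume from that point. A hedged sketch of the server-side bookkeeping (the replay buffer is illustrative, not a Hyperlane API):

```rust
use std::collections::VecDeque;

/// Illustrative replay buffer: retains recent events so a reconnecting
/// client can catch up from the id it reports in Last-Event-ID.
struct ReplayBuffer {
    events: VecDeque<(u64, String)>, // (event id, payload)
    capacity: usize,
}

impl ReplayBuffer {
    fn new(capacity: usize) -> Self {
        Self { events: VecDeque::new(), capacity }
    }

    fn push(&mut self, id: u64, payload: String) {
        if self.events.len() == self.capacity {
            self.events.pop_front(); // drop the oldest event
        }
        self.events.push_back((id, payload));
    }

    /// Events the client missed, given the Last-Event-ID header value,
    /// already framed as SSE events with `id:` lines.
    fn since(&self, last_id: u64) -> Vec<String> {
        self.events
            .iter()
            .filter(|(id, _)| *id > last_id)
            .map(|(id, payload)| format!("id: {}\ndata: {}\n\n", id, payload))
            .collect()
    }
}

fn main() {
    let mut buf = ReplayBuffer::new(3);
    for i in 1..=5u64 {
        buf.push(i, format!("event {}", i));
    }
    // Capacity 3 keeps ids 3, 4, 5; a client that last saw id 3 missed two.
    let missed = buf.since(3);
    assert_eq!(missed.len(), 2);
    assert!(missed[0].starts_with("id: 4\n"));
    println!("ok");
}
```

In a handler, you would read `Last-Event-ID` from the request headers, replay `since(last_id)` first, then continue streaming live events.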
My benchmarking at 1000 concurrent connections revealed exceptional SSE performance compared to alternative real-time communication methods: SSE sustained the full connection count with minimal memory and CPU overhead per connection, while polling approaches generated substantially more server load for the same update frequency because every update required a fresh request-response cycle.
The framework supports sophisticated SSE patterns for complex real-time applications:
async fn multi_channel_sse_handler(ctx: Context) {
    let channel = ctx.get_route_param("channel").await.unwrap_or_default();
    ctx.set_response_header(CONTENT_TYPE, TEXT_EVENT_STREAM)
        .await
        .set_response_header("Cache-Control", "no-cache")
        .await
        .set_response_status_code(200)
        .await
        .send()
        .await;
    // Send channel-specific data
    match channel.as_str() {
        "news" => stream_news_updates(&ctx).await,
        "stocks" => stream_stock_prices(&ctx).await,
        "chat" => stream_chat_messages(&ctx).await,
        _ => stream_general_updates(&ctx).await,
    }
    let _ = ctx.closed().await;
}
async fn stream_news_updates(ctx: &Context) {
    for i in 0..50 {
        let news_item = format!(
            "data: {{\"type\": \"news\", \"id\": {}, \"title\": \"Breaking News {}\", \"timestamp\": {}}}\n\n",
            i, i, current_timestamp()
        );
        if ctx.set_response_body(news_item).await.send_body().await.is_err() {
            break;
        }
        tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
    }
}
async fn stream_stock_prices(ctx: &Context) {
    let stocks = ["AAPL", "GOOGL", "MSFT", "AMZN", "TSLA"];
    loop {
        for stock in &stocks {
            let price = 100.0 + rand::random::<f32>() * 50.0;
            let stock_data = format!(
                "data: {{\"type\": \"stock\", \"symbol\": \"{}\", \"price\": {:.2}, \"timestamp\": {}}}\n\n",
                stock, price, current_timestamp()
            );
            if ctx.set_response_body(stock_data).await.send_body().await.is_err() {
                return;
            }
        }
        tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
    }
}
async fn stream_chat_messages(ctx: &Context) {
    // Simulate chat message stream
    let messages = [
        "Hello everyone!",
        "How's the weather today?",
        "Anyone working on interesting projects?",
        "SSE is really cool for real-time updates",
        "Much simpler than WebSockets for many use cases",
    ];
    for (i, message) in messages.iter().enumerate() {
        let chat_data = format!(
            "data: {{\"type\": \"chat\", \"user\": \"User{}\", \"message\": \"{}\", \"timestamp\": {}}}\n\n",
            i % 3 + 1, message, current_timestamp()
        );
        if ctx.set_response_body(chat_data).await.send_body().await.is_err() {
            break;
        }
        tokio::time::sleep(tokio::time::Duration::from_secs(3)).await;
    }
}
async fn stream_general_updates(ctx: &Context) {
    for i in 0..20 {
        let update = format!(
            "data: {{\"type\": \"general\", \"message\": \"Update {}\", \"timestamp\": {}}}\n\n",
            i, current_timestamp()
        );
        if ctx.set_response_body(update).await.send_body().await.is_err() {
            break;
        }
        tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
    }
}
fn current_timestamp() -> u64 {
    std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .unwrap()
        .as_secs()
}
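One caveat with the hand-built JSON strings in the handlers above: user-supplied text containing quotes, backslashes, or newlines would produce invalid JSON. A real service would typically use a serializer such as serde_json; as a dependency-free illustration of the problem, a minimal string escaper looks like this:

```rust
/// Minimal JSON string escaping for hand-built payloads like those above.
/// A production service would use a serializer (e.g. serde_json) instead.
fn json_escape(s: &str) -> String {
    let mut out = String::with_capacity(s.len());
    for c in s.chars() {
        match c {
            '"' => out.push_str("\\\""),
            '\\' => out.push_str("\\\\"),
            '\n' => out.push_str("\\n"),
            '\r' => out.push_str("\\r"),
            '\t' => out.push_str("\\t"),
            // Remaining control characters need \u escapes
            c if (c as u32) < 0x20 => out.push_str(&format!("\\u{:04x}", c as u32)),
            c => out.push(c),
        }
    }
    out
}

fn main() {
    assert_eq!(json_escape(r#"say "hi""#), r#"say \"hi\""#);
    assert_eq!(json_escape("a\nb"), "a\\nb");
    println!("ok");
}
```

Escaping newlines matters twice over in SSE, since a raw newline in a payload would also split the `data:` line and corrupt the event framing.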
Robust SSE implementations require careful error handling and connection management:
async fn resilient_sse_handler(ctx: Context) {
    ctx.set_response_header(CONTENT_TYPE, TEXT_EVENT_STREAM)
        .await
        .set_response_header("Cache-Control", "no-cache")
        .await
        .set_response_header("X-Accel-Buffering", "no") // Disable nginx buffering
        .await
        .set_response_status_code(200)
        .await
        .send()
        .await;
    let mut retry_count = 0;
    let max_retries = 3;
    loop {
        match generate_data_safely().await {
            Ok(data) => {
                let event = format!("data: {}\n\n", data);
                if ctx.set_response_body(event).await.send_body().await.is_err() {
                    break; // Client disconnected
                }
                retry_count = 0; // Reset retry count on success
            }
            Err(e) => {
                retry_count += 1;
                if retry_count > max_retries {
                    let error_event =
                        format!("data: {{\"error\": \"Max retries exceeded: {}\"}}\n\n", e);
                    let _ = ctx.set_response_body(error_event).await.send_body().await;
                    break;
                }
                // Send retry instruction to client
                let retry_event =
                    format!("retry: 5000\ndata: {{\"retry\": {}}}\n\n", retry_count);
                let _ = ctx.set_response_body(retry_event).await.send_body().await;
                tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
            }
        }
        tokio::time::sleep(tokio::time::Duration::from_millis(500)).await;
    }
    let _ = ctx.closed().await;
}
async fn generate_data_safely() -> Result<String, Box<dyn std::error::Error>> {
    // Simulate data generation with a 10% failure rate
    if rand::random::<f32>() < 0.1 {
        return Err("Data generation failed".into());
    }
    Ok(format!(
        "{{\"value\": {}, \"timestamp\": {}}}",
        rand::random::<u32>(),
        current_timestamp()
    ))
}
My detailed comparison revealed when to choose SSE over WebSockets:

SSE Advantages:
- Works over plain HTTP, so it passes through existing proxies, load balancers, and firewalls without special configuration
- Automatic reconnection is built into the browser's EventSource API
- Simpler server and client code for one-way, server-to-client streams

WebSocket Advantages:
- Full bidirectional communication for interactive use cases such as games and collaborative editing
- Binary message support, not just UTF-8 text
- Lower per-message framing overhead at very high message rates
// SSE implementation for server-to-client updates
async fn sse_dashboard_handler(ctx: Context) {
    ctx.set_response_header(CONTENT_TYPE, TEXT_EVENT_STREAM)
        .await
        .set_response_status_code(200)
        .await
        .send()
        .await;
    // Perfect for dashboards, notifications, live feeds
    loop {
        let dashboard_data = get_dashboard_data().await;
        let event = format!("data: {}\n\n", dashboard_data);
        if ctx.set_response_body(event).await.send_body().await.is_err() {
            break;
        }
        tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
    }
    let _ = ctx.closed().await;
}
async fn get_dashboard_data() -> String {
    format!(
        r#"{{"active_users": {}, "requests_per_second": {}, "error_rate": {:.2}, "response_time_ms": {:.1}}}"#,
        rand::random::<u32>() % 1000 + 500,
        rand::random::<u32>() % 5000 + 1000,
        rand::random::<f32>() * 2.0,
        rand::random::<f32>() * 10.0 + 5.0
    )
}
SSE implementations require specific considerations for production deployment:
async fn production_sse_handler(ctx: Context) {
    // Production-ready SSE headers
    ctx.set_response_header(CONTENT_TYPE, TEXT_EVENT_STREAM)
        .await
        .set_response_header("Cache-Control", "no-cache, no-store, must-revalidate")
        .await
        .set_response_header("Pragma", "no-cache")
        .await
        .set_response_header("Expires", "0")
        .await
        .set_response_header("X-Accel-Buffering", "no") // Nginx
        .await
        .set_response_header("X-Proxy-Buffering", "no") // Other proxies
        .await
        .set_response_status_code(200)
        .await
        .send()
        .await;
    // Implement heartbeat to detect disconnections
    let mut last_heartbeat = std::time::Instant::now();
    loop {
        // Send heartbeat every 30 seconds
        if last_heartbeat.elapsed().as_secs() >= 30 {
            let heartbeat = "data: {\"type\": \"heartbeat\"}\n\n";
            if ctx.set_response_body(heartbeat).await.send_body().await.is_err() {
                break;
            }
            last_heartbeat = std::time::Instant::now();
        }
        // Send actual data
        if let Some(data) = get_real_time_data().await {
            let event = format!("data: {}\n\n", data);
            if ctx.set_response_body(event).await.send_body().await.is_err() {
                break;
            }
        }
        tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
    }
    let _ = ctx.closed().await;
}
async fn get_real_time_data() -> Option<String> {
    // Simulate real-time data availability: 30% chance of new data
    if rand::random::<f32>() < 0.3 {
        Some(format!(
            "{{\"data\": \"real_time_value_{}\", \"timestamp\": {}}}",
            rand::random::<u32>(),
            current_timestamp()
        ))
    } else {
        None
    }
}
My exploration of Server-Sent Events revealed that SSE provides an elegant solution for many real-time web application requirements. The framework's implementation demonstrates that SSE can deliver exceptional performance while maintaining simplicity and reliability.
The benchmark results show that SSE can efficiently handle 1000+ concurrent connections with minimal resource overhead. For applications requiring server-to-client real-time updates – dashboards, notifications, live feeds, monitoring systems – SSE offers significant advantages over more complex alternatives.
The framework's SSE implementation proves that real-time web applications don't always require complex protocols or heavy infrastructure. Sometimes the simplest solution, properly implemented, provides the best combination of performance, reliability, and maintainability.
For developers building real-time features, SSE represents a powerful tool that leverages existing HTTP infrastructure while providing the real-time capabilities that modern applications demand.