A high-performance, production-ready WebSocket server written in Rust, designed for real-time applications across gaming, finance, chat, and other domains requiring fast, secure communication.
- High-Performance WebSocket Server - Built with Tokio and Warp for maximum throughput
- Room-Based Communication - Organize users into channels/rooms with fine-grained permissions
- JWT Authentication - Secure token-based authentication with role-based access control (RBAC)
- Rate Limiting - Prevent abuse with configurable per-user and global rate limits
- Message Validation - XSS protection, spam detection, and content filtering
- Thread-Safe Architecture - Concurrent message handling with race condition protection
- Production-Ready Security - Comprehensive vulnerability protection
- Role-Based Permissions - Owner, Admin, Moderator, Member, and Guest roles
- Ban/Kick/Mute System - Complete moderation toolkit
- Connection Limiting - Prevent DoS attacks with IP-based connection limits
- Input Validation - Protect against injection and malformed data
- Configurable Thread Pool - Optimize performance for your hardware
- Memory Protection - Built-in safeguards against memory exhaustion
- Async Broadcasting - Efficient message distribution to large user groups
- Resource Cleanup - Automatic cleanup of idle connections and expired data
The server is built on the following components:
- Authentication: JWT token management with role-based access control
- Core: Room management, session handling, connection processing, and thread pooling
- Handlers: WebSocket and HTTP request processing with authentication
- Storage: Simple in-memory message persistence with room isolation
- Configuration: Dynamic server settings through environment variables
- Rust (1.63.0 or newer)
- Cargo package manager
Clone the repository and build the project:
git clone https://github.com/egdavid/rusty-socks.git
cd rusty-socks
cargo build --release

Rusty Socks can be configured using environment variables:
| Variable | Description | Default |
|---|---|---|
| RUSTY_SOCKS_HOST | Server host address | 0.0.0.0 |
| RUSTY_SOCKS_PORT | Server port | 3030 |
| RUSTY_SOCKS_MAX_CONN | Maximum connections | 100 |
| RUSTY_SOCKS_BUFFER | Message buffer size | 1024 |
| RUSTY_SOCKS_TIMEOUT | Connection timeout in seconds | 60 |
| RUSTY_SOCKS_PING | Ping interval in seconds | 30 |
| RUSTY_SOCKS_THREAD_POOL_SIZE | Number of worker threads in the pool | 4 |
| RUSTY_SOCKS_MAX_QUEUED_TASKS | Maximum number of tasks that can be queued | 1000 |
| RUSTY_SOCKS_JWT_SECRET | Secret key for JWT token signing | "your-secret-key" |
cargo run --bin rusty_socks

Or with custom configuration:
RUSTY_SOCKS_PORT=8080 RUSTY_SOCKS_THREAD_POOL_SIZE=8 cargo run --bin rusty_socks

The WebSocket endpoint is available at:
ws://[host]:[port]/ws
Health check endpoint:
http://[host]:[port]/health
Thread pool statistics endpoint:
http://[host]:[port]/stats
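For quick monitoring, the stats endpoint can also be polled from a small script. A minimal sketch, assuming Node 18+ (for the built-in fetch) and simply printing whatever body the server returns:

```js
// Minimal sketch: poll the /stats endpoint every 10 seconds and print the raw response.
// Assumes Node 18+ (global fetch); adjust the URL to your host and port.
const STATS_URL = 'http://localhost:3030/stats';

async function pollStats() {
  try {
    const res = await fetch(STATS_URL);
    const body = await res.text();
    console.log(`[${new Date().toISOString()}] ${res.status}: ${body}`);
  } catch (err) {
    console.error('Could not reach the server:', err.message);
  }
}

pollStats();
setInterval(pollStats, 10_000);
```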
Rusty Socks uses a thread pool to efficiently manage multiple concurrent WebSocket connections. When the server is under heavy load:
- New WebSocket connections are queued if all worker threads are busy
- If the connection queue reaches its maximum capacity (RUSTY_SOCKS_MAX_QUEUED_TASKS), new connection attempts will be rejected
- Rejected clients will experience a connection failure
- Existing connections remain unaffected and continue to function normally
Important for client implementations:
- Implement connection retry logic with exponential backoff (a sketch follows below)
- Add appropriate error handling for connection failures
- Consider monitoring connection rejection rates in production environments
This connection rejection mechanism is a deliberate design choice to maintain server stability and responsiveness for existing connections during peak loads, rather than risking degraded performance for all users.
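As a concrete illustration of the retry recommendation above, here is a minimal client-side sketch using the browser WebSocket API; the URL, base delay, and cap are placeholders to tune for your application:

```js
// Minimal sketch: reconnect with exponential backoff and jitter.
// The URL and delay limits below are placeholders, not values mandated by the server.
const WS_URL = 'ws://localhost:3030/ws?token=your_jwt_token_here';
const BASE_DELAY_MS = 1000;   // first retry after ~1s
const MAX_DELAY_MS = 30000;   // never wait longer than 30s between attempts

let attempt = 0;

function connect() {
  const socket = new WebSocket(WS_URL);

  socket.onopen = () => {
    attempt = 0; // reset the backoff once a connection succeeds
    console.log('Connected');
  };

  socket.onclose = () => {
    // Exponential backoff with jitter: ~1s, 2s, 4s, ... capped at MAX_DELAY_MS
    const delay = Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
    const jitter = Math.random() * delay * 0.2;
    attempt += 1;
    console.log(`Connection closed, retrying in ~${Math.round(delay + jitter)}ms`);
    setTimeout(connect, delay + jitter);
  };

  socket.onerror = (err) => {
    console.error('WebSocket error:', err);
    // onclose fires afterwards and schedules the retry
  };
}

connect();
```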
Rusty Socks uses JWT tokens for authentication. To connect to the WebSocket server:
- Obtain a JWT token (implement your own authentication endpoint; a minimal token-minting sketch is shown below)
- Include the token in the WebSocket connection URL as a query parameter:
ws://localhost:3030/ws?token=your_jwt_token_here
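How tokens are issued is up to your own auth service. As a rough sketch only, the snippet below mints an HS256 token with the Node jsonwebtoken package, signed with the same secret the server reads from RUSTY_SOCKS_JWT_SECRET; the claim names (sub, role) are illustrative assumptions, so align them with whatever claims the server's validation actually expects:

```js
// Rough sketch: issue a test token with the Node `jsonwebtoken` package (npm install jsonwebtoken).
// The claim names below are assumptions for illustration only -- match them to the claims the
// server actually validates, and sign with the same RUSTY_SOCKS_JWT_SECRET the server uses.
const jwt = require('jsonwebtoken');

const secret = process.env.RUSTY_SOCKS_JWT_SECRET || 'your-secret-key';

const token = jwt.sign(
  {
    sub: 'user-123',   // hypothetical user identifier
    role: 'Member',    // hypothetical role claim
  },
  secret,
  { algorithm: 'HS256', expiresIn: '1h' }
);

console.log(`ws://localhost:3030/ws?token=${token}`);
```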
// Assuming you have a JWT token from your auth system
const token = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...';
const socket = new WebSocket(`ws://localhost:3030/ws?token=${token}`);
socket.onopen = function() {
console.log('Connected to Rusty Socks server');
// Join a room
const joinMessage = {
type: 'join_room',
room_id: 'general',
password: null // optional for password-protected rooms
};
socket.send(JSON.stringify(joinMessage));
// Send a message to the room
const message = {
type: 'room_message',
room_id: 'general',
content: 'Hello from JS client',
timestamp: new Date().toISOString()
};
socket.send(JSON.stringify(message));
};
socket.onmessage = function(event) {
const message = JSON.parse(event.data);
console.log('Received:', message);
// Handle different message types
switch(message.type) {
case 'room_message':
console.log(`[${message.room_id}] ${message.sender}: ${message.content}`);
break;
case 'user_joined':
console.log(`${message.username} joined ${message.room_id}`);
break;
case 'error':
console.error('Server error:', message.message);
break;
}
};
socket.onclose = function() {
console.log('Connection closed');
};
socket.onerror = function(error) {
console.error('WebSocket error:', error);
// Implement exponential backoff retry here
};

Users with appropriate permissions can manage rooms:
// Create a new room (requires ManageRoom permission)
const createRoom = {
type: 'create_room',
name: 'My Private Room',
is_private: true,
max_members: 50
};
socket.send(JSON.stringify(createRoom));
// Set user role (requires ManageRoles permission)
const setRole = {
type: 'set_user_role',
room_id: 'general',
user_id: 'target_user_id',
role: 'Moderator'
};
socket.send(JSON.stringify(setRole));
// Ban user (requires BanUsers permission)
const banUser = {
type: 'ban_user',
room_id: 'general',
user_id: 'target_user_id',
duration_hours: 24 // optional, null for permanent
};
socket.send(JSON.stringify(banUser));

Rusty Socks is designed for high performance with a configurable thread pool that:
- Distributes connection handling across multiple worker threads
- Controls maximum task queue size to prevent server overload
- Provides monitoring through the /stats endpoint
- Efficiently utilizes multi-core processors
To optimize performance, adjust the thread pool settings based on your hardware:
# For a machine with 8 cores
RUSTY_SOCKS_THREAD_POOL_SIZE=8 RUSTY_SOCKS_MAX_QUEUED_TASKS=2000 cargo run --bin rusty_socks

Run the integration tests:
cargo test

Run specific tests:
cargo test --test websocket_test

You can use wscat to manually test the WebSocket server functionality. This is particularly useful for debugging and verifying real-time message exchange.
Install wscat using npm:
npm install -g wscat

Connect to the WebSocket server:
wscat -c ws://localhost:3030/ws

- Start the server:
  cargo run --bin rusty_socks
- Connect with a client:
  wscat -c ws://localhost:3030/ws
- After connecting, you should receive a welcome message with your client ID.
- Send a test message (should be properly formatted JSON):
  {"id":"00000000-0000-0000-0000-000000000000","sender":"test_user","content":"Hello from wscat!","timestamp":"2025-03-15T12:00:00Z"}
- Any response from the server will be displayed in the terminal.
For testing broadcast functionality, open multiple terminal sessions with wscat connections and observe how messages are distributed among clients.
Connect with verbose output for debugging:
wscat -c ws://localhost:3030/ws --verbose

Connect to a custom port:
wscat -c ws://localhost:8080/ws

The server's thread pool allows it to handle multiple concurrent connections efficiently. To test the server under load:
- Install a load testing tool like artillery or vegeta
- Run the load test against the WebSocket endpoint
- Monitor the server's thread pool stats during the test:
  curl http://localhost:3030/stats
- To simulate connection rejection scenarios:
  # Run with a small thread pool and queue size
  RUSTY_SOCKS_THREAD_POOL_SIZE=2 RUSTY_SOCKS_MAX_QUEUED_TASKS=10 cargo run --bin rusty_socks
  # Then send many simultaneous connection requests
  # Observe which ones are accepted and which are rejected
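To drive that rejection scenario from a script, a rough sketch using the Node ws package is shown below. The connection count is arbitrary, and how a rejected attempt surfaces (an error event versus an immediate close) depends on the server, so this simply counts sockets that never reach the open state:

```js
// Rough sketch: open many WebSocket connections at once and count how many reach the
// open state vs. how many are dropped before opening. Assumes the Node `ws` package
// (npm install ws); append ?token=... to the URL if authentication is enabled.
const WebSocket = require('ws');

const URL = 'ws://localhost:3030/ws';
const TOTAL = 50; // pick something larger than THREAD_POOL_SIZE + MAX_QUEUED_TASKS

let accepted = 0;
let rejected = 0;

function report() {
  if (accepted + rejected === TOTAL) {
    console.log(`accepted: ${accepted}, rejected: ${rejected}`);
    process.exit(0);
  }
}

for (let i = 0; i < TOTAL; i++) {
  const socket = new WebSocket(URL);
  let opened = false;

  socket.on('open', () => {
    opened = true;
    accepted += 1;
    report();
  });

  // 'error' is always followed by 'close', so rejections are counted there
  socket.on('error', () => {});

  socket.on('close', () => {
    if (!opened) {
      rejected += 1;
      report();
    }
  });
}
```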
Contributions are welcome! Please feel free to submit a Pull Request 🙃
This project is licensed under the MIT License - see the LICENSE file for details.