| title | emoji | colorFrom | colorTo | sdk | pinned | app_port | base_path | tags |
|---|---|---|---|---|---|---|---|---|
| Martens Environment Server | 🔥 | blue | purple | docker | false | 8000 | /web | |
A simple test environment that echoes back messages. Perfect for testing the env APIs as well as demonstrating environment usage patterns.
The simplest way to use the Martens environment is through the MartensEnv class:
```python
from martens import MartensAction, MartensEnv

# Create environment from Docker image
martensenv = MartensEnv.from_docker_image("martens-env:latest")
try:
    # Reset
    result = martensenv.reset()
    print(f"Reset: {result.observation.echoed_message}")

    # Send multiple messages
    messages = ["Hello, World!", "Testing echo", "Final message"]
    for msg in messages:
        result = martensenv.step(MartensAction(message=msg))
        print(f"Sent: '{msg}'")
        print(f"  → Echoed: '{result.observation.echoed_message}'")
        print(f"  → Length: {result.observation.message_length}")
        print(f"  → Reward: {result.reward}")
finally:
    # Always clean up
    martensenv.close()
```

That's it! The MartensEnv.from_docker_image() method handles:
- Starting the Docker container
- Waiting for the server to be ready
- Connecting to the environment
- Container cleanup when you call `close()`
Before using the environment, you need to build the Docker image:
```bash
# From project root
docker build -t martens-env:latest -f server/Dockerfile .
```

You can easily deploy your OpenEnv environment to Hugging Face Spaces using the openenv push command:
```bash
# From the environment directory (where openenv.yaml is located)
openenv push

# Or specify options
openenv push --repo-id my-org/my-env --private
```

The openenv push command will:
- Validate that the directory is an OpenEnv environment (checks for `openenv.yaml`)
- Prepare a custom build for a Hugging Face Docker space (enables the web interface)
- Upload to Hugging Face (ensuring you're logged in)
- Authenticate with Hugging Face: the command will prompt for login if you're not already authenticated
Available options:

- `--directory`, `-d`: Directory containing the OpenEnv environment (defaults to the current directory)
- `--repo-id`, `-r`: Repository ID in the format `username/repo-name` (defaults to `username/env-name` from `openenv.yaml`)
- `--base-image`, `-b`: Base Docker image to use (overrides the Dockerfile `FROM`)
- `--private`: Deploy the space as private (default: public)
```bash
# Push to your personal namespace (defaults to username/env-name from openenv.yaml)
openenv push

# Push to a specific repository
openenv push --repo-id my-org/my-env

# Push with a custom base image
openenv push --base-image ghcr.io/meta-pytorch/openenv-base:latest

# Push as a private space
openenv push --private

# Combine options
openenv push --repo-id my-org/my-env --base-image custom-base:latest --private
```

After deployment, your space will be available at:
https://huggingface.co/spaces/<repo-id>
The deployed space includes:
- Web interface at `/web`: interactive UI for exploring the environment
- API documentation at `/docs`: full OpenAPI/Swagger interface
- Health check at `/health`: container health monitoring
- WebSocket at `/ws`: persistent session endpoint for low-latency interactions
`MartensAction` contains a single field:

- `message` (str): the message to echo back

`MartensObservation` contains the echo response and metadata:

- `echoed_message` (str): the message echoed back
- `message_length` (int): the length of the message
- `reward` (float): reward based on message length (length × 0.1)
- `done` (bool): always False for the echo environment
- `metadata` (dict): additional info, such as the step count
The reward is calculated as `message_length × 0.1`:

- "Hi" → reward: 0.2
- "Hello, World!" → reward: 1.3
- Empty message → reward: 0.0
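The reward rule can be sketched in plain Python. This is illustrative only, not the server's actual code (which lives in server/martens_environment.py); the rounding here just tames floating-point noise:

```python
# Sketch of the documented reward rule: reward = message_length * 0.1.
# Rounding guards against floating-point artifacts (13 * 0.1 != 1.3 exactly).
def echo_reward(message: str) -> float:
    return round(len(message) * 0.1, 10)

print(echo_reward("Hi"))             # 0.2
print(echo_reward("Hello, World!"))  # 1.3
print(echo_reward(""))               # 0.0
```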
If you already have a Martens environment server running, you can connect directly:
```python
from martens import MartensAction, MartensEnv

# Connect to existing server
martensenv = MartensEnv(base_url="<ENV_HTTP_URL_HERE>")

# Use as normal
result = martensenv.reset()
result = martensenv.step(MartensAction(message="Hello!"))
```

Note: When connecting to an existing server, martensenv.close() will NOT stop the server.
The client supports context manager usage for automatic connection management:
```python
from martens import MartensAction, MartensEnv

# Connect with context manager (auto-connects and closes)
with MartensEnv(base_url="http://localhost:8000") as env:
    result = env.reset()
    print(f"Reset: {result.observation.echoed_message}")

    # Multiple steps with low latency
    for msg in ["Hello", "World", "!"]:
        result = env.step(MartensAction(message=msg))
        print(f"Echoed: {result.observation.echoed_message}")
```

The client uses WebSocket connections for:
- Lower latency: No HTTP connection overhead per request
- Persistent session: Server maintains your environment state
- Efficient for episodes: Better for many sequential steps
The server supports multiple concurrent WebSocket connections. To enable this,
modify server/app.py to use factory mode:
```python
# In server/app.py - use factory mode for concurrent sessions
app = create_app(
    MartensEnvironment,  # Pass class, not instance
    MartensAction,
    MartensObservation,
    max_concurrent_envs=4,  # Allow 4 concurrent sessions
)
```

Then multiple clients can connect simultaneously:
```python
from concurrent.futures import ThreadPoolExecutor

from martens import MartensAction, MartensEnv

def run_episode(client_id: int):
    with MartensEnv(base_url="http://localhost:8000") as env:
        result = env.reset()
        for i in range(10):
            result = env.step(MartensAction(message=f"Client {client_id}, step {i}"))
        return client_id, result.observation.message_length

# Run 4 episodes concurrently
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(run_episode, range(4)))
```

Test the environment logic directly without starting the HTTP server:
```bash
# From the server directory
python3 server/martens_environment.py
```

This verifies that:
- Environment resets correctly
- Step executes actions properly
- State tracking works
- Rewards are calculated correctly
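As a rough picture of what that direct test exercises, here is a self-contained toy version of the echo logic. The names and return shape are illustrative assumptions; the real implementation is server/martens_environment.py:

```python
# Toy stand-in for the echo environment: reset, step, state tracking,
# and the reward rule (length * 0.1). Illustrative only.
from dataclasses import dataclass


@dataclass
class ToyEchoEnv:
    step_count: int = 0

    def reset(self) -> dict:
        self.step_count = 0
        return {"echoed_message": "", "message_length": 0, "reward": 0.0, "done": False}

    def step(self, message: str) -> dict:
        self.step_count += 1
        return {
            "echoed_message": message,
            "message_length": len(message),
            "reward": round(len(message) * 0.1, 10),  # reward rule from above
            "done": False,  # the echo environment never terminates
            "metadata": {"step_count": self.step_count},
        }


env = ToyEchoEnv()
env.reset()
print(env.step("Hello, World!")["reward"])  # 1.3
```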
Run the server locally for development:
```bash
uvicorn server.app:app --reload
```

```
martens/
├── .dockerignore             # Docker build exclusions
├── __init__.py               # Module exports
├── README.md                 # This file
├── openenv.yaml              # OpenEnv manifest
├── pyproject.toml            # Project metadata and dependencies
├── uv.lock                   # Locked dependencies (generated)
├── client.py                 # MartensEnv client
├── models.py                 # Action and Observation models
└── server/
    ├── __init__.py               # Server module exports
    ├── martens_environment.py    # Core environment logic
    ├── app.py                    # FastAPI application (HTTP + WebSocket endpoints)
    └── Dockerfile                # Container image definition
```