Lydlr is an AI-powered compression system designed to optimize storage and transmission of multimodal sensor data in real time. It processes data streams from cameras, LiDAR, IMU, and audio sensors, encoding and fusing them into a compact latent representation using convolutional and recurrent neural networks. The system leverages temporal context through LSTM layers, learning patterns over time to improve compression efficiency. A reinforcement learning-based controller dynamically adjusts compression levels based on system conditions such as CPU load, battery status, and network bandwidth, balancing data quality against resource usage.

A real-time quality assessment module uses perceptual metrics (LPIPS) to monitor reconstruction fidelity, enabling adaptive tuning on the fly. Synthetic sensor data streams simulate diverse environments for thorough testing and development. The entire pipeline is designed for deployment on edge devices such as the Raspberry Pi or NVIDIA Jetson, with model quantization and export capabilities for efficient execution on constrained hardware.
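The pipeline above can be sketched in miniature. The following is an illustrative stand-in in plain NumPy, not the real implementation: block-averaging stands in for the convolutional encoder, and a toy recurrent update stands in for the LSTM layers. All names, shapes, and sizes here are hypothetical.

```python
import numpy as np

def encode_frame(frame, factor=4):
    """Downsample a (H, W) frame by block-averaging -- a stand-in
    for the convolutional encoder in the real pipeline."""
    h, w = frame.shape
    h2, w2 = h // factor, w // factor
    return frame[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))

class TemporalLatent:
    """Toy recurrent state h = tanh(W x + U h): a stand-in for the
    LSTM layers that accumulate temporal context across frames."""
    def __init__(self, in_dim, latent_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 0.1, (latent_dim, in_dim))
        self.U = rng.normal(0, 0.1, (latent_dim, latent_dim))
        self.h = np.zeros(latent_dim)

    def step(self, x):
        self.h = np.tanh(self.W @ x + self.U @ self.h)
        return self.h

# Fuse camera + IMU into one latent per time step (shapes are illustrative).
cam = np.random.rand(16, 16)       # synthetic camera frame
imu = np.random.rand(6)            # synthetic IMU sample (accel + gyro)
z_cam = encode_frame(cam).ravel()  # 4x4 = 16 values after downsampling
fused = np.concatenate([z_cam, imu])
model = TemporalLatent(in_dim=fused.size, latent_dim=8)
latent = model.step(fused)         # compact latent for this time step
print(latent.shape)                # (8,)
```

Each call to `step` folds the previous latent state into the new one, which is the mechanism that lets temporal patterns improve compression.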
Lydlr addresses critical challenges in modern sensor data processing across multiple industries:
Compress sensor data from cameras, LiDAR, and IMU sensors before transmission to cloud infrastructure. This reduces bandwidth requirements by up to 90% while maintaining critical information for real-time decision-making and post-processing analysis. Enables efficient data offloading from vehicles to central processing systems without overwhelming network infrastructure.
Reduce bandwidth consumption for real-time video and LiDAR streaming during flight operations. Critical for long-range missions where maintaining communication links is essential. Allows operators to receive high-quality sensor feeds even over limited bandwidth connections, enabling extended operational range and improved mission success rates.
Optimize storage for long-duration data collection in research and industrial applications. Robots can operate for extended periods without storage limitations, capturing comprehensive sensor data for analysis, training, and system improvement. Essential for autonomous systems that need to learn from extended operational periods.
Enable AI processing on devices with limited bandwidth and computational resources. By compressing multimodal sensor data at the edge, systems can reduce transmission costs, improve response times, and enable real-time decision-making without constant cloud connectivity. Critical for applications requiring low latency and privacy-preserving local processing.
Compress sensor data from distributed IoT networks for efficient transmission to central monitoring systems. Reduces network congestion, extends battery life of edge devices, and enables cost-effective scaling of sensor networks. Essential for smart cities, industrial monitoring, and environmental sensing applications where thousands of devices transmit data continuously.
Lydlr's adaptive compression technology delivers measurable improvements across key performance metrics:
- Bandwidth Reduction: Achieves 80-95% reduction in data transmission requirements while maintaining perceptual quality
- Storage Optimization: Enables 5-10x longer data collection periods with the same storage capacity
- Real-Time Processing: Processes multimodal sensor streams at 30+ FPS on edge devices with minimal latency
- Resource Efficiency: Reduces CPU and memory usage by 40-60% compared to traditional compression methods
- Quality Preservation: Maintains reconstruction fidelity for critical sensor data, with perceptual similarity above 0.85 (note that LPIPS itself measures perceptual distance, so this corresponds to low LPIPS scores)
- Adaptive Performance: Dynamically adjusts compression based on system conditions, ensuring optimal operation across varying network and computational constraints
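For intuition, the bandwidth figures above map to compression ratios as follows (simple arithmetic, not a benchmark):

```python
def compression_ratio(reduction):
    """A 90% reduction means transmitting only 10% of the original
    bytes, i.e. a 10x compression ratio."""
    return 1.0 / (1.0 - reduction)

print(round(compression_ratio(0.80)))  # 5
print(round(compression_ratio(0.90)))  # 10
print(round(compression_ratio(0.95)))  # 20
```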
The system's ability to learn temporal patterns and adapt compression levels in real-time makes it particularly effective for applications requiring both high efficiency and quality preservation.
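The adaptive behaviour can be illustrated with a small rule-based sketch. In the real system the controller is learned via reinforcement learning and the quality gate uses LPIPS; here a hand-written policy and a mean-squared-error check stand in for both, and every threshold is invented for illustration:

```python
import numpy as np

def choose_level(cpu_load, battery, bandwidth_mbps):
    """Map system conditions to a compression level in [0, 3]
    (0 = lightest compression, 3 = most aggressive).
    Thresholds are illustrative, not from the real RL policy."""
    level = 0
    if bandwidth_mbps < 5.0:
        level += 1  # constrained link: compress harder
    if cpu_load > 0.8:
        level += 1  # busy CPU: prefer a cheaper, coarser encode
    if battery < 0.2:
        level += 1  # low battery: save power
    return min(level, 3)

def quality_ok(original, reconstruction, mse_limit=0.01):
    """Stand-in for the LPIPS quality gate: accept the reconstruction
    only if mean-squared error stays under a limit."""
    return float(np.mean((original - reconstruction) ** 2)) <= mse_limit

level = choose_level(cpu_load=0.9, battery=0.15, bandwidth_mbps=3.0)
print(level)  # 3: all three conditions are constrained

frame = np.random.rand(8, 8)
print(quality_ok(frame, frame + 0.001))  # True: tiny reconstruction error
```

In a closed loop, a failing quality check would push the controller back toward a lighter compression level on the next step.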
Target: macOS + Docker + ROS 2 Humble + Python venv
- Docker installed
- Homebrew + Python 3 installed
- Basic understanding of terminal and ROS 2
docker pull osrf/ros:humble-desktop

Build with Xvfb (GUI headless display):

docker build -t ros2_xvfb .

Run container with volume and full setup:
docker run -it \
--name ros2_ai \
-v ~/Documents/lydlr:/root/lydlr \
ros2_xvfb \
bash -c "export PYTHONPATH=\$PYTHONPATH:/root/lydlr/lydlr_ws/src:/root/lydlr/lydlr_ws/.venv/lib/python3.10/site-packages && \
export XDG_RUNTIME_DIR=/tmp/runtime-root && \
mkdir -p \$XDG_RUNTIME_DIR && chmod 700 \$XDG_RUNTIME_DIR && \
Xvfb :99 -screen 0 1024x768x24 & export DISPLAY=:99 && \
source /opt/ros/humble/setup.bash && \
cd /root/lydlr/lydlr_ws && exec bash"

Alternative (no Xvfb, uses host display):
docker run -it \
--name ros2_ai \
-e DISPLAY=host.docker.internal:0 \
-v ~/Documents/lydlr:/root/lydlr \
osrf/ros:humble-desktop

Re-enter running container:

docker exec -it ros2_ai bash -c "cd /root/lydlr/lydlr_ws && bash"

Restart and attach:

docker start -ai ros2_ai

Stop and remove container:
docker stop ros2_ai
docker rm ros2_ai

Create venv (run once):
cd ~/lydlr/lydlr_ws
python3 -m venv .venv

Activate venv:

source ~/lydlr/lydlr_ws/.venv/bin/activate

Source ROS 2 environment:

source /opt/ros/humble/setup.bash

Create workspace:
mkdir -p /root/lydlr/lydlr_ws/src
cd /root/lydlr/lydlr_ws

Create package:
cd src
ros2 pkg create --build-type ament_python \
--dependencies rclpy std_msgs sensor_msgs \
-- lydlr_ai

Build (after creating or editing any package/node):
cd /root/lydlr/lydlr_ws
colcon build --symlink-install --packages-select lydlr_ai

Source after build:

source install/setup.bash

Reactivate venv (if needed):

source .venv/bin/activate

Add entry to setup.py:
entry_points={
'console_scripts': [
'your_node_name = lydlr_ai.your_node_file_name:main',
],
}

Rebuild and source:
colcon build --symlink-install --packages-select lydlr_ai
source install/setup.bash

Environment setup before running:
export PYTHONPATH=$PYTHONPATH:/root/lydlr/lydlr_ws/src:/root/lydlr/lydlr_ws/.venv/lib/python3.10/site-packages
export XDG_RUNTIME_DIR=/tmp/runtime-root
mkdir -p $XDG_RUNTIME_DIR && chmod 700 $XDG_RUNTIME_DIR
Xvfb :99 -screen 0 1024x768x24 &
export DISPLAY=:99

Run node:

ros2 run lydlr_ai your_node_name

Start Xvfb display:

xvfb-run -s "-screen 0 1024x768x24" bash

Clean cache for package:

colcon build --packages-select lydlr_ai --cmake-clean-cache

Full clean:
cd ~/lydlr/lydlr_ws
rm -rf build/ install/ log/

Publish a test image message:

ros2 topic pub /camera/image_raw sensor_msgs/msg/Image "{
header: {frame_id: 'fake_camera'},
height: 3,
width: 2,
encoding: 'mono8',
is_bigendian: 0,
step: 2,
data: [0, 50, 100, 150, 200, 255]
}"VS Code: Reopen container
- Ctrl+Shift+P → "Docker: Reopen in Container"
- Terminal shows: "Connected to dev container: lydlr_ws"
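As a sanity check on the `ros2 topic pub` image example above: the Image fields must be mutually consistent (`step` equals width times bytes per pixel, and `data` must hold `height × step` bytes). A small Python check with the same values (pure illustration, no ROS required):

```python
def image_fields_consistent(height, width, encoding, step, data):
    """Verify sensor_msgs/msg/Image field invariants for a
    single-channel encoding like 'mono8' (1 byte per pixel)."""
    bytes_per_pixel = {"mono8": 1}[encoding]  # only mono8 needed here
    return step == width * bytes_per_pixel and len(data) == height * step

# Same values as the `ros2 topic pub` command: 3x2 mono8 image, 6 bytes.
print(image_fields_consistent(
    height=3, width=2, encoding="mono8", step=2,
    data=[0, 50, 100, 150, 200, 255],
))  # True
```

If the subscriber rejects the message, mismatched `step` or `data` length is the usual culprit.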
Terminal 1 — build & env
source /opt/ros/humble/setup.bash
colcon build --symlink-install --packages-select lydlr_ai
source install/setup.bash
source .venv/bin/activate
export PYTHONPATH=$PYTHONPATH:/root/lydlr/lydlr_ws/src:/root/lydlr/lydlr_ws/.venv/lib/python3.10/site-packages
export XDG_RUNTIME_DIR=/tmp/runtime-root
mkdir -p $XDG_RUNTIME_DIR && chmod 700 $XDG_RUNTIME_DIR
Xvfb :99 -screen 0 1024x768x24 &
export DISPLAY=:99

Run synthetic data publisher:

ros2 run lydlr_ai synthetic_multimodal_publisher.py

Terminal 2 — debugging

xvfb-run -s "-screen 0 1024x768x24" bash

VS Code:
- Ctrl+Shift+D → "Run & Debug"
- Select: "Debug ROS2 optimizer_node (launch)"
- Press F5 or green button
Add breakpoints in optimizer_node.py.
Stop both terminals with Ctrl+C when done.
Key files:
- optimizer_node.py: ros2/src/lydlr_ai/lydlr_ai/optimizer_node.py
- synthetic_publisher.py: ros2/src/lydlr_ai/lydlr_ai/synthetic_multimodal_publisher.py
- Use Debug Console to inspect variables
- Step over: F10
- Step in: F11
- Enable "justMyCode": false to debug libraries
This project is licensed under the GNU General Public License v3.0 (GPLv3).
You are free to use, copy, modify, and distribute this software under the terms of the GPLv3 license.
A copy of the GNU General Public License v3.0 is included in this repository or can be found at:
https://www.gnu.org/licenses/gpl-3.0.en.html
Disclaimer:
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.