PhD Thesis Prototype - Dominique S. Loyer
Citation Key: loyerModelingHybridSystem2025
Note
New in v2.2 (Jan 29, 2026):
- GraphRAG: Contextual memory from Knowledge Graph.
- Interactive Graph: D3.js visualization with physics and details on click.
- Cloud Ready: Docker & Supabase integration.
A neuro-symbolic AI system for verifying information credibility that combines:
- Symbolic AI: Rule-based reasoning with OWL ontologies (RDF/Turtle)
- Neural AI: Transformer models for sentiment analysis and NER
- IR Engine: BM25, TF-IDF, and PageRank estimation
The system provides explainable credibility scores (High/Medium/Low) with detailed factor breakdown.
Perfect for exploring the code and basic credibility checking without ML features:

```bash
pip install syscred
```

Includes PyTorch, Transformers, and all ML models for full credibility analysis:

```bash
pip install syscred[ml]
```

Includes ML, production tools, and development dependencies:

```bash
pip install syscred[all]
```

To run on Kaggle/Colab:

- Click the Kaggle or Colab badge above
- Enable GPU runtime
- Run All cells
```bash
# Clone the repository
git clone https://github.com/DominiqueLoyer/systemFactChecking.git
cd systemFactChecking/02_Code

# Run with Startup Script (Mac/Linux)
./start_syscred.sh

# Access at http://localhost:5001
```

```python
from syscred import CredibilityVerificationSystem

# Initialize
system = CredibilityVerificationSystem()

# Verify a URL
result = system.verify_information("https://www.lemonde.fr/article")
print(f"Score: {result['scoreCredibilite']} ({result['niveauCredibilite']})")

# Verify text directly
result = system.verify_information(
    "According to Harvard researchers, the new study shows significant results."
)
```

| Endpoint | Method | Description |
|---|---|---|
| `/api/verify` | POST | Full credibility verification |
| `/api/seo` | POST | SEO analysis only (faster) |
| `/api/ontology/stats` | GET | Ontology statistics |
| `/api/health` | GET | Server health check |
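The same endpoints can be called from Python using only the standard library. A minimal sketch: the `verify` helper is illustrative (not part of the project's API), and it assumes the server from the quick start is listening on `localhost:5000` as in the curl example below.

```python
import json
import urllib.request

def verify(input_data: str, base_url: str = "http://localhost:5000") -> dict:
    """POST to /api/verify with the 'input_data' payload key shown in this README."""
    req = urllib.request.Request(
        f"{base_url}/api/verify",
        data=json.dumps({"input_data": input_data}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Usage (requires a running server):
# result = verify("https://www.bbc.com/news/article")
# print(result["scoreCredibilite"], result["niveauCredibilite"])
```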
```bash
curl -X POST http://localhost:5000/api/verify \
  -H "Content-Type: application/json" \
  -d '{"input_data": "https://www.bbc.com/news/article"}'
```

Example response:

```json
{
  "scoreCredibilite": 0.78,
  "niveauCredibilite": "HIGH",
  "analysisDetails": {
    "sourceReputation": "High",
    "domainAge": 9125,
    "sentiment": {"label": "NEUTRAL", "score": 0.52},
    "entities": [{"word": "BBC", "entity_group": "ORG"}]
  }
}
```

```
systemFactChecking/
├── README.md                  # This file
├── 01_Presentations/          # Presentations (.pdf, .tex)
├── 02_Code/                   # Source Code & Docker
│   ├── syscred/               # CORE ENGINE (v2.2)
│   │   ├── graph_rag.py       # [NEW] GraphRAG Module
│   │   ├── verification_system.py
│   │   ├── database.py        # [NEW] Supabase Connector
│   │   └── ...
│   ├── start_syscred.sh       # Startup Script
│   ├── Dockerfile             # Deployment Config
│   └── requirements.txt
├── 03_Docs/                   # Documentation (.pdf)
└── 04_Bibliography/           # References (.bib, .pdf)
```
---
## 🔧 Configuration
Set environment variables or edit `02_Code/v2_syscred/config.py`:
```bash
# Optional: Google Fact Check API key
export SYSCRED_GOOGLE_API_KEY=your_key_here

# Server settings
export SYSCRED_PORT=5000
export SYSCRED_DEBUG=true
export SYSCRED_ENV=production  # or development, testing
```
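The variables above can be read with sensible defaults on the application side. A sketch, assuming the defaults shown in this README's examples; the actual parsing in `config.py` may differ:

```python
import os

def load_config(env=os.environ) -> dict:
    """Read SYSCRED_* environment variables, falling back to README defaults."""
    return {
        "google_api_key": env.get("SYSCRED_GOOGLE_API_KEY"),  # optional
        "port": int(env.get("SYSCRED_PORT", "5000")),
        "debug": env.get("SYSCRED_DEBUG", "false").lower() == "true",
        "env": env.get("SYSCRED_ENV", "development"),  # production|development|testing
    }
```

Accepting the environment mapping as a parameter keeps the loader easy to test without mutating the process environment.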
The system uses weighted factors to calculate credibility:
| Factor | Weight | Description |
|---|---|---|
| Source Reputation | 25% | Known credible sources database |
| Domain Age | 10% | WHOIS lookup for domain history |
| Sentiment Neutrality | 15% | Extreme sentiment = lower score |
| Entity Presence | 15% | Named entities (ORG, PER) |
| Text Coherence | 15% | Vocabulary diversity |
| Fact Check | 20% | Google Fact Check API results |
Thresholds:
- HIGH: Score ≥ 0.7
- MEDIUM: 0.4 ≤ Score < 0.7
- LOW: Score < 0.4
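The weighted aggregation and thresholds above can be sketched as follows. The weights and cutoffs come from the tables in this section; the function and factor names are illustrative, not the project's actual API:

```python
# Weights from the factor table above (sum to 1.0).
WEIGHTS = {
    "source_reputation":    0.25,
    "domain_age":           0.10,
    "sentiment_neutrality": 0.15,
    "entity_presence":      0.15,
    "text_coherence":       0.15,
    "fact_check":           0.20,
}

def credibility_score(factors: dict) -> float:
    """Weighted sum of per-factor scores, each assumed normalized to [0, 1]."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

def credibility_level(score: float) -> str:
    """Map a score to the HIGH/MEDIUM/LOW thresholds listed above."""
    if score >= 0.7:
        return "HIGH"
    if score >= 0.4:
        return "MEDIUM"
    return "LOW"
```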
- Modeling and Hybrid System for Verification of Sources Credibility (PDF)
- Ontology of a Verification System (PDF)
- Beamer Presentation - DIC9335 (PDF)
```bibtex
@software{loyer2025syscred,
  author    = {Loyer, Dominique S.},
  title     = {SysCRED: Neuro-Symbolic System for Information Credibility Verification},
  year      = {2026},
  publisher = {GitHub},
  url       = {https://github.com/DominiqueLoyer/systemFactChecking}
}
```

MIT License - See `LICENSE` for details.
| Version | Date | Changes |
|---|---|---|
| v2.0 | Jan 2026 | Complete rewrite with modular architecture, Kaggle/Colab support, REST API |
| v1.0 | Apr 2025 | Initial prototype with basic credibility scoring |