AI systems are very good at being helpful and terrible at noticing when "helpful" becomes "enabling harm." Good-Faith addresses this by teaching pattern recognition: passive voice that hides accountability, false collectives that manufacture consent, weaponized care that violates boundaries.
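As a minimal illustrative sketch only (the regexes, heuristics, and function names here are assumptions for illustration, not the project's actual detection logic or API), a rule-based flagger for two of these patterns might look like:

```python
import re

# Hypothetical, simplified heuristics; real detection would need far more nuance.
# Agentless passive voice: a form of "to be" + past participle with no "by <agent>".
PASSIVE_NO_AGENT = re.compile(
    r"\b(was|were|has been|have been)\s+\w+(ed|en)\b(?!\s+by\b)", re.IGNORECASE
)
# "False collective" phrasing that manufactures consent by asserting agreement.
FALSE_COLLECTIVE = re.compile(
    r"\b(we all|everyone)\s+(agrees?|knows?|wants?)\b", re.IGNORECASE
)

def flag_patterns(text: str) -> list[str]:
    """Return the names of the rhetorical patterns detected in `text`."""
    flags = []
    if PASSIVE_NO_AGENT.search(text):
        flags.append("passive voice hiding accountability")
    if FALSE_COLLECTIVE.search(text):
        flags.append("false collective manufacturing consent")
    return flags

print(flag_patterns("The data was deleted, and we all agree it is time to move on."))
# ['passive voice hiding accountability', 'false collective manufacturing consent']
```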
Intrinsic moral reflexes for AI systems: detect and prevent cruelty, protect the vulnerable, and initiate human intervention (where available) before harm occurs.
This repo provides an open-source decision engine with seven criteria for evaluating tech-ethics trade-offs, from inequality risk to legacy impact. It is built for quick peer adaptation and policy pilots.
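A hedged sketch of how such a criteria-weighted evaluation could be structured (only "inequality risk" and "legacy impact" are named above, so every other identifier, weight, and score here is an illustrative assumption, not the repo's actual design):

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float  # relative importance; values here are illustrative only

def evaluate(scores: dict[str, float], criteria: list[Criterion]) -> float:
    """Weighted average of per-criterion scores in [0, 1]; higher = riskier trade-off."""
    total_weight = sum(c.weight for c in criteria)
    return sum(scores[c.name] * c.weight for c in criteria) / total_weight

# Only two of the seven criteria are named in this README, so this example
# scores just those two; a full engine would define all seven.
criteria = [Criterion("inequality risk", 2.0), Criterion("legacy impact", 1.0)]
print(evaluate({"inequality risk": 0.8, "legacy impact": 0.2}, criteria))  # 0.6
```

The weighted-average form keeps each criterion's contribution transparent, which suits the README's goal of quick adaptation: a policy pilot can reweight or swap criteria without touching the scoring logic.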