Jscire0917 · USA

Pinned

  1. llm-safety-eval (Public)

    Model-agnostic FastAPI service for evaluating large language models for bias, toxicity, and hallucinations.

    Python
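
    The repository's code isn't shown here, but a minimal sketch of what a model-agnostic evaluation endpoint in FastAPI could look like follows. All names (the `/evaluate` route, request fields, and the placeholder toxicity scorer) are illustrative assumptions, not the project's actual API.

    ```python
    # Hypothetical sketch of a model-agnostic LLM evaluation service.
    # Endpoint names, request fields, and scoring logic are assumptions
    # for illustration, not the repository's actual code.
    from typing import Callable, Dict

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class EvalRequest(BaseModel):
        prompt: str      # input given to the model under test
        completion: str  # the model's response to evaluate

    class EvalResponse(BaseModel):
        scores: Dict[str, float]  # metric name -> score in [0, 1]

    def toxicity_score(text: str) -> float:
        """Placeholder scorer: a real service would call a classifier here."""
        flagged = {"hate", "slur"}
        words = text.lower().split()
        return sum(w in flagged for w in words) / max(len(words), 1)

    # Registry of metric functions; "bias" and "hallucination" scorers
    # would be registered alongside toxicity in the same way.
    METRICS: Dict[str, Callable[[str], float]] = {
        "toxicity": toxicity_score,
    }

    @app.post("/evaluate", response_model=EvalResponse)
    def evaluate(req: EvalRequest) -> EvalResponse:
        # Model-agnostic: only the completion text is scored, so output
        # from any LLM provider can be submitted.
        return EvalResponse(
            scores={name: fn(req.completion) for name, fn in METRICS.items()}
        )
    ```

    Saved as `main.py`, this sketch would run with `uvicorn main:app` and accept POSTs to `/evaluate`. Keeping scorers in a registry keyed by metric name is one way to stay decoupled from any particular model: the service only ever sees prompt/completion text.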