The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM observability all in one place.
Evaluate your LLM's responses with Prometheus and GPT-4 💯
[ICLR 2025] xFinder: Large Language Models as Automated Evaluators for Reliable Evaluation
xVerify: Efficient Answer Verifier for Reasoning Model Evaluations
CodeUltraFeedback: aligning large language models to coding preferences
The repository for the survey of bias and fairness in information retrieval (IR) with LLMs.
Official implementation for "MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?"
Code and data for "Timo: Towards Better Temporal Reasoning for Language Models" (COLM 2024)
Code and data for Koo et al's ACL 2024 paper "Benchmarking Cognitive Biases in Large Language Models as Evaluators"
First-of-its-kind AI benchmark for evaluating the protection capabilities of large language model (LLM) guard systems (guardrails and safeguards)
The official repository for our EMNLP 2024 paper, Themis: A Reference-free NLG Evaluation Language Model with Flexibility and Interpretability.
Harnessing Large Language Models for Curated Code Reviews
Root Signals Python SDK
A set of tools to create synthetically-generated data from documents
A set of examples demonstrating how to evaluate generative-AI-augmented systems using traditional information retrieval and LLM-as-a-judge validation techniques
A comprehensive study of the LLM-as-a-judge paradigm in a controlled setup that reveals new results about its strengths and weaknesses.
The official repository for our ACL 2024 paper: Are LLM-based Evaluators Confusing NLG Quality Criteria?
LLM-as-a-judge evals as Semantic Kernel plugins
MCP for Root Signals Evaluation Platform
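The repositories above share one core pattern: prompt a strong LLM to grade another model's output against a rubric and parse a score out of its reply. As a generic illustration only, here is a minimal sketch of that pattern; it is not taken from any repository listed here, and the judge model name, the 1-5 rubric, and the `judge` helper are all illustrative assumptions, using the OpenAI Python SDK.

```python
# Minimal, generic sketch of the llm-as-a-judge pattern: ask a strong LLM
# to grade an answer against a rubric, then parse a numeric score.
# The model name, rubric, and 1-5 scale are illustrative assumptions.
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """You are an impartial judge. Rate the ASSISTANT ANSWER
to the QUESTION on a 1-5 scale for factual accuracy and helpfulness.
Reply with a short justification, then a final line "Score: <1-5>".

QUESTION: {question}
ASSISTANT ANSWER: {answer}"""

def judge(question: str, answer: str) -> int:
    """Return the judge's 1-5 score for one (question, answer) pair."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed judge model; any strong LLM works
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(question=question, answer=answer),
        }],
        temperature=0,  # deterministic grading reduces score variance
    )
    text = response.choices[0].message.content
    match = re.search(r"Score:\s*([1-5])", text)
    if not match:
        raise ValueError(f"Judge returned no parseable score: {text!r}")
    return int(match.group(1))

# Example usage:
# print(judge("What is the capital of France?", "Paris."))  # e.g. 5
```

The projects listed above layer bias controls, reference-free rubrics, verifier models, and benchmarks on top of this basic loop.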