Turn everyday users into AI bias auditors and give researchers unprecedented data to make models more trustworthy.
We designed a platform that uses gamification to motivate users to discover and report AI bias, while providing researchers and AI teams with audit data to evaluate responsible AI performance and guide model refinement.
Product Type
Dashboard, AI
Responsibility
Research, Prototyping, User Testing
WeAudit’s scoreboard keeps users engaged by combining recognition and purpose. Points, badges, and leaderboards reward progress, fostering friendly competition and building auditor credibility.
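To make the mechanic concrete, here is a minimal sketch of how such a scoring scheme could work, with points feeding badge milestones and a leaderboard; the point values, badge names, and types are hypothetical illustrations, not WeAudit's actual implementation.

```typescript
// Hypothetical gamification model: point values, badge names, and
// thresholds are illustrative, not WeAudit's actual implementation.

interface Auditor {
  id: string;
  points: number;
  badges: string[];
}

// Different contributions earn different amounts of recognition.
const POINTS = {
  reportSubmitted: 10,
  reportUpvoted: 2,
  forumReply: 1,
} as const;

// Badge thresholds turn cumulative points into visible milestones.
const BADGES = [
  { name: "First Audit", minPoints: 10 },
  { name: "Bias Spotter", minPoints: 100 },
  { name: "Senior Auditor", minPoints: 500 },
];

// Award points for an action and grant any newly reached badges.
function award(auditor: Auditor, action: keyof typeof POINTS): void {
  auditor.points += POINTS[action];
  for (const badge of BADGES) {
    if (auditor.points >= badge.minPoints && !auditor.badges.includes(badge.name)) {
      auditor.badges.push(badge.name);
    }
  }
}

// Rank auditors by points for the leaderboard view.
function leaderboard(auditors: Auditor[], topN = 10): Auditor[] {
  return [...auditors].sort((a, b) => b.points - a.points).slice(0, topN);
}
```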
Users can enter a prompt, switch AI models to compare outputs, and uncover biases in how concepts are interpreted. Inspiration tags highlight potential harms, while example prompts guide meaningful comparisons.
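Under the hood, the comparison flow could be as simple as running one prompt through two model endpoints in parallel; the endpoint URL, request shape, and function names below are assumptions for illustration only, not the platform's real API.

```typescript
// Hypothetical comparison flow: one prompt, two models, paired outputs.
// The endpoint URL and response shape are assumed for illustration.

async function generate(model: string, prompt: string): Promise<string> {
  const res = await fetch("https://example.com/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt }),
  });
  const data = await res.json();
  return data.output as string;
}

// Run the same prompt through both models in parallel so the outputs
// can be placed side by side for inspection.
async function compareModels(prompt: string, modelA: string, modelB: string) {
  const [outputA, outputB] = await Promise.all([
    generate(modelA, prompt),
    generate(modelB, prompt),
  ]);
  return { prompt, outputs: { [modelA]: outputA, [modelB]: outputB } };
}
```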
By comparing two AI-generated outputs side by side, users can spot inconsistencies, assess potential harms, and make more informed evaluations. The scaffolded process provides prompts and categories to ensure a thorough, systematic audit, making AI bias detection more intuitive and actionable.
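As one possible representation, the scaffolded report could be captured in a fixed data shape so every submission is comparable; the field names and category list here are hypothetical sketches, not WeAudit's actual schema.

```typescript
// Hypothetical shape of a scaffolded audit report. Fixed harm
// categories and guided fields keep submissions systematic and
// comparable across users; all names are illustrative only.

type HarmCategory =
  | "stereotyping"
  | "underrepresentation"
  | "demeaning-depiction"
  | "other";

interface AuditReport {
  prompt: string;            // the prompt sent to both models
  models: [string, string];  // the two models being compared
  category: HarmCategory;    // chosen from the scaffolded categories
  description: string;       // the inconsistency or harm the user observed
  severity: 1 | 2 | 3;       // low / medium / high
  submittedAt: Date;         // when the audit was filed
}
```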
The Forum page fosters collaboration by enabling users to discuss AI bias findings, share insights, and engage in ethical AI conversations.
The dashboard gives researchers and AI teams a clear, data-driven overview of AI bias trends. Through visualizations and analytics, they can track patterns, monitor audit contributions, and surface actionable insights for building more transparent AI systems.
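For instance, a bias-trend chart could be driven by a simple aggregation over submitted reports; the row shape and function below are a sketch under assumed field names, not the platform's real analytics pipeline.

```typescript
// Hypothetical aggregation behind a bias-trend view: count reports per
// harm category so teams can see where issues cluster.

interface ReportRow {
  category: string;  // harm category from the scaffolded audit
  submittedAt: Date; // when the audit was filed
}

function countByCategory(reports: ReportRow[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const r of reports) {
    counts.set(r.category, (counts.get(r.category) ?? 0) + 1);
  }
  return counts;
}
```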
The table organizes audit findings into a structured, detailed format, making it easy to compare results, identify patterns, and filter key information.