News updates
Follow Savalera announcements, research milestones, and Arca product progress in one stream.
Latest Updates
Mar 18, 2026
Savalera Agents Hub is now Arca - your platform to bring AI business ideas to end users quickly
Arca gives your agents core business capabilities: a chat UI, chat history, user memory, authentication, RBAC, REST + WebSocket APIs, session management, observability, evals, and more.
Dec 16, 2025
Agents Hub Becoming the Core Product of Savalera
Our agent accelerator provides a chat UI, persistence, auth, evals, and more.
Nov 07, 2025
ISEAR Emotion Dataset and Classifier Model Released on HuggingFace
We've released our ISEAR-based emotion classifier and dataset to support research on emotional expression and to enable human-centered applications.
Sep 02, 2025
Savalera Agents Hub: Internal Deployment of X-Series Agents Now Live
We've deployed our first agent platform featuring ax1 and vx1 architectures for internal use across research, data analysis, and content creation workflows.
Jul 15, 2025
New Research Project: Automated Leadership Style Classification for Team Development
Savalera Lab launches investigation into natural language processing approaches for identifying leadership behavioral patterns to support coaching and team development applications.
Jul 09, 2025
Savalera X-Series Agents: Building Next-Generation Tool-Using Agent Architectures for Complex Problem Solving
Developing three distinct agent architectures—ax1, vx1, and hx1—as the foundation for our upcoming commercial agent offerings, applying project management methodologies to workflow planning and execution.
Jun 30, 2025
Complete Study Released: 'The Toxicity Echo Effect' - First Comprehensive Analysis of Harmful Language Spread in Language Model Conversations
Our full research paper reveals how language models systematically repeat toxic input rather than generating original harmful content, with implications for workplace health and AI safety protocols.
Jun 23, 2025
Complete Study Coming Next Week: Full Analysis of the Toxicity Echo Effect in Language Models
Our full research paper will reveal the complete methodology behind the toxicity echo effect discovery, including detailed statistical analysis, health impact assessment, and actionable recommendations for safe deployment.
May 22, 2025
Model-Specific Vulnerability Patterns Reveal Critical Safety Gaps as Enterprises Explore Open-Weight Models
Research across six open-weight language models shows distinct failure profiles, with some generating seven times more toxic responses per compromised dialogue, just as organizations increasingly evaluate medium-sized models for enterprise deployment.
May 21, 2025
AgentDialogues Framework Beta 1 Released: Open-Source Tool for Multi-Turn Language Model Research
We've open-sourced the research framework powering our toxicity studies, enabling researchers to conduct controlled multi-turn conversations between language models with built-in analytics and dataset generation.
May 08, 2025
Our Research Connects Language Model Toxicity to $2B Daily Workplace Productivity Losses
Analysis reveals how the toxicity echo effect in language models may amplify existing workplace incivility that already affects 98% of employees and triggers measurable physiological stress responses.
Apr 15, 2025
Breakthrough: We've Identified the 'Toxicity Echo Effect' in Language Model Conversations
Our new research reveals that 96.77% of toxic language model failures involve systematic repetition of harmful input, with compromised dialogues averaging 2.2 toxic responses per conversation.
Mar 28, 2025
Early Toxicity Research Results Show 58x Imbalance Between Language Models
Preliminary findings from 850 simulated conversations reveal language models' strong resistance to generating new toxic content, but also critical vulnerabilities in how they process and repeat harmful input without circuit-breaking mechanisms.
Feb 26, 2025
Savalera Lab Launches First Research Project on LLM Toxicity Dynamics
We've launched a new research initiative studying how large language models respond to sustained toxic input across multi-turn conversations. Here's what we're testing — and why.
Feb 17, 2025
Announcing the Savalera Lab
We are launching the Savalera Lab to research AI agents, their behavior, decision-making, evaluation, ethics and security, while integrating AI into the core of our product work.
Feb 05, 2025
Reflections on Building a Creative AI Agent for NFTs
Looking back on our experience building an AI agent for creativity — what we learned, what worked, and where we go next.
Jan 19, 2025
Our Research Focus: Understanding AI Agents as Systems and Behaviors
AI agents are more than just tools — they could help us explore human behavior, creativity, and the future of collaboration. This is what we believe at Savalera.
Need specific updates?
For product-specific details, visit Arca. For long-form material, see the docs and research sections.