News and Announcements

Sep 02, 2025

Savalera Agents Hub: Internal Deployment of X-Series Agents Now Live

We've deployed our first agent platform featuring ax1 and vx1 architectures for internal use across research, data analysis, and content creation workflows.

Jul 15, 2025

New Research Project: Automated Leadership Style Classification for Team Development

Agentic Lab launches investigation into natural language processing approaches for identifying leadership behavioral patterns to support coaching and team development applications.

Jul 09, 2025

Savalera X-Series Agents: Building Next-Generation Tool-Using Agent Architectures for Complex Problem Solving

We're developing three distinct agent architectures—ax1, vx1, and hx1—as the foundation for our upcoming commercial agent offerings, applying project management methodologies to workflow planning and execution.

Jun 30, 2025

Complete Study Released: 'The Toxicity Echo Effect' - First Comprehensive Analysis of Harmful Language Spread in Language Model Conversations

Our full research paper reveals how language models systematically repeat toxic input rather than generating original harmful content, with implications for workplace health and AI safety protocols.

Jun 23, 2025

Complete Study Coming Next Week: Full Analysis of the Toxicity Echo Effect in Language Models

Our full research paper will reveal the complete methodology behind the toxicity echo effect discovery, including detailed statistical analysis, health impact assessment, and actionable recommendations for safe deployment.

May 22, 2025

Model-Specific Vulnerability Patterns Reveal Critical Safety Gaps as Enterprises Explore Open-Weight Models

Research across six open-weight language models shows distinct failure profiles, with some generating seven times more toxic responses per compromised dialogue. The findings arrive as organizations increasingly evaluate medium-sized models for enterprise deployment.

May 21, 2025

AgentDialogues Framework Beta 1 Released: Open-Source Tool for Multi-Turn Language Model Research

We've open-sourced the research framework powering our toxicity studies, enabling researchers to conduct controlled multi-turn conversations between language models with built-in analytics and dataset generation.

May 08, 2025

Our Research Connects Language Model Toxicity to $2B Daily Workplace Productivity Losses

Analysis reveals how the toxicity echo effect in language models may amplify existing workplace incivility that already affects 98% of employees and triggers measurable physiological stress responses.

Apr 15, 2025

Breakthrough: We've Identified the 'Toxicity Echo Effect' in Language Model Conversations

Our new research reveals that 96.77% of toxic language model failures involve systematic repetition of harmful input, with compromised dialogues averaging 2.2 toxic responses per conversation.

Mar 28, 2025

Early Toxicity Research Results Show 58x Imbalance Between Language Models

Preliminary findings from 850 simulated conversations reveal that language models strongly resist generating new toxic content, yet lack circuit-breaking mechanisms, leaving critical vulnerabilities in how they process and repeat harmful input.

Feb 26, 2025

Agentic Lab Launches First Research Project on LLM Toxicity Dynamics

We've launched a new research initiative studying how large language models respond to sustained toxic input across multi-turn conversations. Here's what we're testing, and why.

Feb 17, 2025

Announcing the Savalera Agentic Lab

We are launching the Savalera Agentic Lab to research AI agents—their behavior, decision-making, evaluation, ethics, and security—while integrating AI into the core of our consulting work.

Feb 05, 2025

Reflections on Building a Creative AI Agent for NFTs

Looking back on our experience building an AI agent for creativity — what we learned, what worked, and where we go next.

Jan 19, 2025

Our Research Focus: Understanding AI Agents as Systems and Behaviors

AI agents are more than just tools — they could help us explore human behavior, creativity, and the future of collaboration. This is what we believe at Savalera.