Sep 02, 2025
We've deployed our first agent platform featuring ax1 and vx1 architectures for internal use across research, data analysis, and content creation workflows.
Jul 15, 2025
Agentic Lab launches investigation into natural language processing approaches for identifying leadership behavioral patterns to support coaching and team development applications.
Jul 09, 2025
Developing three distinct agent architectures—ax1, vx1, and hx1—as the foundation for our upcoming commercial agent offerings, applying project management methodologies to workflow planning and execution.
Jun 30, 2025
Our full research paper reveals how language models systematically repeat toxic input rather than generating original harmful content, with implications for workplace health and AI safety protocols.
Jun 23, 2025
Our full research paper will reveal the complete methodology behind the toxicity echo effect discovery, including detailed statistical analysis, health impact assessment, and actionable recommendations for safe deployment.
May 22, 2025
Research across six open-weight language models shows distinct failure profiles, with some generating seven times more toxic responses per compromised dialogue as organizations increasingly evaluate medium-sized models for enterprise deployment.
May 21, 2025
We've open-sourced the research framework powering our toxicity studies, enabling researchers to conduct controlled multi-turn conversations between language models with built-in analytics and dataset generation.
May 08, 2025
Analysis reveals how the toxicity echo effect in language models may amplify existing workplace incivility that already affects 98% of employees and triggers measurable physiological stress responses.
Apr 15, 2025
Our new research reveals that 96.77% of toxic language model failures involve systematic repetition of harmful input, with compromised dialogues averaging 2.2 toxic responses per conversation.
Mar 28, 2025
Preliminary findings from 850 simulated conversations reveal language models' strong resistance to generating new toxic content, but critical vulnerabilities in how they process and repeat harmful input without circuit-breaking mechanisms.
Feb 26, 2025
We've launched a new research initiative studying how large language models respond to sustained toxic input across multi-turn conversations. Here's what we're testing — and why we do it.
Feb 17, 2025
We are launching the Savalera Agentic Lab to research AI agents, their behavior, decision-making, evaluation, ethics and security, while integrating AI into the core of our consulting work.
Feb 05, 2025
Looking back on our experience building an AI agent for creativity — what we learned, what worked, and where we go next.
Jan 19, 2025
AI agents are more than just tools — they could help us explore human behavior, creativity, and the future of collaboration. This is what we believe at Savalera.