Savalera Agents Hub: Internal Deployment of X-Series Agents Now Live
We've deployed our first agent platform featuring ax1 and vx1 architectures for internal use across research, data analysis, and content creation workflows.
Follow our client work and research.
The Agentic Lab launches an investigation into natural language processing approaches for identifying leadership behavioral patterns, supporting coaching and team development applications.
We're developing three distinct agent architectures (ax1, vx1, and hx1) as the foundation for our upcoming commercial agent offerings, applying project management methodologies to workflow planning and execution.
Our full research paper reveals how language models systematically repeat toxic input rather than generating original harmful content, with implications for workplace health and AI safety protocols.
Our full research paper will reveal the complete methodology behind the toxicity echo effect discovery, including detailed statistical analysis, health impact assessment, and actionable recommendations for safe deployment.
Research across six open-weight language models shows distinct failure profiles, with some generating seven times more toxic responses per compromised dialogue, a finding that matters as organizations increasingly evaluate medium-sized models for enterprise deployment.
We've open-sourced the research framework powering our toxicity studies, enabling researchers to conduct controlled multi-turn conversations between language models with built-in analytics and dataset generation.
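To make the setup concrete, here is a minimal, hypothetical sketch of a controlled multi-turn conversation loop between two language models with simple toxicity counting. It does not use the open-sourced framework's actual API; the function names, message format, and the stub is_toxic classifier are assumptions for illustration only.

    # Hypothetical sketch: alternate replies between two "models" (any callables
    # mapping a conversation history to a reply) and collect simple analytics.
    # This is NOT the Savalera framework's API; names and structure are assumed.
    from typing import Callable, Dict, List

    Message = Dict[str, str]  # {"speaker": ..., "text": ...}

    def run_dialogue(
        model_a: Callable[[List[Message]], str],
        model_b: Callable[[List[Message]], str],
        opening_prompt: str,
        turns: int = 6,
        is_toxic: Callable[[str], bool] = lambda text: False,  # stand-in classifier
    ) -> Dict[str, object]:
        """Run a controlled multi-turn exchange and count flagged responses."""
        history: List[Message] = [{"speaker": "model_a", "text": opening_prompt}]
        toxic_count = 0
        for turn in range(turns):
            speaker, model = (
                ("model_b", model_b) if turn % 2 == 0 else ("model_a", model_a)
            )
            reply = model(history)
            history.append({"speaker": speaker, "text": reply})
            if is_toxic(reply):
                toxic_count += 1
        return {"history": history, "toxic_responses": toxic_count}

    if __name__ == "__main__":
        # Stub models so the sketch runs without external dependencies.
        echo_model = lambda history: f"Echoing: {history[-1]['text']}"
        polite_model = lambda history: "Let's keep this conversation constructive."
        result = run_dialogue(echo_model, polite_model, "Hello there.", turns=4)
        for message in result["history"]:
            print(f"{message['speaker']}: {message['text']}")
        print("Toxic responses:", result["toxic_responses"])

In a real study, the two callables would wrap model endpoints and is_toxic would be a proper toxicity scorer; the transcripts and per-dialogue counts could then feed dataset generation and analysis.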
Analysis reveals how the toxicity echo effect in language models may amplify existing workplace incivility that already affects 98% of employees and triggers measurable physiological stress responses.
Our new research reveals that 96.77% of toxic language model failures involve systematic repetition of harmful input, with compromised dialogues averaging 2.2 toxic responses per conversation.
Preliminary findings from 850 simulated conversations show that language models strongly resist generating new toxic content, yet reveal critical vulnerabilities in how they process and repeat harmful input in the absence of circuit-breaking mechanisms.
We've launched a new research initiative studying how large language models respond to sustained toxic input across multi-turn conversations. Here's what we're testing, and why.
We are launching the Savalera Agentic Lab to research AI agents' behavior, decision-making, evaluation, ethics, and security, while integrating AI into the core of our consulting work.
Looking back on our experience building an AI agent for creativity — what we learned, what worked, and where we go next.
AI agents are more than just tools — they could help us explore human behavior, creativity, and the future of collaboration. This is what we believe at Savalera.