Stack · Transparency

The Zyphh AI Automation Stack: Every Tool We Use and Why

Full transparency on our 2025 stack: models, orchestration, retrieval, guardrails, evals, monitoring, hosting, and the reasons behind each choice.

8 min read · Security-first tooling
7 layers: Model → Observability
Quarterly bake-offs
Security: private-first
Compliance: controls baked in

Our stack, layer by layer

Models. Claude/GPT/Gemini mix by task; local LLMs for data residency and cost control.
Orchestration. n8n/Make for business flows; custom services for scale and latency.
Retrieval. Vector DB + hybrid search; schema-first document prep and evals.
Guardrails. Prompt policies, refusals, redaction, approvals, and audit logs.
Evals. Automated checks for accuracy, safety, latency, and regressions.
Monitoring. Traces, cost alerts, drift detection, and feedback loops.
Hosting. Private cloud, VPC peering, and on-prem options when required.
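The retrieval layer above pairs a vector DB with keyword search. One common way to fuse the two result lists is Reciprocal Rank Fusion (RRF); the sketch below is illustrative, with made-up document IDs rather than output from any particular vector DB or keyword index.

```python
# Sketch of hybrid retrieval: fuse keyword and vector rankings with
# Reciprocal Rank Fusion (RRF). Doc IDs are illustrative placeholders;
# in practice they come from a full-text index and a vector DB.

def rrf_fuse(rankings, k=60):
    """Combine ranked lists of doc IDs into one fused ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc_a", "doc_b", "doc_c"]   # e.g. from BM25 search
vector_hits  = ["doc_b", "doc_d", "doc_a"]   # e.g. from embedding similarity

fused = rrf_fuse([keyword_hits, vector_hits])
print(fused)  # docs appearing in both lists rise to the top
```

Documents ranked well by both retrievers accumulate score from each list, which is why hybrid search tends to beat either method alone.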

Why this stack works

How we tune per client

  1. Discover constraints (compliance, budgets, latency, data sensitivity).
  2. Pick model/orchestration combos with guardrails aligned to policy.
  3. Run evals and pilots; monitor cost and quality.
  4. Document and hand off with playbooks, alerts, and rollback plans.
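Step 2 above can be sketched as a constraint-driven lookup. The thresholds and model names below are illustrative assumptions, not an actual routing table; the point is that discovered constraints (residency, latency budget) mechanically narrow the model choice.

```python
# Hypothetical sketch of constraint-driven model selection (step 2).
# Tier names and the 300 ms cutoff are illustrative, not real policy.

def pick_model(constraints: dict) -> str:
    if constraints.get("data_residency") == "on_prem":
        return "local-llm"           # sensitive data never leaves the boundary
    if constraints.get("latency_ms", 1000) < 300:
        return "small-hosted-model"  # tight latency budget favors speed
    return "frontier-model"          # default: strongest reasoning

print(pick_model({"data_residency": "on_prem"}))  # → local-llm
```

Encoding the rules this way also makes them easy to test and audit during hand-off.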
Tooling is only as good as the guardrails and measurement behind it. We design for both.

FAQ

Can you use our tools?

Yes, provided they meet our security and reliability standards; we adapt our patterns to your stack.

Do you replace tools often?

Quarterly bake-offs decide; we swap when performance or cost changes materially.

How do you ensure safety?

Policy prompts, redaction, approvals, logging, evals, and human QA before scale.
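The redaction step can be sketched as a pre-send masking pass. The patterns below (emails and US-style phone numbers) are a minimal illustration, not a complete PII filter; production redaction needs much broader coverage.

```python
import re

# Minimal sketch of pre-send redaction: mask emails and US-style phone
# numbers before a prompt leaves the trust boundary. Patterns are
# illustrative only; real PII coverage is far wider.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Reach Ana at ana@example.com or 555-123-4567."))
# → Reach Ana at [EMAIL] or [PHONE].
```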

What about observability?

Tracing, cost controls, incident playbooks, and continuous feedback channels.