LLM agents and production systems at GriffinAI

Role: Senior AI/ML Engineer

Stack: Python, Docker, CI/CD, AWS, containerized services, Slack/Telegram integrations

Outcomes

  • Transaction Execution Agent — hybrid LLM pipelines; ~2× faster average responses (TODO: exact latency baseline); lower token cost via model routing and LLM call optimizations; safe tool execution via tool contracts, guardrails, and human-in-the-loop (HITL) approval
  • Cardano Proposal Examiner — shipped v1 in ~2 weeks; governance knowledge graph with graph-of-thought–style structured reasoning for greater transparency and control
  • Goal-driven autonomous multi-agent system with ops alerts and output channels (Telegram/Twitter)
  • Tool contracts, approval gates, and audit logs for high-stakes agent actions
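The contract-plus-gate pattern in the last two bullets can be sketched as below. This is a minimal illustration, not GriffinAI's actual implementation: the names (`ToolContract`, `requires_approval`, `submit_tx`) and the transaction handler are hypothetical.

```python
import json
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolContract:
    """Illustrative tool contract: schema check, HITL approval gate, audit log."""
    name: str
    handler: Callable[[dict], dict]
    required_fields: list
    requires_approval: bool = False  # gate high-stakes actions behind a human
    audit_log: list = field(default_factory=list)

    def execute(self, args: dict, approved: bool = False) -> dict:
        # Guardrail 1: reject calls missing contract-required fields
        missing = [f for f in self.required_fields if f not in args]
        if missing:
            return self._record("rejected", args, f"missing fields: {missing}")
        # Guardrail 2: high-stakes actions need explicit human sign-off
        if self.requires_approval and not approved:
            return self._record("pending_approval", args, "awaiting human sign-off")
        result = self.handler(args)
        return self._record("executed", args, json.dumps(result))

    def _record(self, status: str, args: dict, detail: str) -> dict:
        entry = {"tool": self.name, "status": status, "args": args,
                 "detail": detail, "ts": time.time()}
        self.audit_log.append(entry)  # append-only audit trail
        return entry

# Hypothetical high-stakes tool: submitting an on-chain transaction
submit_tx = ToolContract(
    name="submit_tx",
    handler=lambda a: {"tx_id": "0xabc", "amount": a["amount"]},
    required_fields=["amount", "recipient"],
    requires_approval=True,
)

print(submit_tx.execute({"amount": 5}))                       # rejected: missing recipient
print(submit_tx.execute({"amount": 5, "recipient": "addr"}))  # pending human approval
print(submit_tx.execute({"amount": 5, "recipient": "addr"}, approved=True))
```

Every call lands in the audit log regardless of outcome, which is the property that makes post-hoc review of agent actions possible.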

Context: GriffinAI, LLM Agents & Web3 (2024–Present). Owned end-to-end architecture and delivery; aligned cross-functional stakeholders and drove execution under ambiguity.

Production deployments run as containerized services with secure cloud integrations. Token-cost and latency metrics come from internal routing/optimization work; exact baselines are TODO for the public case study.
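The routing idea behind the token-cost reduction can be sketched as follows. The internal routing logic is not public, so the model names, prices, and complexity heuristic here are placeholder assumptions, not real figures.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # placeholder pricing, not a real rate card

CHEAP = Model("small-model", 0.0002)
STRONG = Model("large-model", 0.0030)

# Assumed heuristic: escalate prompts that look multi-step or domain-heavy
COMPLEX_MARKERS = ("multi-step", "smart contract", "governance")

def route(prompt: str) -> Model:
    """Send short/simple prompts to the cheap model; escalate complex ones."""
    complex_hint = any(m in prompt.lower() for m in COMPLEX_MARKERS)
    return STRONG if complex_hint or len(prompt) > 2000 else CHEAP

def estimated_cost(prompt: str, expected_output_tokens: int = 500) -> float:
    """Rough cost estimate: ~4 chars per token plus expected output tokens."""
    model = route(prompt)
    tokens = len(prompt) // 4 + expected_output_tokens
    return tokens / 1000 * model.cost_per_1k_tokens

print(route("What is my wallet balance?").name)           # small-model
print(route("Plan a multi-step swap across pools").name)  # large-model
```

Routing the bulk of simple traffic to the cheap model is what lowers average token cost without degrading the hard cases.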