Blackline AI

Technology Stack

The tools and frameworks we use to build production AI systems.

Agent Orchestration

  • LangChain — Framework for building LLM applications
  • LangGraph — Stateful, multi-agent workflows
  • CrewAI — Multi-agent collaboration framework
  • OpenAI Structured Outputs — Type-safe LLM responses
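Structured outputs keep LLM responses type-safe by validating them against a schema before anything downstream consumes them. As an illustrative sketch (not any framework's actual API), here is the idea in plain Python, using a hypothetical `Invoice` schema:

```python
import json
from dataclasses import dataclass

# Hypothetical target schema for an extraction task.
@dataclass
class Invoice:
    vendor: str
    total: float
    currency: str

def parse_invoice(raw: str) -> Invoice:
    """Validate a model's JSON output against the Invoice schema,
    failing loudly instead of passing malformed data downstream."""
    data = json.loads(raw)
    return Invoice(
        vendor=str(data["vendor"]),
        total=float(data["total"]),
        currency=str(data["currency"]),
    )

# A well-formed model response parses into a typed object;
# a missing field raises immediately instead of corrupting state.
response = '{"vendor": "Acme Corp", "total": 1249.5, "currency": "USD"}'
invoice = parse_invoice(response)
```

In production, the same guarantee comes from the provider's structured-output mode plus a schema library, but the contract is identical: reject malformed responses at the boundary.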

RAG & Vector Databases

  • pgvector — PostgreSQL extension for vector search
  • Pinecone — Managed vector database
  • Weaviate — Open-source vector database
  • Qdrant — Vector similarity search engine
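All four tools above solve the same core problem: given a query embedding, return the stored documents whose embeddings are closest. A minimal sketch of that retrieval step, with made-up document IDs and toy three-dimensional embeddings standing in for real model vectors:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, docs, k=2):
    """Rank stored documents by similarity to the query embedding."""
    ranked = sorted(docs.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy index: doc IDs mapped to (tiny, illustrative) embeddings.
docs = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-times": [0.1, 0.8, 0.2],
    "api-reference": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # e.g. the embedding of "how do I get a refund?"
top_k(query, docs, k=1)  # → ["refund-policy"]
```

A vector database adds approximate-nearest-neighbor indexing, filtering, and persistence on top of this, but the ranking logic is the same.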

Observability & Evaluation

  • Langfuse — LLM observability and analytics
  • Arize Phoenix — LLM evaluation and monitoring
  • OpenAI Evals — Evaluation framework
  • RAGAS — RAG evaluation metrics
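RAG evaluation boils down to scoring how well retrieval and generation performed on labeled examples. As a simplified stand-in for RAGAS-style retrieval metrics (not the library's actual implementation), here is a context-precision score: the fraction of retrieved chunks that are actually relevant.

```python
def context_precision(retrieved, relevant):
    """Fraction of retrieved chunks that appear in the relevant set —
    a simplified stand-in for RAGAS-style retrieval metrics."""
    if not retrieved:
        return 0.0
    hits = sum(1 for chunk in retrieved if chunk in relevant)
    return hits / len(retrieved)

# Hypothetical eval example: three chunks retrieved, one is relevant.
retrieved = ["refund-policy", "shipping-times", "api-reference"]
relevant = {"refund-policy", "returns-faq"}
context_precision(retrieved, relevant)  # → 1/3
```

Tracking a handful of metrics like this over a fixed eval set is what turns "the RAG pipeline feels better" into a number you can regression-test.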

Safety & Guardrails

  • NVIDIA NeMo Guardrails — Safety framework
  • Llama Guard — Content moderation
  • Custom validation — Domain-specific rules
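The custom-validation layer is the simplest of the three to illustrate: a set of domain rules applied to a draft response before it reaches the user. A minimal sketch, with made-up rules for demonstration:

```python
import re

# Hypothetical domain rules: block PII-shaped strings and
# phrasing a regulated deployment must never emit.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-shaped numbers
    re.compile(r"\bguaranteed returns?\b", re.I),  # financial-advice phrasing
]

def passes_guardrails(text: str) -> bool:
    """Return False if the draft response trips any domain rule."""
    return not any(pattern.search(text) for pattern in BLOCKED_PATTERNS)

passes_guardrails("Your order ships Tuesday.")           # → True
passes_guardrails("Invest now for guaranteed returns!")  # → False
```

Frameworks like NeMo Guardrails and Llama Guard handle the harder, semantic cases; rule-based checks like this catch the cheap, deterministic ones first.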

Model Serving

  • vLLM — High-throughput LLM serving
  • Text Generation Inference (TGI) — Hugging Face inference server
  • NVIDIA Triton — Multi-framework inference server

Cloud GenAI

  • Azure OpenAI — Microsoft's managed OpenAI service
  • Amazon Bedrock — AWS's managed foundation model service
  • Google Vertex AI — Google Cloud AI platform

Workflow Automation

  • n8n — Open-source workflow automation
  • Make (formerly Integromat) — Visual automation platform
  • Zapier — Integration platform

Document Processing

  • AWS Textract — Document text extraction
  • Google Cloud Vision — OCR and image analysis
  • Python PDF parsers — Custom document processing
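Whatever extracts the raw text (Textract, Cloud Vision, or a PDF parser), the custom-processing step that follows is usually the same: split the document into overlapping chunks ready for embedding and indexing. A minimal sketch of that step:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50):
    """Split extracted document text into overlapping word-based chunks —
    the usual step between text extraction and vector indexing.
    Overlap keeps sentences that straddle a boundary retrievable."""
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        window = words[start:start + size]
        if window:
            chunks.append(" ".join(window))
        if start + size >= len(words):
            break
    return chunks
```

Real pipelines often chunk on structural boundaries (pages, headings, paragraphs) instead of fixed word counts, but the overlap-and-slide pattern is the common core.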

Why This Stack?

We choose tools that are production-ready, well-documented, and have strong community support. Our stack prioritizes observability, safety, and scalability from day one.

Every project includes proper monitoring, evaluation, and guardrails—not as an afterthought, but as core requirements for production AI systems.