1. Add target agent (if you haven't already)

from maihem import Maihem

maihem_client = Maihem()

maihem_client.add_target_agent(
    name="financial-assistant-x",
    label="Financial Assistant Company X", # Optional
    role="AI Financial Assistant",
    description="An AI assistant that provides information and summaries from financial documents."
    language="en" # (Optional) Default is "en" (English), follow ISO 639
)
2. Add a decorator to each step of your workflow

This is an example of a basic RAG workflow. Add a decorator to each step of the workflow as shown below.

See a full list of supported evaluators and metrics and their required input and output maps.
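
In the examples below, string values such as input_query="message" refer to a step's arguments by name, and the output lambda maps the step's return value to the field being evaluated. The sketch below is illustrative only: if a step returns something richer than a plain string (here, hypothetically, a dict with an "answer" key), point the lambda at the field that holds the answer.

from maihem.evaluators import AnswerGenerationEvaluator

# Step returns the answer string directly: the identity lambda is enough
evaluator = AnswerGenerationEvaluator(
    input_query="message",              # name of the argument holding the user query
    input_contexts="contexts",          # name of the argument holding the retrieved contexts
    output_answer=lambda x: x           # the return value is the answer itself
)

# Hypothetical variant: the step returns a dict, so extract the answer field
evaluator = AnswerGenerationEvaluator(
    input_query="message",
    input_contexts="contexts",
    output_answer=lambda x: x["answer"] # map the returned dict to its answer field
)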

Example agent workflow in Python
from typing import List

import maihem
from maihem.evaluators import EndToEndEvaluator, AnswerGenerationEvaluator, ContextRetrievalEvaluator

class Agent:

    @maihem.workflow_step(
        target_agent_name="financial-assistant-x",
        workflow_name="2 step RAG workflow",
        evaluator=EndToEndEvaluator(
            input_query="message",
            output_answer=lambda x: x # Map the output of your function that contains the answer
        )
    )
    def run_workflow(self, conversation_id: str, message: str) -> str:
        """Trigger the workflow to generate a response."""
        contexts = self.context_retrieval(conversation_id, message)
        answer = self.generate_answer(conversation_id, message, contexts)
        return answer

    @maihem.workflow_step(
        name="Context Retrieval", # (Optional) Name of the step to be displayed in Maihem's dashboard
        evaluator=ContextRetrievalEvaluator(
            input_query="message",
            output_contexts=lambda x: x # Map the output of your function that contains the retrieved contexts
        )
    )
    def context_retrieval(self, conversation_id: str, message: str) -> List[str]:
        """Retrieve a list of chunks to be used as context for the LLM."""
        contexts = retrieve_contexts(message) # Example retrieval call (e.g. vector search)
        return contexts

    @maihem.workflow_step(
        name="Answer Generation", # (Optional) Name of the step to be displayed in Maihem's dashboard
        evaluator=AnswerGenerationEvaluator(
            input_query="message",
            input_contexts="contexts",
            output_answer=lambda x: x # Map the output of your function that contains the answer
        )
    )
    def generate_answer(self, conversation_id: str, message: str, contexts: List[str]) -> str:
        """Generate a response using a list of retrieved contexts."""
        answer = call_llm(message, contexts) # Example of an LLM call
        return answer
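
To start sending production data to Maihem, call the decorated entry point from your application as you normally would. Below is a minimal, hypothetical invocation of the workflow above; the conversation ID and message are placeholder values, and retrieve_contexts and call_llm stand in for your own retrieval and LLM functions.

Example workflow invocation in Python
agent = Agent()

response = agent.run_workflow(
    conversation_id="conv-123", # Placeholder conversation ID
    message="What was the total revenue reported in the latest quarterly filing?"
)
print(response)
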
3. Monitor workflow in Maihem UI

Go to your Maihem account, where you can analyze failures that occurred in your workflow in production.