
AI Procurement Decision Support

Reduced analyst time spent on data retrieval. Natural language interface opened procurement data to non-technical stakeholders. Full audit trail maintained on every query and response.


Analyst interface: plain-language queries return structured answers with per-claim source citations

// The Challenge

Procurement analysts at complex organisations spend most of their working time retrieving and cross-referencing data rather than making decisions. Supplier records, compliance documentation, and historical performance data live in separate systems with inconsistent formats, making any cross-dataset question a multi-hour manual exercise. Critical procurement decisions are routinely made on incomplete information because full retrieval takes too long.

// Our Approach

Built a natural language interface over a structured retrieval pipeline that queries multiple procurement data sources simultaneously, reconciles schema differences at the retrieval layer, and presents answers with source citations and confidence indicators. The system is designed explicitly as an analyst augmentation tool: it surfaces evidence and provenance; the analyst retains final judgement. Every query and response is logged to an immutable audit trail, satisfying compliance requirements for procurement decisions.

// System Modules
// RETRIEVAL PIPELINE

Structured retrieval pipeline

Multi-source document and database retrieval with schema normalisation

Ingests supplier records, tender histories, compliance certificates, and performance scorecards from heterogeneous source systems. A normalisation layer reconciles inconsistent field names, date formats, and categorical encodings before any retrieval occurs, ensuring queries that span multiple sources return coherent, comparable results.
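The normalisation step described above can be sketched as follows. This is an illustrative example only: the field aliases, date formats, and function names are hypothetical, not the production schema.

```python
# Sketch of the schema normalisation layer (all field names and
# mappings here are illustrative, not the deployed schema).
from datetime import date, datetime

# Map each source system's field names onto a shared canonical schema.
FIELD_ALIASES = {
    "supplier_name": {"SupplierName", "vendor", "VENDOR_NM"},
    "registered_on": {"reg_date", "RegisteredDate", "DT_REG"},
    "status": {"Status", "supplier_status", "STAT_CD"},
}

# Accept the date formats seen across source systems.
DATE_FORMATS = ("%Y-%m-%d", "%d/%m/%Y", "%d-%b-%Y")

def normalise_record(raw: dict) -> dict:
    """Rename fields to canonical names and parse dates consistently."""
    out = {}
    for canonical, aliases in FIELD_ALIASES.items():
        for key in raw:
            if key == canonical or key in aliases:
                out[canonical] = raw[key]
                break
    # Normalise date strings into date objects so cross-source
    # comparisons are well-defined.
    if isinstance(out.get("registered_on"), str):
        for fmt in DATE_FORMATS:
            try:
                out["registered_on"] = datetime.strptime(
                    out["registered_on"], fmt).date()
                break
            except ValueError:
                continue
    return out
```

Running normalisation before indexing means every downstream query operates on one coherent schema, regardless of which system a record came from.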

[Diagram] Supplier Records · Compliance Docs · Performance Data → Schema Normalisation → Vector Indexing Layer → Retrieval Engine: Query → Chunk Retrieval → Re-ranking → Context Assembly → LLM Prompt
Retrieval pipeline: three heterogeneous data sources normalised into a unified index before context assembly for the LLM

Key Capabilities

  • Connectors for structured database and document repository sources
  • Schema normalisation across inconsistent supplier record formats
  • Semantic chunking of long-form compliance documents for dense retrieval
  • Metadata tagging for source attribution on every retrieved chunk
  • Incremental index updates on new document ingestion
  • Query rewriting for improved retrieval precision on ambiguous intent
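The chunking and source-attribution capabilities above can be illustrated with a minimal sketch. The class and parameter names are assumptions for illustration, not the actual implementation.

```python
# Hypothetical sketch: split long-form documents into overlapping
# chunks, each tagged with enough metadata to cite its source.
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    text: str
    source_id: str      # document or record identifier
    source_system: str  # e.g. "compliance_docs"
    span: tuple         # (start, end) character offsets in the source

def chunk_document(doc_id: str, system: str, text: str,
                   size: int = 400, overlap: int = 50) -> list:
    """Overlapping fixed-size chunks; overlap preserves context that
    would otherwise be cut at a chunk boundary."""
    chunks, start = [], 0
    while start < len(text):
        end = min(start + size, len(text))
        chunks.append(Chunk(text[start:end], doc_id, system, (start, end)))
        if end == len(text):
            break
        start = end - overlap
    return chunks
```

Because every chunk carries its source identifier and character span, any claim in a generated answer can be traced back to the exact passage that supports it.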
// QUERY INTERFACE

Natural language query interface

Plain-language procurement queries with cited, confidence-scored answers

Analysts type questions in plain English: supplier comparisons, compliance status checks, historical anomaly queries, or vendor risk assessments. The system decomposes the query, retrieves relevant evidence from the normalised index, and returns a structured answer with each claim linked to its source document and a confidence indicator. Non-technical stakeholders can use the same interface without training.
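The structured, per-claim answer shape described above might look like the following sketch. The field names and threshold are assumptions, not the actual API.

```python
# Minimal sketch of a structured answer with per-claim citations
# and confidence indicators (field names are illustrative).
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    source_id: str     # citation back to the original document
    confidence: float  # retrieval-quality score in [0, 1]

@dataclass
class Answer:
    query: str
    claims: list

    def low_confidence_claims(self, threshold: float = 0.6) -> list:
        """Flag claims the analyst should verify manually."""
        return [c for c in self.claims if c.confidence < threshold]
```

Keeping citations and confidence at the claim level, rather than per answer, is what lets an analyst accept part of a response while checking the rest against source documents.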

Analyst interface: natural language query bar with structured response, source citations, and confidence indicators per claim

Key Capabilities

  • Natural language query parsing with intent classification
  • Multi-hop reasoning across supplier, compliance, and performance data
  • Per-claim source citation linked to original document
  • Confidence scoring based on retrieval quality
  • Comparison queries: rank vendors on specified criteria
  • Clarification prompts for ambiguous or underspecified queries
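A comparison query of the kind listed above reduces to a weighted ranking once criteria scores are normalised. The following is a hedged sketch; the criteria names and weights are hypothetical.

```python
# Illustrative vendor-comparison ranking: each vendor is a dict of
# criterion -> normalised score in [0, 1], plus identifying fields.
def rank_vendors(vendors: list, weights: dict) -> list:
    """Return vendors sorted by weighted score, best first."""
    def score(v):
        # Only criteria present in `weights` contribute to the score.
        return sum(w * v.get(c, 0.0) for c, w in weights.items())
    return sorted(vendors, key=score, reverse=True)
```

The interface would translate a plain-English request such as "rank vendors on price and compliance" into a weight dict before calling a ranking step like this one.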
// AUDIT TRAIL

Audit and compliance trail

Immutable record of every query, retrieval, and response

Every interaction is logged: query text, retrieved chunks, LLM response, analyst identity, and timestamp. The log is append-only and structured for export to compliance reporting formats. Supervisors can review any decision for which procurement data was queried, inspect the evidence the system surfaced, and verify the analyst's determination against the AI-provided context.
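One way to make such a log append-only and tamper-evident is a hash chain, where each entry includes the hash of its predecessor. This is a minimal sketch assuming a hash-chain design; the source does not specify the actual logging mechanism.

```python
# Sketch of an append-only, tamper-evident audit log via a hash
# chain; entry fields mirror the logged items described above.
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    def __init__(self):
        self._entries = []  # in-memory for the sketch; persisted in practice

    def append(self, analyst: str, query: str, chunks: list,
               response: str) -> dict:
        prev = self._entries[-1]["hash"] if self._entries else "0" * 64
        entry = {
            "analyst": analyst,
            "query": query,
            "chunks": chunks,  # full retrieval context, not just the answer
            "response": response,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edit to a past entry breaks it."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Storing the retrieved chunk set inside each entry is what lets a compliance reviewer later reconstruct exactly what evidence the system had at query time.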

[Diagram] USER → QUERY → RETRIEVAL → CONTEXT → LLM → RESPONSE → AUDIT LOG (query · chunks · response · identity · timestamp) → EXPORT / REPORT
Audit chain: every query, the retrieval context used, and the LLM response are captured as a single immutable log entry exportable for compliance review

Key Capabilities

  • Append-only query log with analyst identity and timestamp
  • Full retrieval context stored per query
  • Exportable audit report per decision or time window
  • Role-based access: analyst vs. compliance review vs. admin
  • Retention policy configuration for long-running programmes
  • Zero external dependency: all logging local to deployment
// Technical Complexity
  • Hallucination mitigation in a procurement context is non-trivial: the model must correctly distinguish between a supplier having no compliance record vs. a record simply not yet ingested into the index, requiring explicit handling of absence vs. missing evidence at the retrieval layer.
  • Confidence scoring must reflect retrieval quality rather than model output probability alone, requiring a scoring layer that measures chunk relevance, source coverage, and query-answer alignment separately from the LLM's self-assessed certainty.
  • The audit trail must capture the full retrieval context at query time so a compliance reviewer can verify what evidence the system had access to, not just what answer it produced. This requires storing the complete retrieved chunk set alongside the response rather than just the final output.
  • Schema reconciliation across procurement data sources with inconsistent categorical encodings (e.g., "Active Supplier", "Approved", "Whitelisted" all meaning the same thing across different systems) requires a normalisation step that does not conflate legitimately distinct categories.
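The second point above, scoring confidence from retrieval signals rather than the model's self-assessed certainty, can be sketched as a simple weighted combination. The component weights and saturation point are illustrative assumptions, not the deployed scoring function.

```python
# Hedged sketch: confidence built from retrieval signals
# (chunk relevance, source coverage, query-answer alignment),
# independent of the LLM's own probability estimates.
def retrieval_confidence(chunk_scores: list, n_sources: int,
                         answer_overlap: float) -> float:
    """Combine retrieval-quality signals into one [0, 1] indicator.

    chunk_scores   -- relevance scores of the retrieved chunks
    n_sources      -- count of distinct source documents cited
    answer_overlap -- fraction of answer terms grounded in chunks
    """
    if not chunk_scores:
        # No evidence retrieved: report zero confidence rather than
        # letting the model's fluency stand in for certainty.
        return 0.0
    relevance = sum(chunk_scores) / len(chunk_scores)
    coverage = min(n_sources / 3, 1.0)  # saturates at 3 independent sources
    return round(0.5 * relevance + 0.2 * coverage + 0.3 * answer_overlap, 3)
```

Returning zero when nothing is retrieved also addresses the absence-vs-missing-evidence problem above: an empty evidence set is surfaced as "no support found" rather than a confidently worded answer.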
// Stack & Methods
LLM · RAG · Python · LangChain · Vector DB · FastAPI · PostgreSQL · Audit Logging · Enterprise AI