May 15, 2026

Choose columns for blob storage exports

Niklas Semmler

Pick which field groups (input/output, metadata, usage, tools, …) are written to each row in scheduled S3, GCS, and Azure exports. Shrink files and drop fields you don't want to land in your warehouse.

You can now select which column groups land in your scheduled blob storage exports. Eleven groups cover the enriched observations row — toggle off the ones you don't need on a per-integration basis in Project Settings → Integrations → Blob Storage.

Concrete cases this unlocks:

  • Drop metadata for privacy. Keep user data out of your warehouse without filtering downstream.
  • Drop io to shrink files. Inputs and outputs are usually the largest columns; deselecting them produces dramatically smaller exports for cost or latency analytics.
  • Drop tools and prompt when your downstream consumer only needs traces, timings, and cost.

The core group (id, trace_id, start_time, end_time, project_id, parent_observation_id, type) is required and always exported. The other ten groups — basic, time, io, metadata, model, usage, prompt, metrics, tools, trace_context — are individually toggleable. Existing integrations continue to export all groups; no action needed unless you want to narrow the schema.
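The selection semantics above — core always exported, only the ten named groups toggleable — can be modeled with a short sketch. This is illustrative only, not Langfuse's implementation; the `effective_groups` helper is a hypothetical name:

```python
# Columns in the required "core" group, always present in every export row.
CORE_COLUMNS = ["id", "trace_id", "start_time", "end_time",
                "project_id", "parent_observation_id", "type"]

# The ten individually toggleable field groups, in schema order.
TOGGLEABLE_GROUPS = ["basic", "time", "io", "metadata", "model",
                     "usage", "prompt", "metrics", "tools", "trace_context"]

def effective_groups(selected):
    """Return the field groups an export would contain for a given selection.

    Core is always included; unknown group names are rejected.
    """
    unknown = set(selected) - set(TOGGLEABLE_GROUPS)
    if unknown:
        raise ValueError(f"unknown field groups: {sorted(unknown)}")
    return ["core"] + [g for g in TOGGLEABLE_GROUPS if g in selected]

# Example: a cost-analytics export that drops io, metadata, prompt, tools.
print(effective_groups(["time", "usage"]))  # → ['core', 'time', 'usage']
```

Deselecting everything still yields a valid export: `effective_groups([])` returns just `['core']`, matching the "required and always exported" rule.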

Pricing enrichment (input_price, output_price, total_price, usage_pricing_tier_name) is gated on the usage group, so deselecting usage also skips the worker-side model lookup.
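That gating rule can be stated as a one-line predicate. A minimal sketch under the assumptions above; `should_enrich_pricing` is a hypothetical name, not actual worker code:

```python
# Pricing columns (input_price, output_price, total_price,
# usage_pricing_tier_name) are only computed when "usage" is selected,
# so the worker can skip the model lookup entirely otherwise.
def should_enrich_pricing(selected_groups):
    return "usage" in selected_groups

print(should_enrich_pricing(["usage", "time"]))  # → True
print(should_enrich_pricing(["time", "io"]))     # → False
```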

Field groups apply to the Enriched observations export source (and the enriched portion of the combined legacy + enriched source). The legacy Traces and observations source still uses its fixed column set.

The same controls are available on the REST API via exportSource and exportFieldGroups on GET/PUT /api/public/integrations/blob-storage.
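A sketch of what such an update request body might look like. The endpoint path and the `exportSource` / `exportFieldGroups` field names come from the announcement; the exact payload shape, the `"enriched-observations"` value, and the auth scheme are assumptions — check the API reference for the real contract:

```python
import json

# Hypothetical PUT body narrowing an integration to a
# cost/latency-analytics schema (no io, metadata, prompt, or tools).
payload = {
    "exportSource": "enriched-observations",  # assumed identifier
    "exportFieldGroups": ["basic", "time", "model", "usage", "metrics"],
}

# Sent to PUT /api/public/integrations/blob-storage, e.g. with requests:
#   requests.put(f"{host}/api/public/integrations/blob-storage",
#                auth=(public_key, secret_key), json=payload)
print(json.dumps(payload, indent=2))
```

A subsequent GET on the same endpoint would then report the narrowed `exportFieldGroups`, which is a convenient way to verify the change from CI.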
