Import Stage
The Import stage loads a local file into your pipeline context under a field name you choose. Use it to bring CSV, Excel, or JSON data into the workflow so downstream stages can filter, transform, or visualize it.
What the stage does
- Supported formats — Parses .csv, .xlsx, and .json files. Other extensions in the picker are ignored by the parser.
- Context output — Writes the imported dataset to the pipeline context under the specified field name.
- Field name validation — Must not be empty, may only include letters, numbers, and underscores, and cannot start with a space or number.
- Grid-friendly cloning — CSV/Excel imports are deep-cloned to keep the grid data immutable across runs.
- Progress feedback — Logs errors for a missing file or an invalid field name; marks the stage as successful on a valid import.
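The field-name rule above can be captured in a single regular expression. This is a minimal sketch, not the product's actual validator; the function name `isValidFieldName` is an assumption for illustration.

```typescript
// Hypothetical validator for the rule described above: non-empty,
// letters/numbers/underscores only, and not starting with a number.
// (The character restriction also rules out leading spaces.)
function isValidFieldName(name: string): boolean {
  return /^[A-Za-z_][A-Za-z0-9_]*$/.test(name);
}
```

Names like sales or orders_q1 pass; an empty string, a name with spaces, or a name starting with a digit is rejected.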
Configure the Import stage
- Add an Import stage to your pipeline.
- Enter a Field Name.
- Click Upload File and choose a .csv, .xlsx, or .json file.
- Run the stage (or Run All) to load the file. The dataset is stored in the context under your field name.
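The steps above amount to: pick a parser by file extension, then write the result into the context under the chosen field name. The sketch below illustrates that flow under assumed shapes — `PipelineContext` and `importFile` are hypothetical names, and the CSV handling is deliberately naive; the real parser handles quoting, and .xlsx requires a binary parser that plain string handling can't illustrate here.

```typescript
// Assumed shape: the pipeline context is a plain field-name -> data map.
type PipelineContext = Record<string, unknown>;

function importFile(
  ctx: PipelineContext,
  fieldName: string,
  fileName: string,
  contents: string
): void {
  const ext = fileName.slice(fileName.lastIndexOf(".")).toLowerCase();
  let data: unknown;
  if (ext === ".json") {
    data = JSON.parse(contents);
  } else if (ext === ".csv") {
    // Naive split for illustration only; a real CSV parser handles
    // quoted fields, escaped commas, and type inference.
    const rows = contents.trim().split("\n").map((line) => line.split(","));
    // Grid data is deep-cloned so later edits don't mutate the import.
    data = structuredClone(rows);
  } else {
    // .xlsx needs a binary parser; other extensions are ignored.
    throw new Error(`Unsupported extension: ${ext}`);
  }
  ctx[fieldName] = data;
}
```

After `importFile(ctx, "sales", "sales.csv", …)`, downstream stages would read `ctx["sales"]`.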
Example: load sales.csv
- File: sales.csv
- Field Name: sales
After running the stage, the pipeline context contains sales, with the imported rows ready for Filter, Sort, Visualize, or other downstream stages.
Tips for reliable imports
- Use clear field names: Pick a concise name that reflects the dataset (e.g., orders_q1).
- Validate early: Follow with a Validate stage to check required columns or types before filtering.
- Branch if needed: If you want both raw and cleaned versions, import once, then write filtered results to a different context path.
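The "validate early" tip can be as simple as checking that required columns exist before any filtering runs. This is a hypothetical sketch — `hasRequiredColumns` is not part of the product, and it assumes rows arrive as plain objects keyed by column name.

```typescript
// Hypothetical required-column check to run right after an Import stage.
// Assumes each row is an object keyed by column name.
function hasRequiredColumns(
  rows: Record<string, unknown>[],
  required: string[]
): boolean {
  if (rows.length === 0) return false; // nothing imported
  return required.every((col) => col in rows[0]);
}
```

A Validate stage built on a check like this fails fast with a clear message instead of letting a later Filter stage produce empty or wrong results.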