Gridscript

No-Code Tools for Pipelines

Pipelines in Gridscript can run powerful no-code stages so you can prepare, enrich, and validate data without writing a single line of code. Each stage runs in sequence, shares the same context, and can be combined with scripting stages whenever you need custom logic.

How no-code pipeline stages work

  • Shared context: Each stage reads from and writes to the pipeline context so downstream stages use the latest data.
  • Code optional: Mix these stages with scripting when you need dynamic logic or custom visualizations.
  • Run all or step-by-step: Execute individual stages while configuring them, or run the entire pipeline to reproduce the workflow end-to-end.
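The execution model above can be sketched in plain Python. The stage bodies, dataset, and helper names here are invented for illustration; in Gridscript, stages are configured in the UI rather than written as code.

```python
# A minimal sketch of how no-code stages share one pipeline context.

def import_stage(ctx):
    # Load a (hard-coded) dataset under a target name in the context.
    ctx["orders"] = [
        {"id": 1, "total": 40},
        {"id": 2, "total": 15},
    ]

def filter_stage(ctx):
    # Read the latest data and write the filtered result back,
    # so downstream stages see the updated dataset.
    ctx["orders"] = [row for row in ctx["orders"] if row["total"] >= 20]

pipeline = [import_stage, filter_stage]

def run_all(pipeline):
    """Run every stage in sequence against a fresh context."""
    ctx = {}
    for stage in pipeline:
        stage(ctx)
    return ctx

def run_one(pipeline, index, ctx):
    """Run a single stage while configuring it, reusing the current context."""
    pipeline[index](ctx)
    return ctx

print(run_all(pipeline))  # {'orders': [{'id': 1, 'total': 40}]}
```

Running `run_one` repeatedly mirrors step-by-step configuration, while `run_all` reproduces the workflow end-to-end from a clean context.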

Stage catalog

The following no-code stages are available in Pipelines. You can chain them together to build repeatable, auditable workflows.

  • Import — Load data from files such as CSV, Excel, or JSON into the pipeline context. Choose a target name so later stages can reference the imported dataset.
  • Transform — Apply structured changes to your dataset: clean headers, format values, normalize text, or compute derived columns.
  • Filter — Keep only the rows that meet your conditions. Combine multiple rules with AND/OR logic and apply them to any column.
  • Sort — Reorder rows by one or more columns. Supports numeric, text, and date-aware sorting so results stay consistent across runs.
  • Merge — Join two datasets in the pipeline context using inner, left, right, or outer joins on matching keys to create a unified table.
  • Path — Run a JSONPath expression against the pipeline context to select or reshape data. Use it to extract nested fields, route subsets to new context paths, or branch datasets without duplicating content.
  • Visualize — Quickly render tables or charts from any dataset in the context. Great for sanity checks, sharing trends, or validating transformations visually.
  • Validate — Add quality gates by defining rules that data must satisfy. Flag missing values, type mismatches, out-of-range numbers, or duplicates before moving on.
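To make the catalog concrete, here are rough Python equivalents of what the Filter, Sort, Merge, and Path stages do to tabular data. The sample data, rule format, and the simplified path helper are assumptions for this sketch, not Gridscript's actual configuration format.

```python
# Illustrative equivalents of a few catalog stages.

orders = [
    {"id": 1, "region": "east", "total": 120, "customer": "a"},
    {"id": 2, "region": "west", "total": 30,  "customer": "b"},
    {"id": 3, "region": "east", "total": 55,  "customer": "c"},
]
customers = [{"customer": "a", "name": "Acme"},
             {"customer": "c", "name": "Cogs"}]

# Filter: combine multiple rules with AND logic (OR would use `any`).
rules = [lambda r: r["region"] == "east", lambda r: r["total"] > 50]
filtered = [r for r in orders if all(rule(r) for rule in rules)]

# Sort: reorder by more than one column (region ascending, total descending).
ordered = sorted(orders, key=lambda r: (r["region"], -r["total"]))

# Merge: a left join on the `customer` key; unmatched rows keep null fields.
lookup = {c["customer"]: c for c in customers}
merged = [{**r, **lookup.get(r["customer"], {"name": None})} for r in orders]

# Path: select nested data from the context (a simplified stand-in
# for a real JSONPath engine).
context = {"datasets": {"orders": orders}}
def get_path(obj, path):
    for key in path.strip("$.").split("."):
        obj = obj[key]
    return obj
subset = get_path(context, "$.datasets.orders")

print([r["id"] for r in filtered])  # [1, 3]
print([r["id"] for r in ordered])   # [1, 3, 2]
print(merged[1]["name"])            # None
```

The key point is that each stage takes a dataset from the context and leaves a new dataset behind, which is what makes the stages freely chainable.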

Designing reliable no-code pipelines

  • Start with Import: Load the source data first so every downstream stage works from a clear baseline.
  • Validate early and late: Use Validate before merges to catch schema issues, and again before outputs to confirm final quality.
  • Visualize checkpoints: Drop in Visualize stages after major steps to confirm the pipeline is producing the expected output.
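The "validate early and late" pattern can be sketched as two quality gates sharing one rule helper. The rule names, data, and thresholds below are illustrative assumptions.

```python
# Sketch of validation gates before a merge and before the final output.

def validate(rows, rules):
    """Return the rows that violate each rule, keyed by rule name."""
    failures = {}
    for name, check in rules.items():
        bad = [r for r in rows if not check(r)]
        if bad:
            failures[name] = bad
    return failures

raw = [{"id": 1, "total": 40}, {"id": 2, "total": None}]

# Early gate: catch schema issues (missing values) before merging.
early = validate(raw, {"total present": lambda r: r["total"] is not None})
clean = [r for r in raw if r["total"] is not None]

# ... Transform / Merge stages would run here ...

# Late gate: confirm final quality (in-range numbers) before output.
late = validate(clean, {"total in range": lambda r: 0 <= r["total"] <= 10_000})

print(bool(early), bool(late))  # True False
```

The early gate flags the bad row before it can poison a join; the late gate confirms that the surviving rows meet the output contract.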