Every integration engineer has done this dance:
- change a mapping
- run the pipeline
- wait
- see a failure
- change a field
- run again
- wait again
- discover the failure is in a completely different step
It’s not that we love it. It’s just how pipelines have traditionally been tested: in production-like conditions, with real dependencies, and a lot of luck.
January’s update is about replacing luck with something more respectable:
Testing Mode in the Pipeline Builder — powered by Pinned Data.
What is Testing Mode?
Testing Mode lets you run pipeline logic without affecting actual pipeline execution.
That means you can validate:
- routing
- filtering
- enrichment
- merge behavior
- connector payload structure
…without accidentally pushing real updates to a marketplace or polluting production logs with “test objects”.
The results are still observable (tracked like normal executions), but they’re flagged as test runs.
Pinned Data: the concept (simple and powerful)
Pinned Data is a mechanism that replaces the live output of a processor with fixed mock data.
In normal runs:
- a processor does its real job (HTTP call, connector sync, etc.)
- output depends on external systems, network, timing
In test runs with pinned data:
- the processor’s real work is skipped
- the output becomes deterministic
- downstream steps can be tested with certainty
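The mechanism can be sketched in a few lines. Everything below is illustrative: the `Processor` class and its fields are hypothetical stand-ins for however Qilin models processors internally, not its actual API.

```python
# Hypothetical sketch of the Pinned Data idea: in a test run, a processor's
# real work is skipped and its output is replaced by fixed mock data.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Processor:
    name: str
    run: Callable[[dict], dict]            # the processor's real work (HTTP call, sync, ...)
    pinned_output: Optional[dict] = None   # fixed mock data used in test runs

    def execute(self, payload: dict, test_mode: bool = False) -> dict:
        if test_mode and self.pinned_output is not None:
            return self.pinned_output      # deterministic: real work is skipped
        return self.run(payload)           # normal run: depends on external systems

# Example: a processor that would normally call an external service
fetch_price = Processor(
    name="fetch-price",
    run=lambda p: {"price": 9.99},         # stands in for a live HTTP call
    pinned_output={"price": 10.00},
)

assert fetch_price.execute({}, test_mode=True) == {"price": 10.00}  # pinned
assert fetch_price.execute({}) == {"price": 9.99}                   # live
```

The point of the sketch: once the pinned value wins over the live call, every downstream step sees the same input on every test run.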
This is the classic integration engineer’s dream:
> “Let me test the next steps, even if the external API is flaky today.”
Three testing scenarios (so you can test like a grown-up)
1) Test the full pipeline end-to-end (with pinned entry data)
You provide pinned data for the entry processor, simulating the trigger object.
Result: the pipeline runs end-to-end on a controlled input, producing deterministic outputs.
2) Test a single processor using real historical context
You provide a previous Pipeline Execution ID.
Qilin loads the execution context from that run and re-executes only the processor you’re testing.
This is perfect when:
- you changed an enrichment rule
- you adjusted a filter predicate
- you want to replay one step against real production-like data
3) Test a processor in isolation (no historical context needed)
You provide pinned input directly for the processor.
Result: fast, isolated, great for validating configs and behavior without waiting for upstream steps.
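The three scenarios can be summarized in a toy harness. To be clear, the helper names (`run_pipeline`, `replay_step`, `run_step`) and the payload shapes are invented for illustration; they are not Qilin's actual functions.

```python
# Illustrative sketch only: these helpers mimic the three testing scenarios.

def run_pipeline(steps, pinned_entry):
    """Scenario 1: end-to-end run seeded with pinned entry data."""
    payload = dict(pinned_entry)
    for step in steps:
        payload = step(payload)
    return payload

def replay_step(step, execution_context):
    """Scenario 2: re-execute one step against a recorded execution context."""
    return step(dict(execution_context))

def run_step(step, pinned_input):
    """Scenario 3: run one step in isolation on pinned input."""
    return step(dict(pinned_input))

# Toy steps standing in for real processors
enrich = lambda p: {**p, "enriched": True}
route = lambda p: {**p, "route": "marketplace-A" if p.get("enriched") else "hold"}

result = run_pipeline([enrich, route], {"sku": "ABC-1"})
assert result == {"sku": "ABC-1", "enriched": True, "route": "marketplace-A"}
assert replay_step(route, {"sku": "ABC-1", "enriched": True})["route"] == "marketplace-A"
assert run_step(enrich, {"sku": "ABC-1"})["enriched"] is True
```

Same steps in all three cases; what changes is where the input comes from: a pinned trigger object, a recorded execution context, or pinned input for a single step.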
A very real use case: “test enrichment without calling the pricing API”
You want to verify enrichment logic that depends on a pricing service.
But the pricing API is unstable today.
Pinned data lets you fake the pricing output so you can test:
- calculations
- mapping
- downstream connector payloads
No external dependency. No waiting. No chaos.
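As a concrete sketch of that use case, the snippet below pins a pricing payload so the enrichment logic can be exercised without touching the pricing API. The payload shape, field names, and `enrich_with_margin` function are assumptions made up for the example.

```python
# Hedged sketch: pinned data stands in for the (flaky) pricing API, so the
# enrichment calculation downstream of it can be tested deterministically.

PINNED_PRICING = {"sku": "ABC-1", "base_price": 100.0, "currency": "EUR"}

def enrich_with_margin(pricing: dict, margin: float = 0.2) -> dict:
    """Enrichment logic under test: applies a margin to the base price."""
    return {**pricing, "sell_price": round(pricing["base_price"] * (1 + margin), 2)}

# Test run: the pricing API is never called; the pinned payload replaces it.
enriched = enrich_with_margin(PINNED_PRICING)
assert enriched["sell_price"] == 120.0
assert enriched["currency"] == "EUR"
```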
Why this matters (especially to agencies)
Agencies live and die by delivery speed and reliability.
Testing Mode enables a cleaner workflow:
- build pipeline logic
- test with pinned scenarios
- validate edge cases
- only then turn on live execution
That’s how you reduce go-live risk without multiplying custom tooling.
For developers
- deterministic tests for pipeline logic
- faster iteration loops
- easier debugging (“same input, same output”)
- safer experimentation with advanced processors (merge, switch-case, enrichment, HTTP calls)
For merchants and investors
- fewer production incidents caused by “untested config changes”
- faster onboarding because pipelines can be validated before real data flows
- stronger platform trust: changes are intentional and verifiable
What’s next
February will continue the “operational control” theme:
- expanded UI management for queue storage and credentials
- more advanced routing and processor configuration in the portal
Because once pipelines grow, operations teams need knobs, not prayers.
Testing shouldn’t require bravery
The traditional way of testing pipelines is stressful because it mixes two activities:
- validating logic
- operating production
Testing Mode separates them.
And honestly? That’s long overdue.