
Pipeline API

A pipeline combines all essential components, such as the input and output connectors and the target data model. It enables you to automate the process of importing data into your target system.

Do you also want to create pipelines within your system?

Currently, you cannot create pipelines via our API.

For the operations documented below, use this base URL and append the corresponding endpoint:

Base URL

api-gateway.getnuvo.com/dp/api/v1/

Update

Endpoint

PUT /pipeline/{id}

Payload

Attributes

name

The name of the pipeline

configuration

Defines the specific setup of your pipeline

input_connectors

The list of all input connectors used for this pipeline. Currently, we only support one input connector per pipeline. Find out more about connectors here

output_connectors

The list of all output connectors used for this pipeline. Currently, we only support one output connector per pipeline. Find out more about connectors here

mapping_config

Defines how the input columns are mapped to the target data model columns and how their values are transformed to meet the requirements of the target data model

mode

Defines whether nuvo AI is used to map input columns that haven’t been mapped yet to the output columns during future executions:

  • DEFAULT: nuvo AI is applied to unmapped input columns
  • EXACT: Only already mapped columns are used

mappings

The list of all target data model columns with their mapped input columns and applied transformations

source_columns

The columns from the input data mapped to the target_column

target_column

An output column from the given target data model

transformations

The transformations applied to map the input columns to the output column in the correct format

name

The name of the applied transformation

type

The type of transformation applied:

  • HYPER_FORMULA

function

The code or formula of the transformation, provided as a string
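For illustration, a single entry of mappings could look like the following sketch, written as a TypeScript object literal. The column names, the transformation name, and the formula syntax used in function are hypothetical; consult the transformation documentation for the exact format.

// Hypothetical mappings entry (illustrative only): two assumed input columns
// are combined into one target column via a HyperFormula-style expression.
const mappingEntry = {
  source_columns: ["first_name", "last_name"], // assumed input column names
  target_column: "full_name",                  // assumed target data model column
  transformations: [
    {
      name: "concatenate names",               // free-text label for the transformation
      type: "HYPER_FORMULA",
      // Assumed formula syntax; the exact form expected in "function" may differ.
      function: 'CONCATENATE({first_name}, " ", {last_name})',
    },
  ],
};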

tdm

The ID of the set target data model

error_config

Defines how the pipeline should handle errors that might occur during pipeline execution

error_threshold

A number between 0 and 100, representing the allowed percentage of erroneous cells during a pipeline execution. For example, if it is set to 10, it means that pipeline executions with less than 10% erroneous cells will be considered successful and will not fail.
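As a quick illustration of the threshold rule described above, here is a minimal sketch (not part of the API) that checks whether an execution stays below the configured error_threshold:

// Minimal sketch of the error_threshold rule: an execution passes if its
// share of erroneous cells stays below the configured threshold.
function withinErrorThreshold(
  erroneousCells: number,
  totalCells: number,
  errorThreshold: number // 0-100, as configured in error_config
): boolean {
  if (totalCells === 0) return true;
  const errorPercentage = (erroneousCells / totalCells) * 100;
  return errorPercentage < errorThreshold;
}

// Example: 8 erroneous cells out of 100 with a threshold of 10 passes.
console.log(withinErrorThreshold(8, 100, 10)); // true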

schedule_config

Defines when the pipeline is executed for the first and last time, as well as the interval at which it is executed

frequency

Sets how often the pipeline is executed. It works in combination with interval. For example, if frequency is set to HOURLY and interval is set to 2, the pipeline is executed every 2 hours:

  • HOURLY
  • DAILY
  • WEEKLY
  • MONTHLY

interval

Sets the interval based on the frequency at which the pipeline is executed. For example, if interval is set to 2 and frequency is set to HOURLY, the pipeline is executed every 2 hours. The next execution cannot be scheduled further into the future than 1 year from the set start date and time

starts_on

The date and time when the pipeline is first executed, provided as a timestamp in UTC (e.g. 2024-09-02T13:26:13.642Z). The date and time cannot be in the past

ends_on

The date and time when the pipeline is last executed, provided as a timestamp in UTC (e.g. 2024-09-02T13:26:13.642Z). This date and time cannot be earlier than the start date and time
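To make the schedule semantics concrete, here is a minimal sketch that lists the execution times implied by a schedule_config, assuming the pipeline runs at starts_on and then every interval × frequency until ends_on. The helper names and the month approximation are ours, not part of the API.

// Sketch of the schedule semantics: first run at starts_on, then every
// interval × frequency until ends_on. Illustrative only.
type Frequency = "HOURLY" | "DAILY" | "WEEKLY" | "MONTHLY";

const HOURS_PER: Record<Frequency, number> = {
  HOURLY: 1,
  DAILY: 24,
  WEEKLY: 24 * 7,
  MONTHLY: 24 * 30, // rough approximation; real months vary in length
};

function executionTimes(frequency: Frequency, interval: number, startsOn: string, endsOn: string): Date[] {
  const stepMs = HOURS_PER[frequency] * interval * 60 * 60 * 1000;
  const end = new Date(endsOn).getTime();
  const runs: Date[] = [];
  for (let t = new Date(startsOn).getTime(); t <= end; t += stepMs) {
    runs.push(new Date(t));
  }
  return runs;
}

// frequency HOURLY with interval 2: one run every 2 hours between starts_on and ends_on.
console.log(executionTimes("HOURLY", 2, "2024-09-02T13:26:13.642Z", "2024-09-02T19:26:13.642Z"));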

header_config

Defines how the header row is determined

type

Specifies whether nuvo's header detection is applied or if the set row_index is used to determine the header row:

  • SMART: nuvo's header detection is used to define the header row
  • STATIC: The row at the specified row_index is used as the header row

row_index

The index of the row that should be used as the header row if type is set to STATIC

developer_mode

Defines if the pipeline is executed in developer mode (true) or not (false). Use the developer mode to test pipelines in your testing environment. Pipeline executions in developer mode are free of charge. Deactivate it for production use. Please note that pipelines executed in developer mode will only output 100 rows

active

Indicates whether the pipeline is set to active (true) or inactive (false) after creation. When a pipeline is active, it can be executed either by triggering the execution manually or based on the set schedule. An inactive pipeline cannot be executed in any way

Payload

{
  "name": "string",
  "configuration": {
    "input_connectors": [
      "string"
    ],
    "output_connectors": [
      "string"
    ],
    "mapping_config": {
      "mode": "DEFAULT",
      "mappings": [
        {
          "source_columns": [
            "string"
          ],
          "target_column": "string",
          "transformations": [
            {
              "name": "string",
              "type": "HYPER_FORMULA",
              "function": "string"
            }
          ]
        }
      ]
    },
    "tdm": "string",
    "error_config": {
      "error_threshold": 0
    },
    "schedule_config": {
      "frequency": "HOURLY",
      "interval": 0,
      "starts_on": "2024-09-02T13:26:13.642Z",
      "ends_on": "2024-09-02T13:26:13.642Z"
    },
    "header_config": {
      "type": "SMART",
      "row_index": 0
    },
    "developer_mode": true
  },
  "active": true
}
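For illustration, an update request using this payload could look like the following sketch with fetch in TypeScript. The HTTPS scheme and the Authorization header are assumptions; use the authentication mechanism configured for your account.

// Minimal sketch of an update call. The authentication header is an assumption;
// adjust it to your actual credentials setup.
const BASE_URL = "https://api-gateway.getnuvo.com/dp/api/v1";

async function updatePipeline(pipelineId: string, payload: unknown, apiKey: string) {
  const response = await fetch(`${BASE_URL}/pipeline/${pipelineId}`, {
    method: "PUT",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // assumed auth header
    },
    body: JSON.stringify(payload),
  });
  if (!response.ok) {
    throw new Error(`Update failed with status ${response.status}`);
  }
  return response.json(); // resolves to the response object documented below
}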

Response

Attributes

id

The ID of the pipeline

name

The name of the pipeline

active

Indicates whether the pipeline is set to active (true) or inactive (false) after creation. When a pipeline is active, it can be executed either by triggering the execution manually or based on the set schedule. An inactive pipeline cannot be executed in any way

draft

Shows if the pipeline is in draft (true) or not (false). A pipeline in draft cannot be executed in any way

configuration

Defines the specific setup of your pipeline

input_connectors

The list of all input connectors used for this pipeline. Currently, we only support one input connector per pipeline. Find out more about connectors here

output_connectors

The list of all output connectors used for this pipeline. Currently, we only support one output connector per pipeline. Find out more about connectors here

mapping_config

Defines how the input columns are mapped to the target data model columns and how their values are transformed to meet the requirements of the target data model

mode

Defines whether nuvo AI is used to map input columns that haven’t been mapped yet to the output columns during future executions:

  • DEFAULT: nuvo AI is applied to unmapped input columns
  • EXACT: Only already mapped columns are used

mappings

The list of all target data model columns with their mapped input columns and applied transformations

source_columns

The columns from the input data mapped to the target_column

target_column

An output column from the given target data model

transformations

The transformations applied to map the input columns to the output column in the correct format

name

The name of the applied transformation

type

The type of transformation applied:

  • HYPER_FORMULA

function

The code or formula of the transformation, provided as a string

tdm

The ID of the set target data model

error_config

Defines how the pipeline should handle errors that might occur during pipeline execution

error_threshold

A number between 0 and 100, representing the allowed percentage of erroneous cells during a pipeline execution. For example, if it is set to 10, it means that pipeline executions with less than 10% erroneous cells will be considered successful and will not fail

schedule_config

Defines when the pipeline is executed for the first and last time, as well as the interval at which it is executed

frequency

Sets how often the pipeline is executed. It works in combination with interval. For example, if frequency is set to HOURLY and interval is set to 2, the pipeline is executed every 2 hours:

  • HOURLY
  • DAILY
  • WEEKLY
  • MONTHLY

interval

Sets the interval based on the frequency at which the pipeline is executed. For example, if interval is set to 2 and frequency is set to HOURLY, the pipeline is executed every 2 hours. The next execution cannot be scheduled further into the future than 1 year from the set start date and time

starts_on

The date and time when the pipeline is first executed, provided as a timestamp in UTC (e.g. 2024-09-02T13:26:13.642Z). The date and time cannot be in the past

ends_on

The date and time when the pipeline is last executed, provided as a timestamp in UTC (e.g. 2024-09-02T13:26:13.642Z). This date and time cannot be earlier than the start date and time

header_config

Defines how the header row is determined

type

Specifies whether nuvo's header detection is applied or if the set row_index is used to determine the header row:

  • SMART: nuvo's header detection is used to define the header row
  • STATIC: The row at the specified row_index is used as the header row

row_index

The index of the row that should be used as the header row if type is set to STATIC

developer_mode

Defines if the pipeline is executed in developer mode (true) or not (false). Use the developer mode to test pipelines in your testing environment. Pipeline executions in developer mode are free of charge. Deactivate it for production use. Please note that pipelines executed in developer mode will only output 100 rows

created_at

The date and time when the pipeline was first created

created_by

Information about who created the pipeline

id

The ID of the user or sub-organization who created the pipeline

name

The name of the user or sub-organization who created the pipeline

identifier

The identifier of the user or sub-organization who created the pipeline

type

Defines the type of user who created the pipeline:

  • USER: A user of your organization
  • SUB_ORG: A sub-organization that is part of your organization

updated_at

The date and time when the pipeline was last updated

updated_by

Information about who last updated the pipeline

id

The ID of the user or sub-organization who last updated the pipeline

name

The name of the user or sub-organization who last updated the pipeline

identifier

The identifier of the user or sub-organization who last updated the pipeline

type

Defines the type of user who last updated the pipeline:

  • USER: A user of your organization
  • SUB_ORG: A sub-organization that is part of your organization

Response

{
  "data": {
    "id": "string",
    "name": "string",
    "active": true,
    "draft": true,
    "configuration": {
      "input_connectors": [
        "string"
      ],
      "output_connectors": [
        "string"
      ],
      "mapping_config": {
        "mode": "DEFAULT",
        "mappings": [
          {
            "source_columns": [
              "string"
            ],
            "target_column": "string",
            "transformations": [
              {
                "name": "string",
                "type": "HYPER_FORMULA",
                "function": "string"
              }
            ]
          }
        ]
      },
      "tdm": "string",
      "error_config": {
        "error_threshold": 0
      },
      "schedule_config": {
        "frequency": "HOURLY",
        "interval": 0,
        "starts_on": "2024-08-28T15:18:27.477Z",
        "ends_on": "2024-08-28T15:18:27.477Z"
      },
      "header_config": {
        "type": "SMART",
        "row_index": 0
      },
      "configuration_type": "PIPELINE",
      "developer_mode": true
    },
    "created_at": "2022-03-07 12:48:28.653",
    "created_by": {
      "id": "string",
      "name": "string",
      "identifier": "string",
      "type": "USER"
    },
    "updated_at": "2022-03-07 12:48:28.653",
    "updated_by": {
      "id": "string",
      "name": "string",
      "identifier": "string",
      "type": "USER"
    }
  }
}

Read (by ID)

Endpoint

GET /pipeline/{id}
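A read call follows the same pattern as the update sketch above; again, the HTTPS scheme and the Authorization header are assumptions.

// Fetch a single pipeline by ID and inspect its status flags (illustrative only).
async function getPipeline(pipelineId: string, apiKey: string) {
  const url = `https://api-gateway.getnuvo.com/dp/api/v1/pipeline/${pipelineId}`;
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${apiKey}` }, // assumed auth header
  });
  const { data } = await response.json();
  console.log(data.active, data.draft); // an inactive or draft pipeline cannot be executed
  return data;
}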

Response

Attributes

id

The ID of the pipeline

name

The name of the pipeline

active

Indicates whether the pipeline is set to active (true) or inactive (false) after creation. When a pipeline is active, it can be executed either by triggering the execution manually or based on the set schedule. An inactive pipeline cannot be executed in any way

draft

Shows if the pipeline is in draft (true) or not (false). A pipeline in draft cannot be executed in any way.

configuration

Defines the specific setup of your pipeline

input_connectors

The list of all input connectors used for this pipeline. Currently, we only support one input connector per pipeline. Find out more about connectors here

output_connectors

The list of all output connectors used for this pipeline. Currently, we only support one output connector per pipeline. Find out more about connectors here

mapping_config

Defines how the input columns are mapped to the target data model columns and how their values are transformed to meet the requirements of the target data model

mode

Defines whether nuvo AI is used to map input columns that haven’t been mapped yet to the output columns during future executions:

  • DEFAULT: nuvo AI is applied to unmapped input columns
  • EXACT: Only already mapped columns are used

mappings

The list of all target data model columns with their mapped input columns and applied transformations

source_columns

The columns from the input data mapped to the target_column

target_column

An output column from the given target data model

transformations

The transformations applied to map the input columns to the output column in the correct format

name

The name of the applied transformation

type

The type of transformation applied:

  • HYPER_FORMULA

function

The code or formula of the transformation, provided as a string

tdm

The ID of the set target data model

error_config

Defines how the pipeline should handle errors that might occur during pipeline execution

error_threshold

A number between 0 and 100, representing the allowed percentage of erroneous cells during a pipeline execution. For example, if it is set to 10, it means that pipeline executions with less than 10% erroneous cells will be considered successful and will not fail

schedule_config

Defines when the pipeline is executed for the first and last time, as well as the interval at which it is executed

frequency

Sets how often the pipeline is executed. It works in combination with interval. For example, if frequency is set to HOURLY and interval is set to 2, the pipeline is executed every 2 hours:

  • HOURLY
  • DAILY
  • WEEKLY
  • MONTHLY

interval

Sets the interval based on the frequency at which the pipeline is executed. For example, if interval is set to 2 and frequency is set to HOURLY, the pipeline is executed every 2 hours. The next execution cannot be scheduled further into the future than 1 year from the set start date and time

starts_on

The date and time when the pipeline is first executed, provided as a timestamp in UTC (e.g. 2024-09-02T13:26:13.642Z). The date and time cannot be in the past

ends_on

The date and time when the pipeline is last executed, provided as a timestamp in UTC (e.g. 2024-09-02T13:26:13.642Z). This date and time cannot be earlier than the start date and time

header_config

Defines how the header row is determined

type

Specifies whether nuvo's header detection is applied or if the set row_index is used to determine the header row:

  • SMART: nuvo's header detection is used to define the header row
  • STATIC: The row at the specified row_index is used as the header row

row_index

The index of the row that should be used as the header row if type is set to STATIC

developer_mode

Defines if the pipeline is executed in developer mode (true) or not (false). Use the developer mode to test pipelines in your testing environment. Pipeline executions in developer mode are free of charge. Deactivate it for production use. Please note that pipelines executed in developer mode will only output 100 rows.

created_at

The date and time when the pipeline was first created

created_by

Information about who created the pipeline

id

The ID of the user or sub-organization who created the pipeline

name

The name of the user or sub-organization who created the pipeline

identifier

The identifier of the user or sub-organization who created the pipeline

type

Defines the type of user who created the pipeline:

  • USER: A user of your organization
  • SUB_ORG: A sub-organization that is part of your organization

updated_at

The date and time when the pipeline was last updated

updated_by

Information about who last updated the pipeline

id

The ID of the user or sub-organization who last updated the pipeline

name

The name of the user or sub-organization who last updated the pipeline

identifier

The identifier of the user or sub-organization who last updated the pipeline

type

Defines the type of user who last updated the pipeline:

  • USER: A user of your organization
  • SUB_ORG: A sub-organization that is part of your organization

Response

{
  "data": {
    "id": "string",
    "name": "string",
    "active": true,
    "draft": false,
    "configuration": {
      "developer_mode": true,
      "input_connectors": [
        "string"
      ],
      "output_connectors": [
        "string"
      ],
      "tdm": "string",
      "header_config": {
        "type": "SMART",
        "row_index": 0
      },
      "mapping_config": {
        "mode": "DEFAULT",
        "mappings": [
          {
            "source_columns": [
              "string"
            ],
            "target_column": "string",
            "transformations": [
              {
                "name": "string",
                "type": "HYPER_FORMULA",
                "function": "string"
              }
            ]
          }
        ]
      },
      "error_config": {
        "error_threshold": 0
      }
    },
    "created_at": "2022-03-07 12:48:28.653",
    "created_by": {
      "id": "string",
      "name": "string",
      "identifier": "string",
      "type": "USER"
    },
    "updated_at": "2022-03-07 12:48:28.653",
    "updated_by": {
      "id": "string",
      "name": "string",
      "identifier": "string",
      "type": "USER"
    }
  }
}

Read (all)

To further refine the response, you can use query parameters such as sort, filters, pagination, and options. Find a more detailed explanation here.

Endpoint

GET /pipeline/
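For illustration, a list request with pagination could look like the sketch below. The query parameter names offset and limit mirror the pagination attributes documented here, but the exact parameters accepted by the API are an assumption, as are the HTTPS scheme and the Authorization header.

// List pipelines with pagination parameters in the query string (illustrative only).
async function listPipelines(apiKey: string, offset = 0, limit = 20) {
  const query = new URLSearchParams({ offset: String(offset), limit: String(limit) });
  const response = await fetch(`https://api-gateway.getnuvo.com/dp/api/v1/pipeline/?${query}`, {
    headers: { Authorization: `Bearer ${apiKey}` }, // assumed auth header
  });
  const { data, pagination } = await response.json();
  console.log(`Fetched ${data.length} of ${pagination.total} pipelines`);
  return data;
}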

Response

Attributes

id

The ID of the pipeline

name

The name of the pipeline

active

Indicates whether the pipeline is set to active (true) or inactive (false) after creation. When a pipeline is active, it can be executed either by triggering the execution manually or based on the set schedule. An inactive pipeline cannot be executed in any way

draft

Shows if the pipeline is in draft (true) or not (false). A pipeline in draft cannot be executed in any way

created_at

The date and time when the pipeline was first created

created_by

Information about who created the pipeline

id

The ID of the user or sub-organization who created the pipeline

name

The name of the user or sub-organization who created the pipeline

identifier

The identifier of the user or sub-organization who created the pipeline

type

Defines the type of user who created the pipeline:

  • USER: A user of your organization
  • SUB_ORG: A sub-organization that is part of your organization

updated_at

The date and time when the pipeline was last updated

updated_by

Information about who last updated the pipeline

id

The ID of the user or sub-organization who last updated the pipeline

name

The name of the user or sub-organization who last updated the pipeline

identifier

The identifier of the user or sub-organization who last updated the pipeline

type

Defines the type of user who last updated the pipeline:

  • USER: A user of your organization
  • SUB_ORG: A sub-organization that is part of your organization

pagination

An object containing metadata about the result

total

The number of entries in the data array

offset

The offset set in the request parameters

limit

The limit set in the request parameters

Response

{
  "data": [
    {
      "id": "string",
      "name": "test",
      "active": true,
      "draft": false,
      "created_at": "2022-03-07 12:48:28.653",
      "created_by": {
        "id": "string",
        "name": "string",
        "identifier": "string",
        "type": "USER"
      },
      "updated_at": "2022-03-07 12:48:28.653",
      "updated_by": {
        "id": "string",
        "name": "string",
        "identifier": "string",
        "type": "USER"
      }
    }
  ],
  "pagination": {
    "total": 0,
    "offset": 0,
    "limit": 0
  }
}

Delete

Endpoint

DELETE /pipeline/{id}

Response

Attributes

message

Message confirming the deletion of the pipeline or providing an error message

Response

{
  "data": {
    "message": "string"
  }
}