Connector API
A connector defines either the input source that imported data comes from, or the target that mapped and transformed data is sent to.
Currently, we support HTTP(S), AWS S3, (S)FTP, and e-mail connectors.
You can send the following file types:
- XLS(X)
- CSV
- TSV
- XML
- JSON
Use this base URL and append the corresponding endpoint for each operation:
Base URL
Create
Endpoint
Payload
Attributes
name
The name of the connector
type
Defines whether the connector is the source of the data to be imported or the target where the data should be sent:
INPUT
: Input connector (source of the data to be imported)
OUTPUT
: Output connector (target where the data should be sent)
node_type
Defines the type of connector:
HTTP
: Receives or sends data from/to a specific URL (e.g., a web server or REST API)
S3
: Receives or sends data from/to an AWS S3 bucket
FTP
: Transfers files directly over the network
EMAIL
: Receives files from an e-mail account
configuration
Defines the specific setup of your connector based on the type and node_type
E-mail connectors
recipients
The list of e-mail addresses to which the mapped and transformed data should be sent
filename
The name of the file, without the file extension, that will be sent to the recipient(s), e.g. "my_output"
extension
Allowed file types are:
- XLSX
- XML
- CSV
(S)FTP connectors
host
The host address, e.g. "sftp.domain.com"
port
The port number
protocol
Defines the type of server that you’re using:
FTP
SFTP
username
The (S)FTP username to log into the server
password
The (S)FTP password to log into the server. For SFTP connectors you can also use an SSH private key instead of a password in secret_key
secret_key
The SFTP SSH private key, which can be used as an alternative to a password
directory_path
The path to the directory where files are stored on the (S)FTP server
filename
The name of the file, without the file extension, that nuvo will send, e.g., "my_output"
Allowed file types are:
- XLSX
- XML
- CSV
- JSON
HTTP(S) connectors
url
The endpoint that is called to receive data or where processed data is sent. For event-based input connectors, it’s the endpoint from which data is sent
method
REST API method, depending on the type of HTTP connector you’re creating. Currently, you can choose the following options:
GET
: The pipeline requests data from the endpoint specified in url via a GET request
POST
: For output connectors, the pipeline sends data to the specified endpoint via a POST request
headers
The list of key-value pairs that define the headers for the request sent to receive data from the specified endpoint
authentication
Defines your refresh token endpoint to obtain a new access token after the previous one has expired. This mechanism is more secure and allows you to re-authenticate every time you need to obtain a new access token
headers
The list of key-value pairs used for requests to the defined refresh endpoint
refresh_url
The endpoint that is called to receive the authentication token
AWS S3 connectors
account_id
The AWS account ID
access_key
The AWS secret access key
bucket_name
The name of the S3 bucket
region
The name of the region where the bucket is hosted
directory_path
The path in the S3 bucket where the input file is stored or where the output file should be stored
file_key
The name of the output file, without the file extension, that is stored in the S3 bucket, e.g. "my_output"
Allowed file types are:
- XLSX
- XML
- CSV
- JSON
For AWS S3 and (S)FTP connectors
advanced_config
Add tags to specifically include or exclude files, and define how to manage processed input files after the pipeline has run
after_process_option
Define what to do with the processed input files after each pipeline run:
DELETE
: Delete the processed input file
RENAME
: Replace the current file name with the name defined in new_name
UNCHANGE
: Don’t do anything with the processed input file
inclusion_tag
The list of tags (as strings). Only files containing at least one inclusion tag will be processed. If no tags are defined, all files except those excluded by exclusion tags will be processed
exclusion_tag
The list of tags (as strings). Files containing any of the specified exclusion tags will be skipped and not processed. If no exclusion tags are defined and no inclusion tags are specified, all files will be processed
new_name
Define how processed input files should be renamed after the pipeline has run
trigger_type
Defines whether a pipeline is triggered by sending data to a specified endpoint, or whether the pipeline runs on a manual trigger or a specified time interval:
TIME_BASE
: Data is requested or sent from/to the connector during every pipeline execution
EVENT_BASE
: Data is sent to a defined URL or e-mail address, which triggers a new pipeline execution (Note: E-mail input connectors are always event-based)
permissions
Defines whether the connector should be available to all your sub-organizations or only for internal use
level
PUBLIC
: The connector can also be used by sub-organizations
PRIVATE
: The connector can only be used by users within your organization
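For illustration, the payload attributes above could be combined like this. This is a minimal sketch in Python: all values are placeholders, not real settings, and the exact nesting of configuration and permissions is my reading of the attribute list:

```python
# Hypothetical Create payload for an SFTP input connector, assembled
# from the attributes documented above. All values are placeholders.
payload = {
    "name": "orders-input",
    "type": "INPUT",              # source of the data to be imported
    "node_type": "FTP",           # (S)FTP connector
    "configuration": {
        "host": "sftp.example.com",
        "port": 22,
        "protocol": "SFTP",
        "username": "pipeline-user",
        "password": "********",   # or use "secret_key" with an SSH private key
        "directory_path": "/incoming/orders",
        "filename": "my_output",
    },
    "trigger_type": "TIME_BASE",  # data is pulled on every pipeline execution
    "permissions": {"level": "PRIVATE"},
}

# Sanity-check the enumerated values against the documented options.
assert payload["type"] in ("INPUT", "OUTPUT")
assert payload["node_type"] in ("HTTP", "S3", "FTP", "EMAIL")
```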
Payload
Response
Attributes
id
The ID of the connector
name
The name of the connector
type
Defines whether the connector is the source of the data to be imported or the target where the data should be sent:
INPUT
: Input connector (source of the data to be imported)
OUTPUT
: Output connector (target where the data should be sent)
node_type
Defines the type of connector:
HTTP
: Receives or sends data from/to a specific URL (e.g., a web server or REST API)
S3
: Receives or sends data from/to an AWS S3 bucket
FTP
: Transfers files directly over the network
EMAIL
: Receives files from an e-mail account
configuration
Defines the specific setup of your connector based on the type and node_type
E-mail connectors
recipients
The list of e-mail addresses to which the mapped and transformed data should be sent
filename
The name of the file, without the file extension, that will be sent to the recipient(s), e.g. "my_output"
extension
Allowed file types are:
- XLSX
- XML
- CSV
(S)FTP connectors
host
The host address, e.g., "sftp.domain.com"
port
The port number
protocol
Defines the type of server that you’re using:
FTP
SFTP
username
The (S)FTP username to log into the server
password
The (S)FTP password to log into the server. For SFTP connectors you can also use an SSH private key instead of a password in secret_key
secret_key
The SFTP SSH private key, which can be used as an alternative to a password
directory_path
The path to the directory where files are stored on the (S)FTP server
filename
The name of the file, without the file extension, that nuvo will send, e.g., "my_output"
Allowed file types are:
- XLSX
- XML
- CSV
- JSON
HTTP(S) connectors
url
The endpoint that is called to receive data or where processed data is sent. For event-based input connectors, it’s the endpoint from which data is sent
method
REST API method, depending on the type of HTTP connector you’re creating. Currently, you can choose the following options:
GET
: The pipeline requests data from the endpoint specified in url via a GET request
POST
: For output connectors, the pipeline sends data to the specified endpoint via a POST request
headers
The list of key-value pairs that define the headers for the request sent to receive data from the specified endpoint
authentication
Defines your refresh token endpoint to obtain a new access token after the previous one has expired. This mechanism is more secure and allows you to re-authenticate every time you need to obtain a new access token
headers
The list of key-value pairs used for requests to the defined refresh endpoint
refresh_url
The endpoint that is called to receive the authentication token
AWS S3 connectors
account_id
The AWS account ID
access_key
The AWS secret access key
bucket_name
The name of the S3 bucket
region
The name of the region where the bucket is hosted
directory_path
The path in the S3 bucket where the input file is stored or where the output file should be stored
file_key
The name of the output file, without the file extension, that is stored in the S3 bucket, e.g., "my_output"
Allowed file types are:
- XLSX
- XML
- CSV
- JSON
For AWS S3 and (S)FTP connectors
advanced_config
Add tags to specifically include or exclude files, and define how to manage processed input files after the pipeline has run
after_process_option
Define what to do with the processed input files after each pipeline run:
DELETE
: Delete the processed input file
RENAME
: Replace the current file name with the name defined in new_name
UNCHANGE
: Don’t do anything with the processed input file
inclusion_tag
The list of tags (as strings). Only files containing at least one inclusion tag will be processed. If no tags are defined, all files except those excluded by exclusion tags will be processed.
exclusion_tag
The list of tags (as strings). Files containing any of the specified exclusion tags will be skipped and not processed. If no exclusion tags are defined and no inclusion tags are specified, all files will be processed.
new_name
Define how processed input files should be renamed after the pipeline has run
trigger_type
Defines whether a pipeline is triggered by sending data to a specified point or if the pipeline runs on a manual trigger or a specified time interval:
TIME_BASE
: Data is requested or sent from/to the connector during every pipeline execution
EVENT_BASE
: Data is sent to a defined URL or e-mail address, which triggers a new pipeline execution (Note: E-mail input connectors are always event-based)
permissions
Defines whether the connector should be available to all your sub-organizations or only for internal use
level
PUBLIC
: The connector can also be used by sub-organizations
PRIVATE
: The connector can only be used by users within your organization
created_at
The date and time when the connector was first created
created_by
Information about who created the connector
id
The ID of the user or sub-organization who created the connector
name
The name of the user or sub-organization who created the connector
identifier
The identifier of the user or sub-organization who created the connector
type
Defines the type of user who created the connector:
USER
: A user of your organization
SUB_ORG
: A sub-organization that is part of your organization
updated_at
The date and time when the connector was last updated
updated_by
Information about who last updated the connector
id
The ID of the user or sub-organization who last updated the connector
name
The name of the user or sub-organization who last updated the connector
identifier
The identifier of the user or sub-organization who last updated the connector
type
Defines the type of user who last updated the connector:
USER
: A user of your organization
SUB_ORG
: A sub-organization that is part of your organization
Response
Update
Endpoint
Payload
Attributes
name
The name of the connector
type
Defines whether the connector is the source of the data to be imported or the target where the data should be sent:
INPUT
: Input connector (source of the data to be imported)
OUTPUT
: Output connector (target where the data should be sent)
node_type
Defines the type of connector:
HTTP
: Receives or sends data from/to a specific URL (e.g., a web server or REST API)
S3
: Receives or sends data from/to an AWS S3 bucket
FTP
: Transfers files directly over the network
EMAIL
: Receives files from an e-mail account
configuration
Defines the specific setup of your connector based on the type and node_type
E-mail connectors
recipients
The list of e-mail addresses to which the mapped and transformed data should be sent
filename
The name of the file, without the file extension, that will be sent to the recipient(s), e.g., "my_output"
extension
Allowed file types are:
- XLSX
- XML
- CSV
(S)FTP connectors
host
The host address, e.g., "sftp.domain.com"
port
The port number
protocol
Defines the type of server that you’re using:
FTP
SFTP
username
The (S)FTP username to log into the server
password
The (S)FTP password to log into the server. For SFTP connectors you can also use an SSH private key instead of a password in secret_key
secret_key
The SFTP SSH private key, which can be used as an alternative to a password
directory_path
The path to the directory where files are stored on the (S)FTP server
filename
The name of the file, without the file extension, that nuvo will send, e.g., "my_output"
Allowed file types are:
- XLSX
- XML
- CSV
- JSON
HTTP(S) connectors
url
The endpoint that is called to receive data or where processed data is sent. For event-based input connectors, it’s the endpoint from which data is sent
method
REST API method, depending on the type of HTTP connector you’re creating. Currently, you can choose the following options:
GET
: The pipeline requests data from the endpoint specified in url via a GET request
POST
: For output connectors, the pipeline sends data to the specified endpoint via a POST request
headers
The list of key-value pairs that define the headers for the request sent to receive data from the specified endpoint
authentication
Defines your refresh token endpoint to obtain a new access token after the previous one has expired. This mechanism is more secure and allows you to re-authenticate every time you need to obtain a new access token
headers
The list of key-value pairs used for requests to the defined refresh endpoint
refresh_url
The endpoint that is called to receive the authentication token
AWS S3 connectors
account_id
The AWS account ID
access_key
The AWS secret access key
bucket_name
The name of the S3 bucket
region
The name of the region where the bucket is hosted
directory_path
The path in the S3 bucket where the input file is stored or where the output file should be stored
file_key
The name of the output file, without the file extension, that is stored in the S3 bucket, e.g., "my_output"
Allowed file types are:
- XLSX
- XML
- CSV
- JSON
For AWS S3 and (S)FTP connectors
advanced_config
Add tags to specifically include or exclude files, and define how to manage processed input files after the pipeline has run
after_process_option
Define what to do with the processed input files after each pipeline run:
DELETE
: Delete the processed input file
RENAME
: Replace the current file name with the name defined in new_name
UNCHANGE
: Don’t do anything with the processed input file
inclusion_tag
The list of tags (as strings). Only files containing at least one inclusion tag will be processed. If no tags are defined, all files except those excluded by exclusion tags will be processed
exclusion_tag
The list of tags (as strings). Files containing any of the specified exclusion tags will be skipped and not processed. If no exclusion tags are defined and no inclusion tags are specified, all files will be processed
new_name
Define how processed input files should be renamed after the pipeline has run
trigger_type
Defines whether a pipeline is triggered by sending data to a specified point or if the pipeline runs on a manual trigger or a specified time interval:
TIME_BASE
: Data is requested or sent from/to the connector during every pipeline execution
EVENT_BASE
: Data is sent to a defined URL or e-mail address, which triggers a new pipeline execution (Note: E-mail input connectors are always event-based)
permissions
Defines whether the connector should be available to all your sub-organizations or only for internal use
level
PUBLIC
: The connector can also be used by sub-organizations
PRIVATE
: The connector can only be used by users within your organization
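The inclusion_tag and exclusion_tag rules in advanced_config can be sketched as a small predicate. This is my reading of the documented behavior, not official code, and matching tags as substrings of the file name is an assumption:

```python
def should_process(filename, inclusion_tags, exclusion_tags):
    """Decide whether a file is picked up, per the documented tag rules."""
    # Files containing any exclusion tag are skipped.
    if any(tag in filename for tag in exclusion_tags):
        return False
    # If inclusion tags are defined, a file must contain at least one of them.
    if inclusion_tags:
        return any(tag in filename for tag in inclusion_tags)
    # No inclusion tags: every file that is not excluded is processed.
    return True
```

For example, `should_process("orders_daily.csv", ["daily"], ["test"])` returns True, while `should_process("orders_test.csv", ["daily"], ["test"])` returns False because exclusion takes priority.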
Payload
Response
Attributes
id
The ID of the connector
name
The name of the connector
type
Defines whether the connector is the source of the data to be imported or the target where the data should be sent:
INPUT
: Input connector (source of the data to be imported)
OUTPUT
: Output connector (target where the data should be sent)
node_type
Defines the type of connector:
HTTP
: Receives or sends data from/to a specific URL (e.g., a web server or REST API)
S3
: Receives or sends data from/to an AWS S3 bucket
FTP
: Transfers files directly over the network
EMAIL
: Receives files from an e-mail account
configuration
Defines the specific setup of your connector based on the type and node_type
E-mail connectors
recipients
The list of e-mail addresses to which the mapped and transformed data should be sent
filename
The name of the file, without the file extension, that will be sent to the recipient(s), e.g. "my_output"
extension
Allowed file types are:
- XLSX
- XML
- CSV
(S)FTP connectors
host
The host address, e.g., "sftp.domain.com"
port
The port number
protocol
Defines the type of server that you’re using:
FTP
SFTP
username
The (S)FTP username to log into the server
password
The (S)FTP password to log into the server. For SFTP connectors you can also use an SSH private key instead of a password in secret_key
secret_key
The SFTP SSH private key, which can be used as an alternative to a password
directory_path
The path to the directory where files are stored on the (S)FTP server
filename
The name of the file, without the file extension, that nuvo will send, e.g., "my_output"
Allowed file types are:
- XLSX
- XML
- CSV
- JSON
HTTP(S) connectors
url
The endpoint that is called to receive data or where processed data is sent. For event-based input connectors, it’s the endpoint from which data is sent
method
REST API method, depending on the type of HTTP connector you’re creating. Currently, you can choose the following options:
GET
: The pipeline requests data from the endpoint specified in url via a GET request
POST
: For output connectors, the pipeline sends data to the specified endpoint via a POST request
headers
The list of key-value pairs that define the headers for the request sent to receive data from the specified endpoint
authentication
Defines your refresh token endpoint to obtain a new access token after the previous one has expired. This mechanism is more secure and allows you to re-authenticate every time you need to obtain a new access token
headers
The list of key-value pairs used for requests to the defined refresh endpoint
refresh_url
The endpoint that is called to receive the authentication token
AWS S3 connectors
account_id
The AWS account ID
access_key
The AWS secret access key
bucket_name
The name of the S3 bucket
region
The name of the region where the bucket is hosted
directory_path
The path in the S3 bucket where the input file is stored or where the output file should be stored
file_key
The name of the output file, without the file extension, that is stored in the S3 bucket, e.g. "my_output"
Allowed file types are:
- XLSX
- XML
- CSV
- JSON
For AWS S3 and (S)FTP connectors
advanced_config
Add tags to specifically include or exclude files, and define how to manage processed input files after the pipeline has run
after_process_option
Define what to do with the processed input files after each pipeline run:
DELETE
: Delete the processed input file
RENAME
: Replace the current file name with the name defined in new_name
UNCHANGE
: Don’t do anything with the processed input file
inclusion_tag
The list of tags (as strings). Only files containing at least one inclusion tag will be processed. If no tags are defined, all files except those excluded by exclusion tags will be processed
exclusion_tag
The list of tags (as strings). Files containing any of the specified exclusion tags will be skipped and not processed. If no exclusion tags are defined and no inclusion tags are specified, all files will be processed
new_name
Define how processed input files should be renamed after the pipeline has run
trigger_type
Defines whether a pipeline is triggered by sending data to a specified point or if the pipeline runs on a manual trigger or a specified time interval:
TIME_BASE
: Data is requested or sent from/to the connector during every pipeline execution
EVENT_BASE
: Data is sent to a defined URL or e-mail address, which triggers a new pipeline execution (Note: E-mail input connectors are always event-based)
permissions
Defines whether the connector should be available to all your sub-organizations or only for internal use
level
PUBLIC
: The connector can also be used by sub-organizations
PRIVATE
: The connector can only be used by users within your organization
created_at
The date and time when the connector was first created
created_by
Information about who created the connector
id
The ID of the user or sub-organization who created the connector
name
The name of the user or sub-organization who created the connector
identifier
The identifier of the user or sub-organization who created the connector
type
Defines the type of user who created the connector:
USER
: A user of your organization
SUB_ORG
: A sub-organization that is part of your organization
updated_at
The date and time when the connector was last updated
updated_by
Information about who last updated the connector
id
The ID of the user or sub-organization who last updated the connector
name
The name of the user or sub-organization who last updated the connector
identifier
The identifier of the user or sub-organization who last updated the connector
type
Defines the type of user who last updated the connector:
USER
: A user of your organization
SUB_ORG
: A sub-organization that is part of your organization
Response
Read (by ID)
Endpoint
Response
Attributes
id
The connector’s ID, which is, for example, set in pipeline templates to ensure that all pipelines created with this template use the same input/output connector
name
The name of the connector
type
Defines whether the connector is the source of the data to be imported or the target where the data should be sent:
INPUT
: Input connector (source of the data to be imported)
OUTPUT
: Output connector (target where the data should be sent)
node_type
Defines the type of connector:
HTTP
: Receives or sends data from/to a specific URL (e.g., a web server or REST API)
S3
: Receives or sends data from/to an AWS S3 bucket
FTP
: Transfers files directly over the network
EMAIL
: Receives files from an e-mail account
configuration
Defines the specific setup of your connector based on the type and node_type
E-mail connectors
recipients
The list of e-mail addresses to which the mapped and transformed data should be sent
filename
The name of the file, without the file extension, that will be sent to the recipient(s), e.g. "my_output"
extension
Allowed file types are:
- XLSX
- XML
- CSV
(S)FTP connectors
host
The host address, e.g., "sftp.domain.com"
port
The port number
protocol
Defines the type of server that you’re using:
FTP
SFTP
username
The (S)FTP username to log into the server
password
The (S)FTP password to log into the server. For SFTP connectors you can also use an SSH private key instead of a password in secret_key
secret_key
The SFTP SSH private key, which can be used as an alternative to a password
directory_path
The path to the directory where files are stored on the (S)FTP server
filename
The name of the file, without the file extension, that nuvo will send, e.g., "my_output"
Allowed file types are:
- XLSX
- XML
- CSV
- JSON
HTTP(S) connectors
url
The endpoint that is called to receive data or where processed data is sent. For event-based input connectors, it’s the endpoint from which data is sent
method
The REST API method, depending on the type of HTTP connector you’re creating. Currently, you can choose the following options:
GET
: The pipeline requests data from the endpoint specified in url via a GET request
POST
: For output connectors, the pipeline sends data to the specified endpoint via a POST request
headers
The list of key-value pairs that define the headers for the request sent to receive data from the specified endpoint
authentication
Defines your refresh token endpoint to obtain a new access token after the previous one has expired. This mechanism is more secure and allows you to re-authenticate every time you need to obtain a new access token
headers
The list of key-value pairs used for requests to the defined refresh endpoint
refresh_url
The endpoint that is called to receive the authentication token
AWS S3 connectors
account_id
The AWS account ID
access_key
The AWS secret access key
bucket_name
The name of the S3 bucket
region
The name of the region where the bucket is hosted
directory_path
The path in the S3 bucket where the input file is stored or where the output file should be stored
file_key
The name of the output file, without the file extension, that is stored in the S3 bucket, e.g. "my_output"
Allowed file types are:
- XLSX
- XML
- CSV
- JSON
For AWS S3 and (S)FTP connectors
advanced_config
Add tags to specifically include or exclude files, and define how to manage processed input files after the pipeline has run
after_process_option
Define what to do with the processed input files after each pipeline run:
DELETE
: Delete the processed input file
RENAME
: Replace the current file name with the name defined in new_name
UNCHANGE
: Don’t do anything with the processed input file
inclusion_tag
The list of tags (as strings). Only files containing at least one inclusion tag will be processed. If no tags are defined, all files except those excluded by exclusion tags will be processed
exclusion_tag
The list of tags (as strings). Files containing any of the specified exclusion tags will be skipped and not processed. If no exclusion tags are defined and no inclusion tags are specified, all files will be processed
new_name
Define how processed input files should be renamed after the pipeline has run
trigger_type
Defines whether a pipeline is triggered by sending data to a specified point or if the pipeline runs on a manual trigger or a specified time interval:
TIME_BASE
: Data is requested or sent from/to the connector during every pipeline execution
EVENT_BASE
: Data is sent to a defined URL or e-mail address, which triggers a new pipeline execution (Note: E-mail input connectors are always event-based)
permissions
Defines whether the connector should be available to all your sub-organizations or only for internal use
level
PUBLIC
: The connector can also be used by sub-organizations
PRIVATE
: The connector can only be used by users within your organization
created_at
The date and time when the connector was first created
created_by
Information about who created the connector
id
The ID of the user or sub-organization who created the connector
name
The name of the user or sub-organization who created the connector
identifier
The identifier of the user or sub-organization who created the connector
type
Defines the type of user who created the connector:
USER
: A user of your organization
SUB_ORG
: A sub-organization that is part of your organization
updated_at
The date and time when the connector was last updated
updated_by
Information about who last updated the connector
id
The ID of the user or sub-organization who last updated the connector
name
The name of the user or sub-organization who last updated the connector
identifier
The identifier of the user or sub-organization who last updated the connector
type
Defines the type of user who last updated the connector:
USER
: A user of your organization
SUB_ORG
: A sub-organization that is part of your organization
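Purely for illustration, a Read (by ID) request URL can be composed from your base URL and the connector's ID. Both the base URL and the /connectors path below are placeholders, not the documented endpoint:

```python
BASE_URL = "https://api.example.com/v1"   # placeholder; use the documented base URL

def read_connector_url(connector_id):
    # Hypothetical path; substitute the actual Read (by ID) endpoint.
    return f"{BASE_URL}/connectors/{connector_id}"
```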
Response
Read (all)
To further refine the response, you can use query parameters such as sort, filters, pagination, and options. A more detailed explanation is available here.
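For example, sorting and pagination parameters could be attached to the request URL like this. The /connectors path and base URL are placeholders; the parameter names follow the list above:

```python
from urllib.parse import urlencode

BASE_URL = "https://api.example.com/v1"   # placeholder base URL

# Hypothetical Read (all) request with sorting and pagination parameters.
params = {"sort": "created_at", "offset": 0, "limit": 50}
url = f"{BASE_URL}/connectors?{urlencode(params)}"
```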
Endpoint
Response
Attributes
id
The ID of the connector
name
The name of the connector
type
Defines whether the connector is the source of the data to be imported or the target where the data should be sent:
INPUT
: Input connector (source of the data to be imported)
OUTPUT
: Output connector (target where the data should be sent)
node_type
Defines the type of connector:
HTTP
: Receives or sends data from/to a specific URL (e.g., a web server or REST API)
S3
: Receives or sends data from/to an AWS S3 bucket
FTP
: Transfers files directly over the network
EMAIL
: Receives files from an e-mail account
trigger_type
Defines whether a pipeline is triggered by sending data to a specified point or if the pipeline runs on a manual trigger or a specified time interval:
TIME_BASE
: Data is requested or sent from/to the connector during every pipeline execution
EVENT_BASE
: Data is sent to a defined URL or e-mail address, which triggers a new pipeline execution (Note: E-mail input connectors are always event-based)
permissions
Defines whether the connector should be available to all your sub-organizations or only for internal use
level
PUBLIC
: The connector can also be used by sub-organizations
PRIVATE
: The connector can only be used by users within your organization
created_at
The date and time when the connector was first created
created_by
Information about who created the connector
id
The ID of the user or sub-organization who created the connector
name
The name of the user or sub-organization who created the connector
identifier
The identifier of the user or sub-organization who created the connector
type
Defines the type of user who created the connector:
USER
: A user of your organization
SUB_ORG
: A sub-organization that is part of your organization
updated_at
The date and time when the connector was last updated
updated_by
Information about who last updated the connector
id
The ID of the user or sub-organization who last updated the connector
name
The name of the user or sub-organization who last updated the connector
identifier
The identifier of the user or sub-organization who last updated the connector
type
Defines the type of user who last updated the connector:
USER
: A user of your organization
SUB_ORG
: A sub-organization that is part of your organization
pagination
An object containing metadata about the result
total
The number of entries in the data array
offset
The offset set in the request parameters
limit
The limit set in the request parameters
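The total, offset, and limit fields support straightforward page walking. A sketch, in which fetch_page stands in for the actual HTTP call (not shown here):

```python
def fetch_all(fetch_page, limit=100):
    """Collect every entry by walking offset/limit pages.

    `fetch_page(offset, limit)` is a stand-in for the actual Read (all)
    request; it must return a dict with "data" and "pagination" keys,
    as documented above.
    """
    results, offset = [], 0
    while True:
        page = fetch_page(offset, limit)
        results.extend(page["data"])
        offset += limit
        if offset >= page["pagination"]["total"]:
            break
    return results
```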
Response
Delete
Endpoint
Attributes
message
Message confirming the deletion of the connector or providing an error message
Response