Connectors
Connectors are the beginning and the end of every nuvo pipeline. Input connectors define where the input data comes from (the source), and output connectors define where the mapped and transformed data goes (the destination).
Currently, we offer these types of connectors:
- HTTP(S)
- (S)FTP
- AWS S3
- Azure Blob Storage
- E-Mail
- more coming soon
Each of these connector types can be set as an input and/or an output connector.
You can create, read (all or by ID), update, and delete connectors using:
- the connector API
- the connector embeddables (coming soon)
- the nuvo user platform (coming soon)
When sending XLSX or CSV files, only the first sheet will be processed. When sending XML files with multiple tables, only the table with the most columns and the lowest index will be processed.
HTTP(S)
An HTTP(S) endpoint is a specific URL on a web server where a client can access a particular resource or send data.
Input Connector
The HTTP input connector allows your data pipeline to receive data from an HTTP source, such as a web server or a REST API. This connector can accept HTTP requests with various HTTP methods such as GET and POST.
For the HTTP input connector, you will need to specify the URL endpoint from which data will be received, as well as any required authentication or authorization mechanisms. You will also need to specify the SSL/TLS configuration, such as the certificate and private key, to establish the secure connection.
HTTP(S) input connectors can be:
- Time-based: The connector is requested on the specified schedule or when the pipeline is executed manually ("nuvo requests you")
- Event-based: You send data via the connector to nuvo, which triggers the execution of the pipeline ("Request to nuvo")
Event-based input connectors should not be used for multiple pipelines across different sub-organizations.
The supported file types:
- JSON (max file size: 4.7 MB; max body size: 6 MB)
- XML (max file size: 4.7 MB; max body size: 6 MB)
- XLS(X) (max size: 4.7 MB)
- CSV (max size: 4.7 MB)
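For an event-based input connector, triggering the pipeline amounts to an authenticated POST with the file contents as the request body. The sketch below builds such a request with Python's standard library and enforces the 6 MB body limit before sending; the endpoint URL, bearer-token auth scheme, and payload shape are placeholders, not nuvo's documented contract.

```python
import json
import urllib.request

# Hypothetical values: replace with your connector's endpoint and credentials.
CONNECTOR_URL = "https://api.example.com/pipeline/input"  # placeholder endpoint
API_KEY = "your-api-key"                                  # placeholder credential

MAX_BODY_BYTES = 6 * 1024 * 1024  # 6 MB request-body limit for JSON/XML


def build_input_request(records: list) -> urllib.request.Request:
    """Build (but do not send) a POST request for an event-based input connector."""
    body = json.dumps(records).encode("utf-8")
    if len(body) > MAX_BODY_BYTES:
        raise ValueError(f"Request body is {len(body)} bytes; limit is {MAX_BODY_BYTES}")
    return urllib.request.Request(
        CONNECTOR_URL,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",  # auth scheme depends on your setup
        },
    )


request = build_input_request([{"id": 1, "name": "example"}])
# Actually sending it would trigger the pipeline execution:
# with urllib.request.urlopen(request) as response: ...
```

Checking the payload size client-side avoids a rejected request when the body exceeds the connector's limit.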
Example
- Learn more about creating HTTP(S) input connectors here
Output Connector
The HTTP output connector allows your data pipeline to send data to an HTTP destination, such as a web server or a REST API. This connector can send HTTP requests with HTTP methods such as POST.
For the HTTP(S) output connector, you will need to specify the URL endpoint to which data will be sent, as well as any required authentication or authorization mechanisms. You will also need to specify the SSL/TLS configuration, such as the certificate and private key, to establish a secure connection.
For all HTTP(S) output connectors, the output data will be sent as flat JSON.
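"Flat" here means no nested objects: nested keys collapse into single top-level keys. nuvo's exact flattening rules are not spelled out in this section, so the sketch below only illustrates the general idea, assuming a dot-separated key convention.

```python
def flatten(obj: dict, parent_key: str = "", sep: str = ".") -> dict:
    """Flatten nested dicts into a single level, joining keys with `sep`.
    Illustrative only: nuvo's actual key convention may differ."""
    flat = {}
    for key, value in obj.items():
        full_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # Recurse into nested objects, carrying the joined key prefix.
            flat.update(flatten(value, full_key, sep))
        else:
            flat[full_key] = value
    return flat


flatten({"order": {"id": 7, "customer": {"name": "Ada"}}, "total": 9.5})
# → {"order.id": 7, "order.customer.name": "Ada", "total": 9.5}
```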
Example
- Learn more about creating HTTP(S) output connectors here
(S)FTP
FTP (File Transfer Protocol) and SFTP (Secure File Transfer Protocol) are both protocols for transferring files over a network.
The supported file types for input/output connectors are:
- XLS(X)
- CSV
- JSON
- XML
- TSV
Input Connector
When you create an (S)FTP input connector, you can define the directory path from which the input files should be read. You can also use inclusion and exclusion tags to define the criteria for which files are or aren’t sent to the pipeline from the set directory path.
Additionally, you can define what should happen to the files once they are sent to the pipeline. For example, it’s possible to rename or delete the files. Of course, you can also choose to leave the files unchanged.
When a pipeline with an (S)FTP input connector is triggered and multiple files at the directory path meet the criteria, nuvo will create an execution for each file.
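The inclusion/exclusion behavior can be pictured as a two-stage filter over the directory listing. The sketch below assumes glob-style patterns; nuvo's actual tag syntax may differ.

```python
from fnmatch import fnmatch


def select_files(filenames, include, exclude):
    """Return the files that match at least one include pattern and no
    exclude pattern. Glob-style matching is an assumption for illustration."""
    return [
        name
        for name in filenames
        if any(fnmatch(name, pat) for pat in include)
        and not any(fnmatch(name, pat) for pat in exclude)
    ]


files = ["orders_2024.csv", "orders_2024.tmp", "inventory.csv", "readme.txt"]
selected = select_files(files, include=["*.csv"], exclude=["inventory*"])
# → ["orders_2024.csv"]
```

Each file that survives the filter would then get its own pipeline execution, as described above.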
(S)FTP input connectors can be:
- Time-based: The endpoint is requested on the specified schedule or when the pipeline is executed manually
Example
- Learn more about creating (S)FTP input connectors here
Output Connector
When you create an (S)FTP output connector, you can define the directory path where the file should be saved. Additionally, you can define the file name, which can be dynamic (e.g., include the timestamp or the original input file name), as well as the file type.
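A dynamic file name typically combines the original input file's name with a timestamp. The naming pattern below is illustrative only; you configure the equivalent placeholders in the connector itself rather than writing code.

```python
from datetime import datetime, timezone
from pathlib import PurePosixPath


def build_output_name(original_name: str, directory: str = "/exports") -> str:
    """Compose a timestamped output path from the input file's name.
    Directory, extension, and timestamp format are placeholder choices."""
    stem = PurePosixPath(original_name).stem          # "orders.xlsx" -> "orders"
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return str(PurePosixPath(directory) / f"{stem}_{stamp}.csv")


build_output_name("orders.xlsx")
# e.g. "/exports/orders_20240101T120000Z.csv"
```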
SFTP connectors provide the same functionality as FTP connectors but add secure, encrypted communication between the data pipeline and the destination. This is achieved by running the file transfer over an encrypted SSH connection, which protects the transmitted data.
Example
- Learn more about creating (S)FTP output connectors here
AWS S3
Amazon S3 (Simple Storage Service) is a cloud-based object storage service provided by Amazon Web Services (AWS). The AWS S3 input connector allows your data pipeline to read data from an Amazon S3 bucket. The AWS S3 output connector allows your data pipeline to write data to an Amazon S3 bucket.
The supported file types are:
- XLS(X)
- CSV
- JSON
- XML
- TSV
Input Connector
When you create an S3 input connector, you can define the S3 bucket from which the input files should be read. You can also use inclusion and exclusion tags to define the criteria for which files are or aren’t sent to the pipeline from the designated S3 bucket.
Additionally, you can define what should happen to the files once they are sent to the pipeline. For example, it’s possible to rename or delete the files. Of course, you can also choose to leave the files unchanged.
When a pipeline with an AWS S3 input connector is triggered and multiple files at the designated S3 bucket meet the criteria, nuvo will create an execution for each file.
S3 input connectors can be:
- Time-based: The endpoint is requested on the specified schedule or when the pipeline is executed manually
Example
- Learn more about creating S3 input connectors here
Output Connector
When you create an S3 output connector, you can define the directory path where the file should be saved. Additionally, you can define the file name, which can be dynamic (e.g., include the timestamp or the original input file name), as well as the file type.
Example
- Learn more about creating S3 output connectors here
Azure Blob Storage
Azure Blob Storage is Microsoft’s cloud-based object storage solution, optimized for storing massive amounts of unstructured data, such as text or binary data, that doesn’t adhere to a specific data model.
The Azure Blob Storage input connector allows your data pipeline to read data from an Azure Blob Storage container, while the Azure Blob Storage output connector enables your data pipeline to write data to it.
Currently, we support the following two authentication clients:
- Shared access signature (SAS)
- Shared key authorization
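With SAS authentication, the client presents a pre-generated, time-limited token as a query string appended to the blob URL, whereas shared key authorization signs each request with the storage account key. The sketch below shows the shape of a SAS-authenticated blob URL; the account, container, and token values are fake placeholders, and a real token must be generated in the Azure portal or via the Storage API.

```python
def blob_sas_url(account: str, container: str, blob: str, sas_token: str) -> str:
    """Compose the URL a SAS-authenticated client would request.
    All inputs here are illustrative placeholders."""
    base = f"https://{account}.blob.core.windows.net/{container}/{blob}"
    # Tolerate tokens copied with or without their leading "?".
    return f"{base}?{sas_token.lstrip('?')}"


url = blob_sas_url(
    account="myaccount",                       # placeholder storage account
    container="input-files",                   # placeholder container
    blob="orders.csv",
    sas_token="sv=2022-11-02&ss=b&sig=FAKE",   # placeholder token
)
# "https://myaccount.blob.core.windows.net/input-files/orders.csv?sv=2022-11-02&ss=b&sig=FAKE"
```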
The supported file types are:
- XLS(X)
- CSV
- JSON
- XML
- TSV
Input Connector
When you create an Azure Blob Storage input connector, you can define the Azure Blob Storage container from which the input files should be read. You can also use inclusion and exclusion tags to define the criteria for which files are or aren’t sent to the pipeline from the designated Azure Blob Storage container.
Additionally, you can define what should happen to the files once they are sent to the pipeline. For example, it’s possible to rename or delete the files. Of course, you can also choose to leave the files unchanged.
When a pipeline with an Azure Blob Storage input connector is triggered and multiple files at the designated Azure Blob Storage container meet the criteria, nuvo will create an execution for each file.
Azure Blob Storage input connectors can be:
- Time-based: The endpoint is requested on the specified schedule or when the pipeline is executed manually
Example
- Learn more about creating Azure Blob Storage input connectors here
Output Connector
When you create an Azure Blob Storage output connector, you can define the directory path where the file should be saved. Additionally, you can define the file name, which can be dynamic (e.g., include the timestamp or the original input file name), as well as the file type.
Example
- Learn more about creating Azure Blob Storage output connectors here
E-Mail
The email connector is used to receive input data from, or send output data to, specified email addresses.
The supported file types are (max file size: 25 MB each):
- XLS(X)
- CSV
- JSON
- XML
- TSV
Input Connector
E-Mail input connectors can be:
- Event-based: The data is sent to nuvo via email, which triggers the execution of the pipeline
We’ll provide you with an email address to send your input data to. Always send only one file; if you send multiple files, we’ll use the first file for the pipeline execution.
Event-based input connectors should not be used for multiple pipelines across different sub-organizations.
When creating an input connector or pipeline, we’ll show you a test email address for the initial setup process. Once the connector or pipeline is created, we’ll provide you with another email address for executing the pipeline.
Example
- Learn more about creating email input connectors here
Output Connector
When you create an email output connector, you can select one or multiple email addresses to send the output data to. Additionally, you can define the file name, which can be dynamic (e.g., include the timestamp or the original input file name), as well as the file type.
Example
- Learn more about creating email output connectors here